National Cyber Warfare Foundation (NCWF)

TDL 018 | How To Think, Not What To Think | Mitch Prior


2026-03-20 21:26:04
milo
Blue Team (CND)


The Human Algorithm in a Zero-Trust World


In the latest episode of The Defender’s Log, host David Redekop sits down with cybersecurity expert Mitch Prior to discuss the intersection of high-tech security and human intuition. From their first meeting in 2018—the early days of Zero Trust—the duo explores why the “why” behind technical thinking matters more than the letters after your name.


Experience Over Certifications


While certifications serve as a “compression algorithm” for recruiters, Mitch argues they often lack fidelity. In a field that moves this fast, the ability to think independently is the ultimate security asset.


“If you outsource your cognitive abilities, they will atrophy.” — Mitch Prior


Privacy as a Human Right


Mitch’s “default-deny” philosophy isn’t just for firewalls; it’s a lifestyle. He advocates for:



  • Sovereign Data Custody: Keeping data local whenever possible.

  • AI Wrangling: Using AI to extract signal from noise (like identifying true security threats in camera feeds) without sacrificing privacy to the cloud.

  • Verification: Transitioning from blind trust to a “verify everything” posture.


The Future: Human-Centric Tech


As AI begins to “audit” the world’s code at lightning speed, David and Mitch conclude that our best defense is a return to fundamentals. Whether it’s protecting a network or raising the next generation, success lies in human connection and understanding the mechanics of how we think.


Full episode of The Defender’s Log here:


How To Think, Not What To Think | Mitch Prior | The Defender's Log


TL;DR



  • Trust vs. Verification: David and Mitch prioritize “Zero Trust.” Don’t assume a device is safe; verify every action it takes on your network.

  • Default-Deny: A critical security posture. Block all traffic by default and only allow what is proven necessary. This stops “phone home” malware from IoT devices.

  • Experience Over Labels: Certifications are just “compression algorithms.” Real value lies in how a person thinks and their hands-on experience, not just their credentials.

  • Local Data Custody: Keep data on-premise whenever possible. Relying on “someone else’s computer” (the cloud) leads to complacency and loss of control.

  • AI as a Tool: Use AI to filter “signal from noise”—like using local computer vision to distinguish a real security threat from a blowing leaf.

  • Cognitive Atrophy: Don’t let tools replace thinking. Use technology to handle basic tasks so your brain can solve more complex problems.

  • The Human Connection: In a world of screens, the most valuable “algorithm” is face-to-face time and teaching the next generation how to think independently.




Links


View it on YouTube: https://www.youtube.com/watch?v=KaLttwpEXpo


Listen to the episode on your favourite podcast platform:


Apple

https://podcasts.apple.com/us/podcast/how-to-think-not-what-to-think-raising-resilient-minds/id1829031081?i=1000756344390


Spotify

https://open.spotify.com/show/2w0boRxseLgZxqVew8n2g5


Amazon Music

https://music.amazon.ca/podcasts/d7aa9a19-d092-42a6-9fe9-9e8d81f68d30/episodes/24b2c1f9-430c-477f-8735-a47a71d3330f/the-defender%E2%80%99s-log-podcast-how-to-think-not-what-to-think-raising-resilient-minds-in-a-tech-driven-world


ADAMnetworks

https://adamnet.works




The Defender’s Log Full Transcript - Episode 018


[00:00] Announcer: Deep in the digital shadows, where threats hide behind any random byte, a fearless crew of cybersecurity warriors guards the line between chaos and order. Their epic battles are rarely spoken of, until today. Welcome to the Defender’s Log, where we crack open the secrets of top security chiefs, CISOs, and architects who faced the abyss and won. Here’s your host, David Redekop.


[00:25] David Redekop: Welcome back to the Defender’s Log. This is episode number 18, and I actually invited a friend of mine, Mitch Prior. Good to have you, Mitch.


[00:35] Mitch Prior: Thanks for having me, David.


[00:37] David Redekop: I remember meeting you the first time somewhere around 2018, if I’m not mistaken. It was a hotel room in Red Deer where we did a conference presentation, which back then was the early days of the notion of Zero Trust. Do you remember that?


[00:53] Mitch Prior: I do. Yeah. It was the first time I realized there’s actually something we can do about the problems we have. We can get our devices and people protected and it was an exciting day for me.


[01:05] David Redekop: You know how it is. I’m sure you have this with a few people too. When you reach my age, Mitch, you don’t have a large number of friends. You have a very large number of people that you’ve known and a smaller number of friends, and those you can typically pinpoint to the point of meeting within a matter of minutes. The choice of words, the tone, the eye contact, all collectively said, “Hey, this is my kind of guy.” That’s what I experienced with you anyway.


[01:37] Mitch Prior: Yeah, it happens all the time. And I see some similarities. It’s a heuristic after all. We can’t make all these decisions so quickly. So we need to rely on some sort of algorithm and usually I think humans are pretty good at doing that.


[01:52] David Redekop: Right. Someone told me that when we look outward, about a billion bits of information actually enter our eyes, of which we cognitively process maybe ten; everything else is heuristics in the brain that we have very little control over, because it’s the collective experience of life automatically going down neural pathways to say: this guy’s good, this guy’s safe, you’re gonna have enough to learn from each other. And my enjoyment around finding like-minded people, in cyber especially, has been finding people where there is a mutual exchange of value, meaning that this is a person I can learn from. And nowadays it’s often people that are younger who have learned faster, because there’s been more information available to them that they haven’t had to research, that they’ve just been able to absorb. And then there’s value that I can exchange. So if there is a mutual increase of value with each other, then that’s an automatic match. Anyway, that’s what I remember feeling that day in Red Deer, Alberta.


[02:54] Mitch Prior: Absolutely. Yeah. Yeah, I remember that too. And it’s rare that you can connect or, for me, it has been rare that I can connect on that level with many people. And when it is, when it does happen, it’s a special thing. And you gotta treat those people right and hold 'em close and look for those connections wherever you can because you don’t know how you’re gonna be able to help each other out, down the road.


[03:19] David Redekop: Right, right. And our industry is probably represented in some ways like others are, where the amount of certifications and the number of letters after your name means something to some people, but to others, it’s about experience. And the reason those certificates mean something is because they’re a quick way to judge whether you’re a person who is going to follow a path through to completion. There is some value in that. I have never dissed anybody with certifications, but by the same token, when you have an opportunity to get to know someone, to see why they think the way they do instead of what they think, then the why behind the thinking actually gives you insight into: How adaptive is this person to cyber? How do they think about the security posture? And so I have never once asked you about any certifications, and I don’t care if you have any, but do you have any?


[04:19] Mitch Prior: No, no, I don’t. And I think they serve a role, but I don’t think we should rely on them entirely. It’s like a compression algorithm: it’s gonna show us real quick if this is worth further consideration, but it’s not always super accurate and we’re gonna lose some fidelity. If you’re looking specifically for credentials and certifications, you’re gonna skip somebody like myself who could probably give you some value, but if you look further into their experience and how they think, then you can probably find out a whole lot more about them than any certification could tell you. But it’s hard to dedicate all that time for a lot of people looking for help. So it’s a double-edged sword, I’d say.


[05:01] David Redekop: Yeah. And I wonder if that is one of the reasons why so many folks who years ago gravitated towards technology went down that path: you could actually access information, more information today than yesterday, and there would be more information available tomorrow. It’s probably the field that made education free first, before any others. Many fields ended up becoming more academically formalized. I remember that with accounting: back in the 70s and 80s, I believe you could get an accounting certification without a university degree, because it was extremely structured and you could put people through an exam, similar to what you could do as an actuarial scientist, where you could write the actuarial exams and end up getting your designation through there. Bit by bit they became standardized academically. But interestingly enough, IT and security in many ways remained an area that we could continue to grow in and actually demonstrate that, look, we can deliver a security posture by just doing all the fundamentals right. And it’s because we had time on our side, we had information on our side. So I’m not sure where this is all going industry-wise, but I am definitely focused on our children, to make sure that they focus on how to think instead of what to think. And the more independent they are, you know, the more likely they are to land on the “how to think right” side of the equation. How are you thinking about those things?


[06:46] Mitch Prior: I’m thinking that’s number one. And I think it’s always been important, now probably more than ever, because of these tools that we have. At the dawn of the pocket calculator, there was quite the scare that everyone was gonna forget how to do math and we would get stupid because we’d just focus on using these calculators. Nobody’s gonna know how to do multiplication or division. And that is one way you could go with it, if you overuse it and abuse it, and that’s what you use when you’re studying, that’s what you use on your tests. You can get through and you can convince somebody that you know what you’re doing, but when the rubber hits the road, you’re not gonna be as well acquainted with the concepts, because they haven’t gotten into your brain and they haven’t stuck. So now we have that to a much greater degree across all domains and all different ways of thinking. It’s not just a calculator solving your multiplication tables anymore; it’s something you can go to with pretty much any question and get an average answer. And if we’re not careful with that, if we outsource our cognitive abilities, then we’ll atrophy them. Using tools like the calculator, when you don’t have to do really long multiplication by hand on a long sheet of paper, as long as you know how to do that multiplication, you can do it if you need to. Now you can spend your time and your brain cycles doing more complex algebra, more complex mathematical functions, when you have that solid base. But you need that solid base, and if you cheat that solid base, you’re only cheating yourself.


[08:24] David Redekop: Yeah. You know, just this morning on my way to drive one of our sons to school, he was asking, “So what are the important things?” And I said, “You know, in the olden days we talked about the three Rs that are the fundamentals: Reading, Writing, and Arithmetic.” And of course he rolled his eyes at this. It’s been so long since we used that in homeschool environments that he’s like, “That’s only one R.” But the fundamentals are still appropriate: as long as we stick with the basic building blocks, we’re less likely to be corrupted and less likely to be corruptible. So that’s one of the things I appreciate about you, that if I make a claim to you about a security issue that we found, you will validate it. You will accept it on the surface as, “Okay, here’s an idea that David brought to me. Now let me validate it,” because if you don’t validate, then there is an element of trust that wasn’t verified, right? And the beautiful thing about technology is that we should always be able to verify it. So I did appreciate that about you. But before we go any further, how did you get your start? How did you even get interested? Like, what age did you develop an interest in tech in general?


[09:44] Mitch Prior: I was pretty young. I was pulling apart the family computer by probably 11 or 12 or something, and almost got it back together. And that was a really good learning experience. Then mom brought in her friend who knew a lot more, he was in the industry, and that was a huge learning experience. And the next time I pulled it apart, I didn’t need him. It kind of took off from there. I had a lot of access to computers and tech, and I got old things. I found it fascinating, and I just wanted to understand how it all worked. And there was just a lot to know. Then it started crossing over into a lot of other interests as I grew up and developed; they all kind of came back to the computers. So I got into electronics and microcontrollers, and those are just computers, and now I can use electricity to impact the real world and read sensors and move an actuator, whatever it is. All of my interests really revolved around computers. As far as the start that got me on the path where I am now, it would’ve been about the end of grade 10 when I realized that I knew quite a bit and people were willing to pay me for that. So I skipped as many classes as I could and I worked. I was managing some of the school’s IT infrastructure in high school, and I just kept working and working, and it went from there.


[11:15] David Redekop: Very good, and never looked back? Always enjoyed it?


[11:15] Mitch Prior: Yeah, there’s been frustrating moments for sure. It’s tough to always find the balance, and when you’re in it all the time… I definitely had periods of my life when I’m like, “Do I really want to keep working on a computer every day?” And there’s been periods of my life where I’ve taken a step back. But there’s so much good that can come of it. And I think, once you find that balance—and that’s a personal decision for everybody—it’s a very fulfilling way to spend your time and help others.


[11:53] David Redekop: So at some point, or at which point I should say, did it become obvious to you that the use of technology, connectivity, social media, was really going to be pushing against our very strong need for privacy and security? At what point did that become obvious to you?


[12:12] Mitch Prior: I think it was pretty early on. I mean, I remember doing social studies reports on some of the unclassified documents that came out about the scope of privacy and surveillance that was happening at that time. And it really moved me because I think it’s a basic human right. I mean, it makes sense to me. It’s logical to me that we should be able to be in control of our own privacy, our own data. Because if we’re not, then who is?


[12:47] David Redekop: Yeah, there is no question. In fact, when I first started chatting with you about joining me as a guest here on the Defender’s Log, I’m pretty sure that your very first thought was, “No, I don’t wanna be on video. No, I don’t want my voice to be a matter of public record,” which I don’t blame you for. But how did you change your mind about joining me after all?


[13:13] Mitch Prior: Yeah. This would be my first time doing anything like this, and I have been very hesitant to do things like this in the past. I’ve had a lot of ideas of things that I could do that could bring value to people, help people, and share the information and the hard lessons that I’ve fought to learn. Then, looking back and seeing that so much of what I know and so much of what I’m capable of is directly because of people like you, who are willing to have a public presence and teach others and make that content available, it really became a balance. Obviously there’s pros and cons to both, and when I really analyzed it, I realized that balance is probably different than I used to think of it, because the value that I might be able to provide, it’s better to bring it out. The risk-reward ratio is at the point where I’m sure I can provide more benefit than risk.


[14:13] David Redekop: Yeah. Especially in the world of AI, where so many white-collar careers are in jeopardy and people are concerned about whether or not they’re gonna have a job tomorrow, the folks that will absolutely be in need are what our Francois—you know him—calls the “AI wranglers”: those that understand how to put the tool to use. Everyone has some degree of concern about runaway AI power, whether or not it’s constrained. And so the idea of deploying that kind of capability, but a hundred percent within your domain, a hundred percent within your control, seems to be right up your alley. So I would definitely endorse you for that kind of a role. I don’t even know if that’s what you’re offering to do for your customer base. But if someone were to hire you… In fact, I’m right now thinking about somebody that was just asking me this morning, “David, would you do this for me? I need this and this and this, but it needs to be contained.” I’m like, “Yeah, you need an AI wrangler, and no, I’m not available.” So I’m gonna put you in touch with her. One of the things that I’ve observed about you is that, unlike some of us who were very excited when the cloud first became a reality—like “shift everything to the cloud”—and then we started to realize, wait a minute, that was not necessary. It’s somebody else’s computer.


[15:36] Mitch Prior: Yes, someone else’s computer with more expensive electricity, but it’s more reliable. So, you know, there’s a cost-benefit analysis there.


[15:47] David Redekop: But in the end, when it comes to data, sovereign data custody is an important value to our organization, and I would say to many people’s personal lives. You never went that route, as far as I can tell; you stayed on the conservative side of local data custody for everything that could be done locally.


[16:09] Mitch Prior: Wherever I could, yeah. I have a couple cloud WireGuard servers that just kind of connect things together, but other than that, I try and keep everything local. I haven’t had a need to, even when it’s a lot harder. I think it’s based on principle more than anything—sometimes I really should just do it, but I haven’t, and that’s made me actually pretty good at making it work. I’m very aware of the difficulties with that, with all sorts of different software stacks and use cases, and I’m making it work so far; it’s working pretty well. Everything’s always a trade, and whenever you make a decision, it’s not always very obvious what the trade is. While it might make sense in some cases just to spin up that cloud server and put it up there—it’s secure enough, the data’s not that valuable, or “I have nothing to hide,” as the argument often goes—businesses just have to serve the bottom line. And I understand that. But when we get used to that as IT professionals, when we get used to that as individuals, we start to get complacent. And the complacency, I think, is what I’ve been trying to stay away from myself.


[17:35] David Redekop: Yeah, and I know we’ve kind of painted a very broad brush stroke. I know that the world of commerce today wouldn’t operate without the cloud. I’m not burying my head in the sand. It’s just being intelligent about what stays on-prem and what should be on-prem versus what is and should be in the cloud. Being very purposeful about every data point. I know very few people anymore for whom it makes sense to keep a local mail server. It still makes sense for some.


[18:03] Mitch Prior: Mail’s a really tough one.


[18:05] David Redekop: Right? Or your smartphone photos, you know, to not have them synchronized to the cloud. You do run the risk of your phone crashing, the screen smashing, and then not being able to access the storage, and then your photos are worth more in the cloud and surviving than on the device and not surviving. I get it.


[18:24] Mitch Prior: There’s solutions to that, though, and I think a lot of the challenge is that they’re not always really well polished. It’s hard to compete with Apple or Google Drive, where it just happens automatically at the flick of a switch. Sure, we can have WiFi syncing to a NAS somewhere on the network, which I do, but I gotta say it’s not very user friendly. There’s nothing stopping one of the big players, though—you know, Apple could very easily do that, and if they wanted to sell NAS, I’m sure everybody would buy a little hard drive box for their network. There’s nothing technically stopping it; it’s just the momentum of the industry.


[19:07] David Redekop: Mitch, I know that when it comes to your defensive posture relating to network security that you and I saw eye to eye literally from day one. What is your thinking around the need for a default-deny all for networks that you are responsible to manage and maintain?


[19:25] Mitch Prior: Well, it’s a lot of peace of mind. It lets you sleep at night, because it’s impossible to stay on top of all the modern expectations of having easy connections and setting up Internet of Things and little boxes and sensors and everything everywhere. Managing that is a pretty difficult thing. So it’s another balance that in the past has been really hard to strike with default-deny. I’m really glad to have a proper solution to that in the back pocket; without it, everything’s open.


[20:07] David Redekop: We had one of our own guys trying to reproduce a problem that we were seeing in some cases, and he’s like, “I had no idea my light bulb switch was even on my WiFi.” You know, because as you go through your device listings, once something is on the network and having that level of visibility… it never used to be important outside of an enterprise network. Of course, enterprise guys have had this since day one. And then small businesses would just get like a small business consumer-grade router. As soon as you plug something in, it just works. And that was a feature, like, I remember when the Linksys routers first came out—I don’t know, 25, 30 years ago, maybe close to it—there were these tiny little NAT gateways where all of a sudden you could hook up your cable connection. I was… this was right before you were born, Mitch, that we already had this technology. I played with those boxes, but back then there wasn’t much to worry about. They were amazing that this tech worked and you know, bad things happening… those were just stories that never really happened to anybody you knew. Isn’t it amazing how a feature becomes a bug because the feature about something automatically going online actually became a bug that everything’s going online?


[21:30] Mitch Prior: The technology moves so fast in some ways, and then in other ways it’s pretty slow to stabilize until we really realize what we’ve made and how it all fits together and what the implications of it all are.


[21:44] David Redekop: All right. I mean, I absolutely admire the fact that we live today with the resilience of the internet. Some days I am just in shock that it even works. And if I start peeling back the layers, going back to the original ARPANET, the amount of innovation that those guys went through in the early days is phenomenal. The thinking was about: how do we survive disruption? At the time it was really a military effort to make sure that connectivity would remain, because of course, once communication gets severed, you are at a disadvantage from a military point of view. And at the time there was no consideration of how an adversary might ever weaponize this survivability and this resilience. Of course, today that is exactly what they do: they know there’s always another way to attack, another way that may not have been considered. And so it’s phenomenal when you take a look at the latest threats. I was just looking at a researcher’s report this morning about how steganography is being abused, with images that now live on archive.org which have C2 references in them. And no one’s gonna take down archive.org, right? It’s not gonna be a domain that will be seized.
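The steganographic channel David describes can be illustrated with a toy sketch: hiding a short payload in the least-significant bits of raw pixel bytes, where the change is invisible to a viewer. This is a generic LSB demonstration under stated assumptions, not the specific technique from the report he mentions; the `embed` and `extract` helpers, the stand-in pixel buffer, and the `c2.example` string are all hypothetical.

```python
# Toy least-significant-bit (LSB) steganography: a payload rides in the
# lowest bit of each pixel byte, so the image looks unchanged.

def embed(pixels: bytearray, message: bytes) -> bytearray:
    """Hide `message` in the LSBs of a copy of `pixels` (LSB-first per byte)."""
    out = bytearray(pixels)
    bits = [(byte >> i) & 1 for byte in message for i in range(8)]
    for idx, bit in enumerate(bits):
        out[idx] = (out[idx] & 0xFE) | bit  # overwrite only the lowest bit
    return out

def extract(pixels: bytearray, length: int) -> bytes:
    """Recover `length` hidden bytes from the LSBs of `pixels`."""
    bits = [pixels[i] & 1 for i in range(length * 8)]
    msg = bytearray()
    for i in range(0, len(bits), 8):
        byte = 0
        for j in range(8):
            byte |= bits[i + j] << j
        msg.append(byte)
    return bytes(msg)

pixels = bytearray(range(256)) * 4          # stand-in for raw image data
stego = embed(pixels, b"c2.example")        # hypothetical C2 reference
print(extract(stego, 10))                   # b'c2.example'
```

Because each carrier byte changes by at most one, simple visual inspection never reveals the payload, which is why defenders fall back on egress control rather than content inspection.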


[23:09] David Redekop: Yeah. And likewise, we’re seeing threats where markers are put into blockchains of all sorts that are never going to be removable. And I remember when IPFS first came out, the InterPlanetary File System. I have no idea where that’s at today, but the big feature was that nothing could ever be removed. And my very first question was, “So if you have a son or daughter and they’re found in a compromised position and there’s some child sexual abuse material that gets posted there, you mean it’s never removable?” So we have built a very, very precarious thing with this thing called the internet, and we really need to think things through. And the conservative way of thinking about this, in my mind, has always been default-deny, default-deny. A quick detour: when I was young, I still went through this where our people, Mennonites, were not supposed to live in pride, and if you put rubber tires on your tractor, it was a sign of pride. So there was this aspect of default-deny, you know, deny all the new technology. And even though sometimes that felt like an abused mechanism, the overall concept of default-deny until you prove that there’s value is, in many instances (most instances when it comes to technology), a good one. How do you think about those things?


[24:50] Mitch Prior: Yeah, well, it definitely gives you some time to slow down, think about it, and build the full picture. It’s kind of antithetical to technology and the culture, because the rapid pace of advancement is expected, and default-deny has historically been really difficult to practice because, well, we just need this thing to work. We got this new gadget, we have the new sensor or actuator or backup system, cameras, whatever it is, and we just need it online now, and we don’t wanna pay for a bunch of research and configuration to make it happen. So historically it hasn’t been really practical. But we’re at a different point now where we have these tools available to us. The tools have finally caught up. The tools that were thought impossible not that long ago are here, and now we have a fighting chance to keep everything safe.


[25:52] David Redekop: Yeah. My very first step whenever the kids or the family get any new tech: of course we join it to the network, and immediately the first thing we do is watch to see what it’s trying to do, you know? Having that visibility is actually a fun process for them as much as it is for me, because we’re always thinking, “Okay, does it need to phone home? Where does it phone home to? Does it phone home to China? Does it phone home to Amazon? Does it phone home to Google? Who is the cloud provider of this particular vendor, and how good is their security?” There’s all kinds of things you find out with default-deny that actually make it enjoyable, because you’re informing yourself at the same time, and the friction doesn’t need to be a lot. Same thing in a business environment. In fact, we found a hijacked Xerox multifunction unit that a client of ours bought at an auction sale. They got a good deal on it and hooked it up to the ethernet, because of course you’re gonna put a multifunction unit on the ethernet network so that you can print to it and scan from it and so forth. And it was behaving in strange ways. Turns out the firmware on it was laced with malware, and you would never know this unless you put it on a network and observed what it was doing. If it had unrestricted outbound access, who knows what it would’ve done over time. So there are very strange things you uncover in this process. It also means that you are immune to the very rare attacks like we had with SolarWinds/Solorigate, right? Where there were infected devices all over the world that would phone home to an attacker, not often, just once in a while. Well, if that wasn’t a known good-reputation domain name or one that you allowlisted, then you were never at risk. So we just see these stories over and over again, where the decision today ends up benefiting you at some point in the future. And you may not even know when.
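The "watch what it phones home to" workflow above can be sketched as an allowlist check over the DNS queries a new device makes: anything not explicitly permitted is flagged for review rather than silently allowed. The domain names and the `review_queries` helper are hypothetical; a real deployment would pull queries from resolver logs and enforce the deny at the gateway.

```python
# Default-deny egress sketch: flag any queried domain that is not on the
# allowlist (matching the domain itself or any parent zone).

ALLOWLIST = {"time.cloudflare.com", "firmware.example-vendor.com"}  # assumed names

def review_queries(queries: list[str]) -> list[str]:
    """Return the queried domains that are NOT explicitly allowed."""
    flagged = []
    for domain in queries:
        labels = domain.lower().rstrip(".").split(".")
        # build every suffix, e.g. a.b.c -> {a.b.c, b.c, c}
        candidates = {".".join(labels[i:]) for i in range(len(labels))}
        if not (candidates & ALLOWLIST):
            flagged.append(domain)
    return flagged

observed = [
    "time.cloudflare.com",           # allowed: NTP sync
    "telemetry.unknown-cloud.cn",    # triggers a "who is this?" review
    "api.firmware.example-vendor.com",  # allowed via parent zone
]
print(review_queries(observed))  # ['telemetry.unknown-cloud.cn']
```

The point of the exercise is the review step: each flagged name is either consciously added to the allowlist or stays blocked, which is exactly what neutralizes rare call-home beacons.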


[28:06] Mitch Prior: Yes. Well, especially with the current trends in AI that we’re seeing, we need to take this super defensive posture, because it was already impossible to stay up to date with all the latest and greatest vulnerabilities and exploits that were available. Now the pace is rapid because the tool’s really effective. AI is a really effective tool; it’s really good at reading code. We’re finding so much audited code, code that’s everywhere on the internet, in secure places, that was considered safe and relatively bug-free, where the latest and greatest models have poked at it in just the right way and uncovered a whole lot of vulnerabilities. So the reactive stance was never a good one, and it just won’t do anymore.


[29:00] David Redekop: Yeah. I did a HackerOne report submission today, or I should say I started one, and the CVSS score wasn’t high enough for them to accept it. And I thought, “Okay, what do I do?” So I took my entire report in markdown, put it into Gemini, and said, “How can I optimize this?” It was unbelievable what it came back with: “Oh, that’s a very clean script you gave as an example, but here’s why you don’t wanna submit it as is.” I’m just dumbfounded at what we’re able to do today with AI when it’s properly wrangled, when it’s within your own policy. And we have a very strict policy here about which tool we can use for which purpose, depending on the classification of the data that we’re working on and so forth. So Mitch, let’s switch to something that you’ve done and built recently. Tell me about the funnest project you’re good with telling the world about that you’ve done recently.


[30:05] Mitch Prior: Well, I’d say one of the funner ones is a little bit not that exciting to most, but I think it gives a really solid upgrade to a very commonly used product: security cameras. They’re everywhere and they really like to alert you when motion happens and they create massive lists that everybody just ignores of motion events. So I’ve been working pretty hard at finding ways to get AI in on that system of looking through all of those detection events and categorizing them, classifying them, describing them, and then automatically determining if this is a real alert-worthy notification that should be sent out or not to the relevant people. Because in the past—and it kind of parallels with a reactive versus a proactive security stance—even with motion alerting, security systems have generally been reactive. After something happens, we’ll go check the footage, we have the footage, but in many cases, they could be with a little more intelligence more of a proactive, and you can know as soon as something happens. But I found in many of my installations and customer installations that none of these NVRs were really doing a good job at parsing alert-worthy events through lots of false positives, and inevitably they’re ignored unless you’re hiring somebody full-time to watch the video streams. So now with AI as good as it is, we can run all of this on our own local servers. We’re not making any cloud calls whatsoever. Computer vision algorithms will take the first job of picking through the video feed and if it detects something like human or a car, then we’ll put it down to the next stage of the pipeline and once we have a positive alert for one of those detection events, then we will look into it and see “What is this actually? Is this a car that we recognize? How sure are we of that? 
Can we see the license plate to get that really high confidence score, so we don’t have to alert about it?” Or did we see this license plate and we definitely don’t recognize it, in which case it might be a very alertable event? But maybe only during certain times of the day, on certain cameras, at certain thresholds. It’s surprising what the computer vision models can now do on a low-end gaming graphics card, describing what’s happening in a frame. And you can put that through some pretty simple heuristics to further filter out false positives. It’s been a really fun project, and I think it’s a great application of AI, but it’s very tempting to throw AI at every problem, and it sure isn’t good at most of them. It takes a lot of fine-tuning and specificity. The non-deterministic nature is really something to wrestle with, and it kind of feels unnatural to deal with non-determinism. It’s how it works and it’s how it does what it does, but from a technical background, that’s not very common. Everything has been deterministic thus far, and now all of a sudden we have to wrestle with something that will give us two different outputs for the same… Wow.
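The two-stage pipeline Mitch describes, a computer vision detector first and cheap deterministic heuristics second, can be sketched roughly like this. This is a minimal illustration, not his actual system: the `Detection` class, the thresholds, and the camera names are all assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """One event from a hypothetical first-stage computer vision detector."""
    camera: str         # which camera produced the event
    label: str          # detector output, e.g. "person" or "car"
    confidence: float   # detector confidence, 0..1
    hour: int           # local hour of day, 0..23
    plate_known: bool   # second-stage check: did we recognize the license plate?

# Illustrative policy knobs (names and values are assumptions, not real config).
MIN_CONFIDENCE = 0.6
NIGHT_CAMERAS = {"driveway", "back-door"}

def is_night(hour: int) -> bool:
    """Treat 10 pm through 6 am as the hours that warrant extra caution."""
    return hour >= 22 or hour < 6

def is_alertable(d: Detection) -> bool:
    """Second-stage heuristics that filter first-stage detections."""
    if d.confidence < MIN_CONFIDENCE:
        return False              # weak detection: likely a false positive
    if d.label == "person":
        return True               # a person is always worth a notification
    if d.label == "car":
        if d.plate_known:
            return False          # recognized vehicle: suppress the alert
        # unknown car: only alert at night, and only on the watched cameras
        return is_night(d.hour) and d.camera in NIGHT_CAMERAS
    return False                  # everything else stays in the log, unalerted
```

The point of the second stage is exactly what Mitch notes: the heuristics are deterministic and cheap, so they tame the non-deterministic model output before anything reaches a human.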


[33:14] David Redekop: I have another friend that I’m introducing you to now, so that’s two so far; I was not expecting to come into this call finding people to send to you, Mitch, but that’s the case. He runs a company that does time-lapse photography for construction sites, right? Traditionally, the value was that you’d have a record of your build and be able to showcase progress, feature it on a website, do all the modern stuff: “Here’s your five-second video of a project that took five years to build,” or whatever. We’re compressing things to make them palatable on social media and whatnot. Anyway, you’re gonna get another introduction, because I think there’s a huge amount of opportunity in that space. But the whole thought process of automating what AI can automate and producing more signal out of the noise is extremely broad, and that’s something I would encourage anybody who listens: if you know someone who is thinking about what they want to apply meaningful work to, the meaningful work is extracting signal from the noise, because the noise is growing at an exponential rate, but the signal is not growing at the same rate. And so people who can figure that out, or who can find the arbitrage between human emotional reactions and a hundred percent confirmed digital data… I think there’s a whole future there for someone who can understand the aspects of that. And that could be literally in any kind of event monitoring. It could be in the world of finance, in the world of insurance; the traditional industries are all ripe for someone who can understand where the gaps are and where the gaps will remain, because humans are going to remain the most valuable aspect on the planet. There is nothing more valuable than the human heart on the planet, so if AI has unlimited power, then scarcity isn’t going to apply to AI power. The scarcity will be in what humans can do. 
So finding that gap is something that sounds like you’re already doing completely naturally.


[35:36] Mitch Prior: Yeah. It’s a tough one to find, and it’s a tough one to deal with, because as amazing as all these tools are, and as much information as is out there about them, we still really have no idea how they work. And with that limited knowledge, trying to apply them to actually useful, actually reliable production systems is a serious challenge. It’s so impressive in a chat interface, but getting that into a reliable pipeline is a lot of fun. It’s a challenge.


[36:12] David Redekop: Yeah. I’m just thinking… you got me thinking now about all of the aspects of life where we have a delayed emotional reaction. In that delay, if there is a technical acceleration of information before you allow an emotional response to make the wrong decision, that is where the arbitrage opportunity lies.


[36:34] Mitch Prior: Absolutely. Yeah. And it’s quite the parallel because practicing that as an individual, that’s where you gain a lot of wisdom, a lot of capability and agency.


[36:45] David Redekop: Yeah. How do we do that, Mitch? How do we make sure that the next generation has enough agency to be able to think about the thinking? I feel like there’s so much working against the next generation in the absence of parents dedicating one-on-one personal attention. And I’m looking at myself: I was at work too much when my kids were little; I try to make up for it now. You’re not as far along in family building as I am, Mitch, so every moment you get, and I’m sure this is already obvious to you, but that one-on-one time… there’s an amazing amount of cognitive development that happens when a child is present with a mom and/or dad, and I wouldn’t want anybody to miss out on it. So, yeah, it is super, super critical. Maybe we can do something with AI and technology; I’m just spitballing now. But there needs to be a way where gaps in a child’s development are somehow surfaced to the parent’s attention and in the parent’s mind, because that human-to-human connection is so much more important than a human to a screen. We gotta find a way to do that.


[38:00] Mitch Prior: That’s a formidable challenge. And it’s one that’s on my mind a lot with a young family and my presence and time spent in tech, because I often feel an internal conflict in what I’m doing and try to make sure I’m riding the fine line of balancing all these things. There are pros and cons to all of it. I spend time in front of the computer to earn an income for my family, and clearly we need that, but if I’m too focused on that and never spend time with the kids and my wife, what’s it all worth? And then there’s making sure we’re doing the right things and making sure the next generation is prepared for whatever eventualities are coming. I mean, the technical revolution we’ve had in the last 10, 20, 30 years has been challenging enough as far as development and social norms and the effects of social media and the internet on the brain. The technology has changed us physiologically and psychologically, and only now, a generation later, are we starting to realize what those effects are. And it’s not like everyone was completely blind; some people saw them coming. I just hope that enough people become aware of these effects to get ready for the next wave, because the next wave is coming a lot faster and across so many domains that we don’t even know of yet, domains that probably don’t even exist yet. So I don’t know how to answer questions I don’t even have the questions for yet. But I can offer another heuristic: just make sure we know how to think, and make sure we’re spending as much human time with each other, face-to-face and present, as we can.


[40:02] David Redekop: We have a population today that hardly understands how the financial system works, even as it’s in the middle of getting revamped, in the middle of experiencing a massive amount of change. And I’m no expert on this, but I think of Ray Dalio and the way he talks about the different cycles, and how we’re now in stage five of a cycle that regularly repeats throughout human history. He describes himself as a mechanic who is just trying to understand how things are working and what needs to be fixed today for it to work tomorrow. Some days, that’s how I feel about the internet and technology: I’m just a mechanic observing how things are working today and what I need to do to protect our businesses, our families, and those that we love for tomorrow. And you know, we’re all limited in what we’re able to do. You do as much preventative maintenance as you can.


[41:13] Mitch Prior: Yes, exactly. Yeah.


[41:15] David Redekop: Mitch, it’s been a real joy chatting with you. You and I do this on occasion when we’re not on camera. Do you have any remaining ounce of wisdom that you wish to impart to anybody who’s listening or watching?


[41:31] Mitch Prior: Keep your options open.


[41:33] David Redekop: I like that. Thanks a lot; it was good to chat. We’ll see you again, Mitch.


[41:40] Mitch Prior: See you soon. Bye.


[41:43] Announcer: The Defender’s Log requires more than a conversation. It takes action, research, and collective wisdom. If today’s episode resonated with you, we’d love to hear your insights. Join the conversation and help us shape the future. We’ll be back with more stories, strategies, and real-world solutions that are making a difference for everyone. In the meantime, be sure to subscribe, rate, write a review, and share the show with someone you think would benefit from it too. Thanks for listening, and we’ll see you on the next episode.




The post TDL 018 | How To Think, Not What To Think | Mitch Prior appeared first on Security Boulevard.




Source: Security Boulevard
Source Link: https://securityboulevard.com/2026/03/tdl-018-how-to-think-not-what-to-think-mitch-prior/




