Can AI Agents Actually Hack Systems? | Legitimate Cybersecurity Podcast

Written by
Jasson Casey
Published on
April 20, 2026

TL;DR

- Pitch AI agent governance as revenue acceleration, not defense. CISOs unlock budget by unpausing stalled AI projects and speeding R&D, not by asking for more security spend.

- Split every agent workflow into deterministic vs probabilistic nodes. Let the LLM design the flow, then have it extrude deterministic steps as scripts so the same input always produces the same output.

- Use AI on two axes: build what you would have bought, and operate what you could not have staffed. Think tier one and tier two analyst work, incident triage, and custom plugins that replace a purchased tool.

- Prompt injection arrives from many surfaces. It enters through tool results, hook executions, MCP calls, and binaries embedded in skills, not just the user prompt.

- Run Claude Code through a trust enforcement layer like Ceros. Bind the agent to user and device identity, route every tool call through policy, and inject approved skills and MCP configs automatically.

Transcript

So if you've watched the news lately, something you're gonna find out is everyone's freaking out about AI again. And they're freaking out about AI in a lot of different ways, per usual, but now they're mostly concerned about this big scary AI named Mythos that will come for your data in the middle of the night and attack you, and nobody can stand against it. And we're now living in the age of Ultron, depending on who you ask. That's what we're gonna discuss today, and a couple other fun things, here on Legitimate Cybersecurity.

Thanks for joining. As always, my name is Frank Downs, and then with me is Dustin Brewer. We have a very, very rare occasion. We actually were able to make someone enjoy themselves so much, they came back to the show.

Welcome back, Jasson Casey. For those of you who are unfamiliar, a PhD and the current CEO of Beyond Identity with an incredible pedigree and background of all things cyber. It's great to have you. Thank you so much.

No worries. Thanks for having me.

So, Jasson, what we wanna talk about today to start off with is getting your take on the whole Mythos discussion. For listeners or viewers who are not familiar, first of all, thank God that you have not been bombarded. You've not been in the torrent of news, the hell that we've been in.

Anthropic, using Claude, came up with a new version of AI called Mythos that, as they describe it, is able to hack anything. It'll look at all the different code that's available, analyze it, and rapidly come up with several zero days that didn't exist before, thus creating a new capability for cyber attacks. And everyone's freaking out because it's taking a process that would take a human a rather long time and basically condensing it. And the example that they like to use, the one I've read most in the news and that we've seen, is that they put it into basically the equivalent of a box, said, you stay in there, closed all the doors, disconnected it from the network, and it still was able to hack its way out.

And so as a result, everyone is kind of doing the five alarm fire.

Jasson, is this really as scary as everyone says it should be? And what are your thoughts as to the implications of what we're seeing with this?

Oh, let's see. Many layers. Definitely gonna have to go a few layers deep on this one. So let's start with the external cynic.

That's a fun place to start. I'm on board. Let's go.

Let's look at how a lot of these companies are treated, which ultimately has to do with multiples on revenue for market cap, whether it's a public or private company valuation. Over the last twelve months, the stock market, rightly or wrongly, has decided that if you're an AI company, the sky's the limit. Rocket ships to the moon. You're gonna get your forty x plus multiple on revenue. Or heck, you don't even need revenue.

You're pre-revenue. You're a pure play. Right? So, like, we're gonna go full on.

And, unfortunately, that's a deep cut too. That also reminds me of my age.

I love that you brought that up first, because did you see, what is it, Allbirds, that shoe brand? Yeah. They suddenly said, we're done. No more shoes. We do AI now.

Dude, Russ Hanneman is making a lot of money right now as a consultant helping people.

And all I could think of was the terrible dad joke where I'm like, shoes to AI, I guess you could say that's a pivot none of us saw. It's just insane. But, yes, sorry. Continue. Honestly, you know what? Legitimate Cybersecurity is now an AI company.

Investors, there we go. There we go. Tell all your friends.

Get money.

But sorry. Go ahead, Jasson.

Continue. So the markets have decided that if you're an AI company, you're gonna get an incredible multiple. And if you're not, we're gonna take you down a notch.

So all of these traditional tech blue chips... well, blue chip means something, so I probably shouldn't say blue chips, but your tech stalwarts, right? Palo Alto, CrowdStrike, Zscaler, etcetera. They're now faced with: how do I explain to the market that either I'm still relevant and AI doesn't destroy me, or, no, that's too hard.

I am an AI company too. Right? So I start with that, and I realize everyone has this motivation to kind of paint themselves as an AI company. You can kind of see the results.

Right? At RSA, everyone is doing AI security. Everyone is doing shadow AI discovery. Like, what the hell?

What's the difference now?

So that's kind of one pressure on the market. The next pressure is you've got Anthropic. In all of this, I actually just see genius marketing in Anthropic's case. But you've got Anthropic, who's clearly coming from behind, but with strength, trying to get in front of OpenAI, realizing that enterprise is really where they can drive material revenue quickly and almost pull off an Amazon-style play where, yeah, they're gonna spend a **** ton of money, but they're gonna do it in a way that builds really, really sticky revenue with large enterprise, with mid-market companies, and with startups that have a future of being someone big.

And so now they have this new model. Right? And by the way, they're gonna have a new model every six months.

Yeah. If they're doing their job right, this new model is always going to be more capable. This new model is always going to open up an adjacency. Right?

And this new model, we can look at it with two lenses. We can be skeptical, or, at the best case, we can say it's really, really helpful in terms of reverse engineering. It's really, really helpful in terms of vulnerability scanning. It's really, really helpful in terms of code scanning.

Right?

See, Dustin. Sorry. Yes. Go. Go.

We can take the worst case view, which is: it's nineteen ninety-seven.

I forget the exact date. What is it, August twenty-ninth, nineteen ninety-seven? The Terminator canon.

Oh, yeah.

And, and, you know, Skynet is now alive and it's coming for us.

Of course.

Clearly, I don't think that's actually true. I think it's a really, really useful tool.

I think Anthropic is playing a brilliant strategy where they realize that, hey, all these tech blue chips have to rebrand themselves. I could rope their marketing teams and their marketing budgets in behind my effort because I'm actually showing leadership in this right now. And I can make it exclusive, which is gonna make everyone start to feel FOMO, and then just start pumping out news story after news story of how the world's gonna end tomorrow if you don't jump on board.

And like when you dig in, and so here's where I might be a little over my skis. I'm mostly relying on other people.

But a couple of security researchers I follow on Twitter, you know, they looked at a lot of these claims, and on the surface they're like, yeah, it's making discoveries, but these are things that have existed for a while, and they're not exactly exploitable. Or the immediate impact is not as devastating as the headline of the article might make you feel. Right? So on one hand, I think that's a win for marketing.

Right? Like, they're doing their job. They're getting the attention. They're pushing things forward, which, hats off to them.

But is it going to end cybersecurity as we know it tomorrow? Absolutely not. Maybe for the people who aren't paying attention.

I do wanna know, like, I think, you know, another thing to add, and I want your opinion on this as well.

You know, for me, when I read what this thing does, I'm like, oh, great. This is gonna emphasize SAST, static application security testing, a little bit more.

And maybe companies will actually start doing it, or at least doing it correctly. Do you think that these types of marketing campaigns, and I think all three of us are kind of on the same page that it's a marketing campaign, right? So this type of marketing, we've seen it in the past with exploits, right? Meltdown and Spectre, for example, and Heartbleed. They gave them these cool names. They gave them these little marketing logos: Spectre was a little ghost guy and Meltdown was something that was dripping.

And that caught the attention of CISOs and CEOs and everything. They're like, we need to watch out for this kind of stuff. Do you think this is kind of having the same effect now too, where you're gonna have CEOs and CISOs who are actually gonna start caring about SAST a little bit more and maybe start implementing those things?

So kind of going away from "it's gonna take our jobs" to, no, actually, it's gonna create more jobs. Or, if nothing else, maybe not create more jobs, but it's gonna solidify procedures and processes and all that kind of stuff when it comes to doing true security.

That's a hard one. So for a CISO to really change priorities, they need to get money. Right? Okay. And right now, money and budgets are focused on AI productivity pushes.

They're not focused on any sort of rearguard defensive action. And I think that's going to be a tough battle for them, even with everything that's actually going on. I think they stand a much better chance saying something along the lines of: hey, we're getting left behind because our big AI-native projects have been paused because of governance. I now have a way of unpausing them with this new mindset, this new process, this new architecture. Or: hey, we've been blindly accepting all of this risk because we can't afford to be left in the dark. I now have a way for us to establish governance without slowing down R&D.

I think those are viable pathways to budget for the CISO. And so if they can turn that into action that they may not have been able to take in the traditional eat-healthy-and-exercise activities of the cyber operation, then yeah. I don't think it's one for one, though. I don't think Mythos is going to create budget for non-AI-related defense. AI defense is going to be hard, I think, still.

Yeah. Yeah. Yeah. Hundred percent.

Another part that I thought was a little... and Dustin and I discussed this a bit. I thought it was interesting, and I wanna phrase this carefully because we have a lot of friends in this industry. You see some of the information, you know, that's put out there by leaders in cybersecurity that we all know and love.

And the thing that was interesting for me in this instance was that it was a lot of gloom and doom, but it's still one of those things where we can see it for what it is here. Right? It's something that, you know, okay. Yes.

AI was eventually going to be used like this and we knew it. We've already, we've actually already seen it. Right? There's a lot of pen testing companies that leverage AI to help enhance their pen tests, for example.

A hundred percent.

And now, you know, Anthropic mentions this, and you have huge names in cybersecurity, almost at, like, the state and national level, people I've met, going: this is bad, everyone, we need to get on the wagon. And I know they know better. I know they know.

You see a lot of different things in a lot of different rooms, which Dustin and I glimpse from time to time. Why are they chicken-littling so hard over this one?

Who was it? It was like an advisor to Bill Clinton.

It's like, waste no crisis, or let no crisis go to waste. And the idea behind the quote was: every time people get worked up, I'm going to use that energy to solve a real problem, through some sort of mental jujitsu. And so, look, I can't read these people's minds. I don't know what's actually in their heads, but if I were in their position, I would absolutely be trying to use the energy of the moment to advance the cause.

Okay. So, like, one of those "take every opportunity you get to push things forward, no matter what the hype around it is" kind of things.

Yeah. I think this is incredibly important: the larger the organization that you're responsible for leading, right, or the larger the organization you have to operate within, especially in terms of policy and, like, government.

What you wanna do is never going to be the objective or the principle of the other party or of the larger group. Right? It's kind of on you to figure out: how do you take advantage of what's going on in the moment? How do you actually advance some part of your plan based on what's going on in the moment?

Something that's helpful to the person on the other side, right, the one who's actually writing the budget, but that also connects to the practical needs that, you know, actually have to be advanced. So if I were one of these people, I would absolutely try and take advantage of the moment right now for whatever is most needed in my cyber program.

And I would try and lean in on something that checks that box. Okay.

Right? To give that policymaker something to actually talk about. But look, maybe this is what happens once you've spent twenty, thirty years doing this. But do you really wanna fight the emotional energy of tens of thousands of people? Or is there some way to take advantage of some amount of that energy that already exists? Right?

That's a great perspective on it that I didn't think of. Trying to redirect it and use it in order to improve things. Because I'm reading some of these research papers, and I'm like, I've met you.

You're smarter than this. I do not understand. So that's actually encouraging. Where do you see... I like the idea of using AI to improve cybersecurity overall.

And I've found that it is often reactionary. No matter how often you try to be proactive and have good controls so no incident ever happens, it's the incidents that happen that end up getting things done. Where do you see this evolving to?

Do you see this kind of like... I've watched over the last year as, and Google did it better first in my opinion, Google thoughtfully integrated AI into its services to help with emails and documents and all that. And I'm seeing Microsoft slowly, in spite of themselves, getting better at it, which is good. I feel like they're gonna take the long way every time. What have you seen as far as AI integration?

We talked about some of the tools. I am specifically interested in your thoughts on blue teaming. Because, you know, it's always fun to talk about red teaming, like with Mythos.

Everyone likes to talk about Mythos and how it's gonna be used to get you. How do you think this is gonna be used for blue teaming and securing things beforehand?

I've actually got a couple friends running small companies now.

Wizbee is actually one of them, where they're doing this exact sort of thing. They're taking a play on a combination of first-party and third-party vulnerability management and response and trying to see how they can actually use intelligent AI tooling to do the equivalent of tier one, tier two analyst work. Like, is that real? What's the mitigation strategy?

Should I actually bump it up? I wanna say I have at least two friends that are in that area. A guy named Stoyan Stoyanovich and Chris Cochran. They're solving slightly different problems, but they're both kind of running at that problem.

We use it on our side as well in two different ways. The way we like to think about AI is, well, we think of it a couple of different ways. Number one, I can use it to build things that traditionally I would have bought. Number two, I can use it to operate things that traditionally I wouldn't have had the staff to operate, or I would have had to hire a large staff to actually operate.

So AI operations is more about, think of it as custom skills and hooks and plugins for Claude Code specifically, around codifying the kinds of workflows that you are very comfortable with. Right? When you're running an incident response, for instance. AI build we've also done; our security team has done this in terms of really up-leveling incident triage and filtering and automated response.

The trick there, and where they're exercising a lot of their judgment, is trying to break the problem down into probabilistic versus deterministic problem sets. For instance, if a problem can be solved deterministically, you're really using the AI system to generate code, because that code will operate deterministically every time and always give you the same answer for the same input. Right? Same inputs equals same outputs.

Right? Like, I'm sure you've all experienced this. When you sit down with Claude and you say, hey, Claude, I want you to go do this thing and look at all these files and whatnot, Claude gets lazy, because it's been trained on how we behave.

Right?

And we're lazy, which is why, you know... It's not just Claude either.

I've had that same problem with ChatGPT, where I'll say, that actually does exist in the file. And it goes, you're right, but I didn't read it.

Yeah. It's a byproduct of how these models are trained and reinforced. And so what the models are good at is, you can actually coach the models to recognize what the process should be.

But then rather than having the model run the process, you have the model essentially extrude the process as a script.

And then you just have the model call it. So that's what I mean by determinism versus probabilistic. I know this is a little bit of a divergence from your question, but go with me for a minute.

Yeah. Keep going.

Imagine a control flow graph, right? Like of a program in your mind. And it's not even a real program so far. It's really just this interactive workflow that you've kind of come up with with Claude or Gemini.

It doesn't really matter. Right? Pick your favorite LLM. And so at each of these nodes, work is getting done.

And if you're doing it all native in in an LLM, each one of those nodes, we're gonna just call them probabilistic.

But the problem that that node might be working on isn't necessarily a probabilistic problem. Right? All deterministic problems can be worked on probabilistically, but not all probabilistic problems can be worked on deterministically. Right? It kinda goes in one direction. Yeah.

For instance, apply this function to every file in this directory. It's a perfect example of a deterministic problem. I don't need to call out to a probabilistic parrot to try and solve that problem. I want an enumerative loop over everything that's in that system.

So that's a perfect example of a node that originally was probabilistic while I was working through with Claude how I want this workflow to go. But now I can convert this node from red to green, where red is probabilistic and green is deterministic: it extrudes it as a script, and rather than Claude trying to figure it out on the fly, it just always calls the script.
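The extrusion step he's describing can be sketched in a few lines. The `transform` function here is a hypothetical stand-in for whatever per-file work the node was doing: the model writes this script once, and from then on the workflow calls it instead of reasoning through the files probabilistically.

```python
from pathlib import Path

def transform(text: str) -> str:
    # Hypothetical per-file step; stands in for whatever the
    # workflow node was previously doing inside the LLM.
    return text.upper()

def apply_to_directory(directory: str, suffix: str = ".txt") -> int:
    """Deterministic, extruded version of 'apply this function to every
    file in this directory': same inputs always yield same outputs."""
    count = 0
    for path in sorted(Path(directory).glob(f"*{suffix}")):
        path.write_text(transform(path.read_text()))
        count += 1
    return count
```

Once a node has gone "green," the agent invokes this script as a tool call; it never re-derives the loop on the fly.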

So now imagine you work through that whole control flow graph using that kind of mindset: what task is too big, and do I want to split it from one node into maybe a couple of nodes? Right? So again, kind of classic control flow analysis type of thinking. And then, because skills can call skills, you don't need to wrap it all up in one skill.

Right? We just know good modular thinking is make things small and atomic. And then once it's as small as it can be, ask what's deterministic versus what's probabilistic. Anyway, where I was going with all of this is, when you start going AI native and you start saying, all right, what can I build that I would have traditionally bought?

What can I operate that I would have traditionally not had the staff for? This is the mental tool that you're actually working through, in both the build as well as the operation. And so I think that's really the frontier of what changing your business architecture looks like in terms of truly going AI native. And we are pushing really, really hard on that front because we think that will create new surface area of risk.

We think that will create new behaviors or new opportunities to kind of catch things early, right before they actually go into production and whatnot. Yeah.

At least that's how we think about trying to get ahead of the curve.
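One way to picture the red/green split he just walked through, as a minimal sketch: the node names and the stubbed `fake_llm` call are invented for illustration, not any real API. Green nodes run extruded scripts; red nodes are the only places a model gets called.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Node:
    name: str
    deterministic: bool           # True = "green": an extruded script
    run: Callable[[str], str]     # script for green nodes, model call for red

def fake_llm(prompt: str) -> str:
    # Stand-in for a real model call; a red node would hit an actual
    # LLM here, and its output could vary from run to run.
    return f"[model judgment on: {prompt}]"

def run_workflow(nodes: List[Node], payload: str) -> str:
    # Walk the control flow graph; each node transforms the payload.
    for node in nodes:
        payload = node.run(payload)
    return payload

# Green node: normalization is deterministic, so it's a plain script.
normalize = Node("normalize", True, lambda s: s.strip().lower())
# Red node: the summarization judgment stays probabilistic.
summarize = Node("summarize", False, fake_llm)

result = run_workflow([normalize, summarize], "  Incident REPORT  ")
```

The point of the exercise is that only the red nodes pay the cost, and carry the nondeterminism, of a model call.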

I like that a lot, because I feel once you can determine which is deterministic versus probabilistic, you can very much decrease... I'm speculating here, but that'll decrease your error rate too, as far as output.

One hundred percent.

Because it's just like, okay, run this, run this, run this. It's less compute and less room for error by the time it gets to the probabilistic element of that flow. And I think the temptation for a lot of people here with AI is viewing it as kind of magic, where if I just write the perfect prompt, I won't have to do anything else. Right.

And I think that's a really good point. It gets back to... there was a guest on, I think it was the Ezra Klein show, a few weeks back. He was someone leading AI development at one of the research universities, I think. And he mentioned he's changed the way he interacts now. For example, to build a tool, instead of trying to describe the tool and then modify things here and there, the first thing he'll do is have the AI interview him to gather software requirements and build a requirements sheet, right, that he can then provide and go from there.

So that's that's pretty awesome. I like that a lot. Thank you.

I do wonder, though. It does seem like for blue team specific scenarios, you might have something shift from, you know, that dynamic prompt or that dynamic, I don't wanna say condition set, but something along those lines, into something that's more of a script. Right? So let's say, like, a Snort rule, right? So, you know, your AI agent is on a system.

It notices a pattern of something happening. Maybe it's during a red team exercise, and, you know, your Splunk didn't catch it, or whatever... no, I think I said Snort, so we'll just follow along with that. And Elastic Stack or whatever it is that's alerting on the back end of that didn't sense it, right?

And so your AI says, Okay, well, I'm gonna write a rule for this then. And so your AI goes, writes the rule. Now it's codified. Now it's in the system.

However, the next time the penetration test happens, they use something similar, but they do it in a different way.

And so do we run the risk, I guess, that it's just gonna write another rule then?

Or is it worth having something there that's going to be changing along with the attack, if I may?

So what you're describing is actually just the classic problem of specificity versus generalization. Right? Like does the model generalize? Does the rule generalize?

And so I don't think AI... this is one of those things where the problem still persists regardless of how we decide to solve it. We just have to be aware of it. Right? So I would argue, if I'm dropping a rule in, whether it's Snort or BPF, regardless of the system, if I'm dropping in some sort of conditional scenario and I'm scoping it so precisely that it's only ever gonna pick out one instance of the universe of attack types...

Maybe it's useful for, like, regression testing, but it's certainly not generalizable. Right? And that's where I would say maybe you're using AI wrong, if that's how you're using it, to generate dynamic rules. I would want to use it to generate more of a pattern of behavior.

Right? So, like, is there a certain pattern of reconnaissance that occurs before, you know, the initial detonation or initial access, and can we elevate that? Can we detect that pattern quickly and codify that pattern? I don't know.
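That pattern-over-signature idea might look something like this as a toy sketch. The event kinds, field names, and time window here are all invented, not any real Snort or Elastic schema: rather than matching one exact payload, you match an ordered recon-then-access sequence per host.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Event:
    ts: float    # timestamp, seconds
    kind: str    # e.g. "port_scan", "auth_probe", "initial_access"
    host: str

# The behavioral pattern: recon stages in order, ending in access.
PATTERN = ["port_scan", "auth_probe", "initial_access"]

def matches_pattern(events: List[Event], host: str,
                    window: float = 3600.0) -> bool:
    """True if `host` shows the full sequence within `window` seconds.
    Unrelated events in between are tolerated; order must hold."""
    stage, start = 0, 0.0
    for ev in sorted(events, key=lambda e: e.ts):
        if ev.host != host or ev.kind != PATTERN[stage]:
            continue
        if stage == 0:
            start = ev.ts
        stage += 1
        if stage == len(PATTERN):
            return ev.ts - start <= window
    return False
```

A rule like this survives the red team varying the payload, because it keys on the sequence of behavior rather than one precise signature.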

I would think about it more like that. Yeah, I would think about it like that. Actually, you just kinda gave me an idea.

So are you guys familiar with, the whole AI factory, software factory trends that are going on right now? Gastown, that sort of thing? No.

Okay.

So number one, it's definitely worth a read. It's called Gastown. It was written, I think, in January. But with the speed of this world, there's been a variation of it that's come up pretty much every month since then.

The idea behind Gastown is kinda simple, though. It's: look, when you first start using AI, you're using it like a glorified search engine. That's your step one.

Then your step two is you start using it as a tutor on a subject. Right? Then your step three is you're like, oh, this can script and this can code, but you're copying and pasting, and it's really awkward. Right?

And then the next phase is you kinda graduate to: all right, I'm gonna go to a terminal experience. I'm gonna actually give it access to real local resources, within some sort of boundary or not. Right?

Some of us just YOLO it. And you start to realize that asking it to write code is not great, because it gets lazy. It removes test cases to make things pass. It does all sorts of wily things because, again, it's been trained on Stack Overflow.

And you and I both know, ******** is on there. Right? So sorry.

Ghost of Stack Overflow will forever live on. Like, everyone's talking about the decline of it. No. It's still in there. It's just infected everything now.

Yeah. Stack Overflow, the old crotchety longbeard, is never gonna die now.

Yeah. Never.

But yeah. So you quickly realize you need to do what that professor was telling you about earlier. You need to actually do what's called spec-driven development, where you essentially write an interface spec and acceptance criteria. And it gets a little bit more involved than that, but that's kinda your phase one.

And then you have it write code, write code, write code till it's passing. And then you look at it and you realize, all right, it passes, but do I wanna maintain this? And then you go to your next step, which is: now I'm gonna have rules on software composition that it has to pass as well. Anyway, you go through all of these steps, and eventually you realize you're orchestrating all of these things.
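The spec-first loop just described can be sketched like this (the `slugify` interface and its cases are made up for illustration): the acceptance criteria exist before any implementation, and the agent rewrites phase two until the suite passes.

```python
import re
from typing import Callable

# Phase one: the spec, written before any implementation exists.
#   Interface: slugify(title: str) -> str
#   Acceptance criteria:
#     - output is lowercase
#     - runs of spaces/punctuation become single hyphens
#     - leading/trailing separators are dropped
ACCEPTANCE_CASES = {
    "Hello World": "hello-world",
    "AI, Native!": "ai-native",
    "  spaced  out  ": "spaced-out",
}

def acceptance_suite(slugify: Callable[[str], str]) -> bool:
    # The fixed gate the generated code must clear.
    return all(slugify(src) == want for src, want in ACCEPTANCE_CASES.items())

# Phase two: the implementation the agent iterates on until it passes.
def slugify(title: str) -> str:
    return "-".join(re.findall(r"[a-z0-9]+", title.lower()))
```

The spec never changes during iteration; only phase two does, which is what keeps a lazy model from quietly deleting test cases to make things pass.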

Wait a minute. Could I somehow lift myself out and have an AI orchestrator? Right. That's the concept behind these factories.

So what they'll basically do is set up this factory where you've got a QA agent and a spec-writing agent and an orchestrator agent and a software architecture agent, and then just software developer runners and CI runners. And it's basically this **** of token spend. Super chaotic. But in the end, you get something that kinda works, and it's kind of amazing.

And it's not for everyone. You shouldn't just jump into it. You should kind of go through that progression I just described. But the reason you made me think about the software factory is I wonder if there's a blue team factory.

Right? I wonder if there's a blue team factory where I'm not exactly writing software per se, the way these software factory guys are. What I'm actually building is both a static and a dynamic layer of detection. And I'm building a capability to do kind of proactive forensics and investigation and reconnaissance.

Yeah.

And I'm building it through a layer of orchestration.

'Cause ultimately, you know, you said something earlier that struck me as well, which is what I keyed in on: this is not magic. These systems still obey the laws of math and computation. And the more we actually understand how they work, the faster you can actually harness them.

Anyway, that's what these software factory teams have arisen out of: they've really paid attention to how these agents work, what makes them work well, what makes them just spend your money and still not arrive at the conclusion, and they've come up with these better ways of operating. I haven't seen anyone turn that on a blue team or red team concept yet. I'm sure they are. Right?

Coming soon from Beyond Identity, though. There you go.

There you go.

No.

We're using factories. I don't know. I shouldn't say that. Maybe my security team is doing this sort of thing.

Because we're certainly using factories in how we actually do development. We're using it in how we do marketing. We're using it in how we do sales. That's probably a whole other conversation, but yeah, we've completely reshifted our organizational structure as well as our business processes to be AI native.

We don't need a four hundred thousand dollar marketing stack. We literally need an S3 bucket and a static page as a data sheet. And then I've got a series of Claude plugins that basically track user journeys through the platform, track sign-ups, all that sort of thing. Mass mailers through, like, what is it?

SNS, in AWS. Like, no need for HubSpot.

No need for... I'm gonna be honest.

You don't miss HubSpot.

The CMS system, all that stuff.

Yeah, for CMS. I've noticed from multiple people that CMS is radically changing due to this. And for all of our listeners, here's terrible advice.

For each node that he just discussed in his sales process, you need to buy one Apple Mac mini, put Clawbot on it, and just have a big stack at your desk. I'm kidding. I just feel like that's the current trend now, especially with Clawbot. They're like, oh, I need to buy a whole computer just for it.

I'm like, do you do you?

You know? And the worst part is when they do it, and then they're just all callouts to other AIs. And I can't get behind it, personally.

Look. If that's your hobby, then knock yourself out.

Well, yeah. If it's a hobby, I'm not gonna knock anyone's hobbies, but my gosh. I am very concerned, because day to day I see a lot of people in the professional field who get, what is it called, Dunning-Kruger?

Right? Where they they're like, it was staring us in the face the whole time. I just need eighteen Mac minis. Guys, I'm a pro now.

And it's like, you're at this part of the Dunning-Kruger curve. Get ready.

Yeah.

So it's been really interesting.

Like, these large models are trained — so one compute node is almost a hundred thousand dollars. Right? Yeah. And even if you had the money, it's not easy to buy one.

Correct. And these large models are trained on fleets of hundreds and thousands of them. You're not gonna compete. Don't even kid yourself.

I think it was Business Insider that said companies are now buying so much from Nvidia, so far out in advance because they don't wanna get put to the back of the line, that even if they're not using what they're buying, they keep buying more anyway. Yeah.

Well, RAM prices have already skyrocketed. Right? So now they're starting to come out with consumer-grade notebooks and regular computers where we're going back to the nineties here. We're gonna have, like, two gigs of RAM or something like that.

The new MacBook runs on a cell phone processor.

So this laptop sitting on that chair in the background, that was my version of that escapade. I do actually fine-tune small models.

Awesome. So for instance, voice synthesis models.

Also — yeah, I'm sure you can guess what for. We have a small red team plugin. Large language models are not everything.

There's all sorts of deep neural networks, and the whole world of scientific computing is actually using these things — either graph-constrained models or geometric-constrained models. That's how they're actually solving things like protein folding. That's how they're doing the AlphaGo stuff. That's how they're solving advanced partial differential equations much, much more optimally than with supercomputers.

But these are billion parameter, five billion parameter models max. Right? They're not trillion parameter models. So these are within the realm of mortals.

They can actually be run on laptops. And the other thing too: if you just max out a MacBook Pro, it'll literally do everything you could possibly want within the range of your budget and actual skill set. Exactly. Yeah.

Yeah.

Yeah. And I'd even say that some of the minis and — what is it, the M4 processor stuff — can still do a lot for people. They don't even have to go full spec.

And like I said, I'm doing fine-tuning on low parameter counts, right at a billion parameters. Most people don't need this — you don't need this kind of hardware to run them. You don't even need this to run small models.
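To put "within the realm of mortals" in numbers, here's a rough sketch of the weight-memory arithmetic for a one-billion-parameter model. The bytes-per-parameter figures are common rules of thumb, not measurements of any particular model, and the full fine-tune estimate assumes an Adam-style optimizer keeping everything in fp32.

```python
# Back-of-the-envelope memory math for why ~1B-parameter models fit on a laptop.
# All bytes-per-parameter values are rule-of-thumb approximations.

def model_memory_gb(params: int, bytes_per_param: float) -> float:
    """Approximate weight memory in GB for a model of the given size."""
    return params * bytes_per_param / 1024**3

ONE_B = 1_000_000_000

# Inference-only weight footprints at common precisions.
fp16 = model_memory_gb(ONE_B, 2)     # half precision
int4 = model_memory_gb(ONE_B, 0.5)   # 4-bit quantized

# Full fine-tuning is heavier: weights + gradients + two Adam optimizer
# states, all in fp32, is roughly 16 bytes per parameter.
full_finetune = model_memory_gb(ONE_B, 16)

print(f"fp16 inference:  {fp16:.1f} GB")   # ~1.9 GB
print(f"int4 inference:  {int4:.1f} GB")   # ~0.5 GB
print(f"full fine-tune:  {full_finetune:.1f} GB")  # ~14.9 GB
```

Even the fine-tuning estimate lands inside a maxed-out MacBook's unified memory, which is why billion-parameter models are practical on consumer hardware while trillion-parameter models are not.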

So as we're considering all these different ways in which AI blends into security — I know that you guys have an interest in security and so forth — is there anything that Beyond Identity is working with or working on that you'd wanna share with us?

Yep. So we have just introduced a new product called Ceros. It is fresh out of the oven and smells of warm baked bread.

Well, then it's already my favorite AI.

Great. So basically, do you run Claude Code? And are you worried about governance? If the answer is yes to those two things, you should try out Ceros.

Wow.

You can sign up for free at beyondidentity.ai. You can also go to ceros.sh. We have a free tier for the offering, for your personal use or just your exploration.

And what Ceros is, is a trust enforcement layer for your agent. And we say Claude Code right now because that's where we're starting. That's where the bulk of most of our customers actually are. It will expand over time. But right now, it's really, really good for Claude Code. So the way you should think about it: think of Ceros as almost like a virtual machine.

It will execute Claude Code for you. It will establish an identity for that executing agent, linked to the identity of Claude Code, linked to your personal identity, linked to your device's identity. It will figure out the device posture of the device you're actually running it on. It will run all of that through a policy engine.

So from a corporate perspective, what this allows a company to do is to understand and track all AI usage, right? Have locked-down identities for all AI usage. That agent is operating inside of a context window — or I'm sorry, inside of an authorization context — where we can exactly control and observe all the tools it uses. So MCPs are tools, but so are build tools.

So are Bash calls. So is LSP. Prompt injection can certainly come from the user, but it can also come from the results of tool calls. It can also come from the result of a hook execution.

It can come from the result of a binary that's embedded in a skill.

It can come from — so does it sanitize all those before it even gets to Claude, then?

So we basically surface all of that so you can see it all. And it's like a typical security tool, right? The first step is you just wanna turn it on and see what's going on in your environment. The next thing you do is say, hey, some things you may want to restrict, some things you may want to prohibit, some things you may want to alarm on. But we go a little bit further than that. We also let you inject things. So for instance, if you have a bunch of engineers, you want them to be productive.

You want them to have the standard CLAUDE.md setup. You want them to have a set of permissions that you have blessed. You want them to have a set of skills that you know will accelerate them on the job.

You want them to have a set of MCP configurations and local tool setups that you know will accelerate them relative to your product and your company and your mission. Your developers don't have to set any of that up. We can just naturally inject it. And I know there are other ways of doing some of the things that we just described.

What we've seen in the market is none of them are easy. And so we focused on making it super, super easy.
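The restrict / prohibit / alarm model described above can be pictured as a policy gate that every tool call passes through before it executes. This is a minimal illustrative sketch, not Ceros's actual implementation — the `ToolCall` shape, the rule names, and the decision strings are all invented for the example, and a real trust-enforcement layer would also bind user identity, device identity, and device posture into each decision.

```python
from dataclasses import dataclass, field

@dataclass
class ToolCall:
    tool: str       # e.g. "bash", "mcp:github", "lsp"
    argument: str   # command line, MCP method, etc.

@dataclass
class PolicyEngine:
    prohibited: set = field(default_factory=set)  # hard-block these tools
    alarmed: set = field(default_factory=set)     # allow, but log loudly
    log: list = field(default_factory=list)       # audit trail of every call

    def evaluate(self, call: ToolCall) -> str:
        """Route a tool call through policy and record the decision."""
        if call.tool in self.prohibited:
            decision = "deny"
        elif call.tool in self.alarmed:
            decision = "allow+alarm"
        else:
            decision = "allow"
        self.log.append((call.tool, call.argument, decision))
        return decision

engine = PolicyEngine(
    prohibited={"mcp:untrusted-server"},
    alarmed={"bash"},
)
print(engine.evaluate(ToolCall("lsp", "textDocument/definition")))  # allow
print(engine.evaluate(ToolCall("bash", "rm -rf build/")))           # allow+alarm
print(engine.evaluate(ToolCall("mcp:untrusted-server", "fetch")))   # deny
```

The key design point is that the gate sits outside the agent: even if a prompt injection convinces the model to attempt a prohibited tool call, the call is denied and audited before anything executes.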

Yeah, none of them are easy. Working with our clients day to day, I can tell you there is a lot. It's a nightmare for them.

Oh, yeah. It's like a two-step install. You immediately get visibility into what's going on on your machine. Because of how we do what we do, we give you the shadow AI discovery thing right out of the gate.

And that works for all agents, all LLM providers, all MCP services. And we even do model discovery and model detection. The deep security analysis in terms of skills-related things, plugin-related things, hooks-related things — that's very specific to Claude Code today. Awesome.

That's fantastic. Will it protect me from my vacuum?

The reason I ask is, did you see that vacuum thing? There was a hack, I wanna say two weeks ago, involving DJI. First of all, I didn't know DJI made vacuums. News to me. I thought they just made other drones.

How else will the PLA know what's going on in your environment?

You joke, but a Frenchman knows what's going on in our environment, because he was looking at the API and it was not fully secured. He was able to control over seven hundred thousand vacuums. Once he realized the API calls weren't secure, he was able to use a PlayStation controller to drive his around.

It also provided access to all the scans of your house that were there. So if someone wanted to, they could take everything and figure out what all our houses looked like. And once it was discovered that some of them were compromised, people knew what could be done: multiple units just activating in the middle of the night, rolling into the middle of the floor, and playing creepy nineteen-fifties, BioShock-like music. And if there's anything I don't wanna wake up to — well, there's a lot of things I don't wanna wake up to, but squarely on that list is my vacuum doing its best BioShock impression at me in the middle of the night.

So I have a hard time believing you have a DJI product installed in your house.

I do not. I do not. But I mean I mean, Jasson, we're all compromised anyway. I'm sure something's in here.

That's true. That's true. I've got a Raspberry Pi 1 still that I'm using for my Pi-hole. I guarantee you that someone's got that figured out at this point.

So.

You're probably right.

And with the OPM hack, we're all — actually, it doesn't matter. They don't care about us anymore.

They they've already got everything.

So they've got everything they need.

It is interesting, though. I'm excited to hear that you guys are building this, specifically because I've met more than one client that's almost afraid of AI. Right?

Because they're afraid of what data will accidentally go into it. They're afraid they won't use it right, that they won't be able to control it right, that they won't be able to sanitize it. And the concern I have is that sometimes people are then, you know, more interested in never using it, never adopting it.

And that's like saying, here's the internet, and not making a website back in the nineties. Right? We were all going there anyway. So having those protective capabilities matters.

So I'm excited. Also, congrats. That's a great idea. So it's exciting. Yeah.

I know, it's super cool. And I could geek out in the weeds, but at a high level, that's exactly why we built Ceros. How do we help people run towards the future safely? Yeah. Yeah. That's great.

And how do people claw towards the future safely too, when their fingernails are now hacked? Something else I wanna get you guys' opinion on. This was last week, I believe. It's a company called Eye Polish. Huge hit at CES. They have this ever-changing nail polish — they advertise it as a nail polish that you can use your phone to change the color on. Now, I'm like, I don't believe that.

Taking a look into it, what it actually is, is a bunch of little e-ink screens that people can put on their nails. Which — I'm now realizing people are gonna end up having to charge their nails at some point. Like, I know e-ink lasts a while, but I don't wanna charge my — exactly. Yeah. Yeah.

What was identified, though, is that the app was leaking personal information all over the place, and people were able to track women off of fingernails that were just leaking data. From your perspective, one, is this the dimension we're living in now, that this is a thing? And two, can you explain to me a little bit your thoughts on the security concerns there? Nothing but the best topics here, Jasson.

Nothing but the best. This is legitimate cybersecurity.

So, number one, it feels like something out of a William Gibson novel, doesn't it? Right? Because he's so focused on aesthetics and design and clothing and fashion and whatnot.

It also feels predictive, like it could have been in a novel, because he also talked about the exploitation of all of this sort of data — profiling and tracking of people and whatnot. I don't know. I guess maybe this is the child in me, but the immediate thought that comes into my mind is I would love to use it to just send messages to people.

All of a sudden, your nails are saying, hi there or something. Yeah.

Like that that I guess that's the ten year old in me. That's that's just me. Yeah.

The so many thoughts. So many thoughts.

I mean, wearables have always been a concern. Right? Any kind of wearable, doesn't matter. Sure, this is a little bit — yeah. Right.

I don't have mine on right now because I think it died. But that's always kind of been the concern: if there's gonna be connectivity and there's gonna be RF, then if your body is going to be radiating signals, you're always going to be able to be profiled. Phones somewhat solved it.

Right? We have randomization of MAC addresses, but even then there are still utilities out there that can basically profile your phone, even if the MAC address is randomized, just based off of habits. Right? So I can start building these habitual profiles off of people — these heuristics and temporal behavioral patterns.

So for example, when you're doing a pen test and maybe you're going after a CEO — you can't necessarily figure out what the MAC address of the CEO's iPhone is, but you can figure out when that person leaves every day, when they finally get home, and then track them down to, you know, the WiFi in their house, right? Not that I'm giving anybody ideas here, but, you know.
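The habit-based profiling being described can be sketched in a few lines: even if a device presents a fresh random MAC address every day, a device that shows up at the same place at roughly the same times leaves a temporal fingerprint. This is a toy illustration — all MAC values, sighting times, and the bucketing heuristic are fabricated for the example; real tools use much richer signals (probe request contents, sequence numbers, signal strength).

```python
from collections import defaultdict

# (random_mac, hour_of_day_seen) observations across several days.
# Fabricated data: one commuter device cycling MACs, plus one unrelated device.
observations = [
    ("aa:01", 8.5), ("aa:01", 17.9),   # day 1
    ("bb:02", 8.6), ("bb:02", 18.1),   # day 2, new random MAC
    ("cc:03", 8.4), ("cc:03", 18.0),   # day 3, new random MAC
    ("dd:04", 13.0),                   # unrelated device
]

def habit_signature(hours, tolerance=0.5):
    """Bucket sighting times into coarse slots; a crude habit fingerprint."""
    return tuple(sorted({round(h / tolerance) * tolerance for h in hours}))

# Collect sighting times per MAC.
by_mac = defaultdict(list)
for mac, hour in observations:
    by_mac[mac].append(hour)

# Group different MACs whose habit signatures match: these are
# likely the same physical device despite MAC randomization.
profiles = defaultdict(list)
for mac, hours in by_mac.items():
    profiles[habit_signature(hours)].append(mac)

for sig, macs in profiles.items():
    print(sig, "->", macs)
# The three daily-commute MACs collapse into one profile: (8.5, 18.0)
```

The point is that randomizing the identifier doesn't randomize the behavior; the defense has to attack the pattern itself, which is where the "raise the noise floor" idea later in the conversation comes in.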

Yeah, please don't do that. We don't recommend it. Please don't.

I said for penetration testing, this is for legitimate cyber security.

Yeah, there we go.

What's the name of the old tool where you had to be within, like, ten or twenty yards of a target, but you would point it at their monitor and it would read the ambient RF and reconstruct the image? It had a cool name.

Can't remember, but I know what you're talking about.

I know exactly what you're talking about.

And it's fairly inexpensive. Right? And I think it still works. I think the technology still actually works. Yeah.

Yeah. Everything you're describing makes me think of that. It makes me think of — I mean, heck, even in the Ukraine war right now. Right?

Like, I'm pretty sure the Ukrainians and the Russians are getting really, really good at tracking and location just through ambient electromagnetic emission. Right? Right. When people turn their phones on, when people wear their watches.

And, you know, I don't know if it's true, but if you read the news, apparently we helped locate the downed pilot in Iran based on, like, the electromagnetic emission of his heart.

Right. That was crazy. We can find you anywhere in the world by your heartbeat.

I mean, clearly. But we still can't find that downed Malaysian airliner, by the way.

I mean, this brings up an even better point. We don't even need to have wearables anymore. People can just find us anywhere. But, you know, we're obviously giving it up on a platter.

But I still think — yeah, okay, the nails are cool, whatever. But everybody's got this. You know, you gotta have this to use the nails too.

And this is gonna be way louder than any other kind of wearable you have. And people don't even care about the security of that right now. So I don't know. That's that's kinda my take on it.

Like, you know, when I was at Black Hat — DEF CON — in twenty seventeen at the Biohacking Village, everybody was getting RFID chips. Ten bucks to get an RFID chip put in you. I was like, next year, I'm gonna do it. I was totally ready to do it. Because I was like, who cares?

They can get me any way they — you know, again, OPM already. Right? It's kinda done for me.

Our friend has that in his hand. Yeah.

And and he's basically enrolled it in the Beyond Identity system.

Nice.

He can actually do "these aren't the droids you're looking for" to authenticate.

There you go. That's great. That's exactly what I wanted to do with it. But then, of course, I didn't get to go again until COVID hit, and, you know, COVID made fools of us all.

But I always found that kind of stuff really interesting too. And that's something that's always gonna follow you as well. But again, still nothing compares to these magic internet bricks that we all carry around, right? There's so much EMF and everything else just pouring out of these things at any given time.

Yeah. No. So you make a really interesting point. I guess I could almost think of it as two buckets. Right? So one is, I'm just trying to figure out, you know, your pattern-of-life behavior, some sort of physical proximity. I can do that with both, with how modern society is going.

Like, number one, you've got some piece of electronics on you, right, that I'm absolutely gonna be able to track if I'm within a certain range. But number two, every technology that we see in these whiz-bang stories today, right, that gets rolled out by the agencies, tends to be pretty common ten years from now. Right? So is it really that crazy to predict? Sorry about the barking.

Is it really that crazy to predict that the trend of "it's out there and they're gonna know where you are" is actually gonna continue?

The more interesting question is probably — I don't know, how do you raise the noise floor?

How do you — and I like that idea of data pollution, almost, that you can do that in order to make it more difficult.

And I think part of it's gonna be obfuscation moving forward. It's always gonna be — it's gonna have to be.

But, anyway, we are coming up on time. You heard it here. Clip your own fingernails, throw out your phone, move to the middle of nowhere. Legitimate cybersecurity.

Jasson, as always, thank you so much for coming. We really appreciate it. It's always great to have you. It's been an awesome discussion.

If anyone is interested in what Jasson talked about, they can go to Beyond Identity's website and take a look at that as well. But until next week, Dustin, take us out.

Keep on cyberin'.

Keep on cyberin' everyone.

Jasson Casey