AI Agents: All the Fun of an Insider Threat, None of the HR Paperwork

Written by
Ceros
Published on
March 11, 2026

Last week, Block laid off half its workforce. Not because things were bad. Things were great. Gross profit up 24%. The thesis was simple: intelligence tools have changed what it means to build and run a company, and a smaller team using AI can do more than a larger one without it. The market agreed. Stock up 17%. Jack Dorsey predicted most companies would reach the same conclusion within a year.

Great.

Your board read about it at 6am. They forwarded it to your CEO by 6:03. Your CEO forwarded it to you with two sentences and a question mark by 7:15, before you'd had coffee, maybe while you were still asleep, dreaming about literally anything else. Congratulations on your new AI mandate. You didn't ask for it. Nobody asked if you were ready. The agents are already running.

Now here's the part nobody is saying out loud at the all-hands, between the slide about company values and the slide about Q2 OKRs: the Okta CEO, speaking at his own conference full of identity and security professionals who absolutely knew what he was saying, described deploying AI agents as the equivalent of taking insider threats and giving them all the access they need. He said this as a problem to be solved. The crowd nodded. Someone wrote it down. Everyone went to the networking lunch and got a lukewarm slider and a sparkling water and did not cancel their AI rollout.

Great. Love that for everyone.

Identity infrastructure was built for a moment that no longer exists

Here is how enterprise identity security worked for about forty years. A human sits at a keyboard. The human proves they are who they say they are. The system hands them a badge. The human goes and does human things, slowly, within a set of permissions that someone thought about in advance. They take a long lunch. They scroll TikTok for twenty minutes pretending to read a Confluence doc. They forget what they were doing and go get a coffee. The system was designed around the beautiful, inefficient, easily monitored pace of a person who has a mortgage and a fantasy football league and genuinely cannot process more than one thing at a time.

It was a good system. Elegant, even. Someone logs in, gets their access, stares at Slack for four hours, does twenty minutes of actual work, logs out. When something went wrong you could trace it back to a credential, a human, a decision. The whole thing made sense.

Then a bunch of board members sat through a twenty-minute AI demo, nodded along like they'd just witnessed the moon landing, exactly like they did with computers in 1993, with cloud in 2012, with blockchain that one weird year, and said "yes, we need this, get everyone one of these." And now you have AI agents running autonomously inside your systems at 3am, reading files your developers forgot existed, executing shell commands nobody typed, calling APIs that weren't in any architecture diagram, connecting to MCP servers somebody added on a Tuesday and never mentioned to anyone, doing all of this continuously, without ever once stopping.

Your CISO heard about it in the all-hands. Along with everyone else.

What your agents are actually doing, why the old model can't see it, and why that's your problem now

Here is a fun game. Read this description and guess what it is: 

It lives on the machine. It uses existing tools and permissions so it doesn't need to bring its own. It communicates through channels that look like normal traffic. It executes complex sequences of actions that no human explicitly programmed. It operates autonomously, makes its own decisions, and doesn't stop until the task is done.

If you said malware, congratulations, you are correct and also wrong. It's your AI agent. Same behavioral profile. Same control challenge. The only difference is that the malware is trying to hurt you and the agent is trying to help you, a distinction that matters a great deal morally and not at all from a security architecture standpoint.

Great.

And here's the kicker: the identity vendors are adapting. Of course they are. They took the same product they built for humans in 2012, changed the word "user" to "agent" in the marketing deck, and called it an AI security solution. They looked at autonomous software that makes its own decisions in real time and said "yes, we'll issue it a token." A token. Like it's a contractor who needs a badge to get through the lobby. They're managing lifecycle. They're ingesting activity into your SIEM so a tired analyst can open a ticket on Thursday about something that happened Monday. The credential gets issued. The access gets granted. Everyone files back to their standing desks feeling like the problem is handled. The press release goes out. The analyst firm writes a report. Meanwhile the agent is already inside, doing things, finishing tasks, making decisions, and the most sophisticated response the industry has come up with is the same log file and the same policy document they've been selling you for a decade, now with the word "agentic" in the title.

Visibility after the fact is not a control. It's a dashcam. Great footage. Didn't prevent anything.

A lazy human with root access is concerning. They probably won't do anything catastrophic, mostly because they'll get distracted and go get a coffee. An agent doesn't get distracted. An agent doesn't get coffee. An agent finishes the task, starts the next one, and keeps going until something stops it. Nothing is stopping it.

Great.

Everyone else patched the old model. We threw it out.

The rest of the industry looked at the agent security problem and reached for what they already had. Patch the token. Extend the lifecycle management. Add "agentic" to the product page. Ship it.

Beyond Identity spent years doing one thing: eliminating phishable credentials for humans. No passwords. No OTPs. Identity anchored in hardware, bound to a verified device, impossible to steal and replay. When agents started showing up inside our own engineering org, we didn't ask how to retrofit that model onto autonomous software. We asked what it would look like if you built the whole thing over, starting from the machine, starting from the moment before the agent acts.

That's Ceros. It sits on the device, right next to the agent. Every tool call, every file read, every shell command gets bound to a verified human on a verified device and signed with a hardware-rooted cryptographic key before it executes. Not logged after. Bound before. Your policies enforced at runtime, not in a Confluence doc nobody reads. And when something goes wrong, and something will go wrong, you have a tamper-proof audit trail that tells you exactly who did what, on which device, under which policy, at which moment.
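For the mechanically minded, here is roughly what "bound before" means. This is a minimal sketch, not the Ceros API; every name in it is hypothetical, and a software Ed25519 keypair stands in for the hardware-rooted key that, in the real model, never leaves the device:

// Hypothetical sketch: sign each agent action before it runs,
// refuse anything unsigned, and record what actually executed.
import { generateKeyPairSync, sign, verify } from "node:crypto";

type AgentAction = {
  tool: string;      // "shell", "file_read", "http", ...
  args: string[];
  user: string;      // the verified human this session is bound to
  device: string;    // the verified device holding the key
  issuedAt: string;
};

// Stand-in for a key that would live in a TPM or Secure Enclave.
const { privateKey, publicKey } = generateKeyPairSync("ed25519");
const auditLog: Array<AgentAction & { sig: string }> = [];

// Sign the action BEFORE execution, binding it to user and device.
function authorize(action: AgentAction) {
  const payload = Buffer.from(JSON.stringify(action));
  return { action, sig: sign(null, payload, privateKey) };
}

// Gate execution on a valid signature; append to the audit trail.
function execute({ action, sig }: ReturnType<typeof authorize>) {
  const payload = Buffer.from(JSON.stringify(action));
  if (!verify(null, payload, publicKey, sig)) {
    throw new Error(`rejected unsigned/tampered action: ${action.tool}`);
  }
  auditLog.push({ ...action, sig: sig.toString("base64") });
  // ...hand off to the actual tool runtime here
}

execute(authorize({
  tool: "shell",
  args: ["ls", "-la"],
  user: "dev@example.com",
  device: "macbook-a1b2",
  issuedAt: new Date().toISOString(),
}));

The shape is the point: the signature check sits on the execution path itself, not in a log pipeline downstream of it.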

Your agents keep running. Your developers don't change their workflow. And for the first time you can answer the question your board is eventually going to ask. Not "do we have AI agents?" They know you have agents. The question is "do we know what they're doing?"

With Ceros, you do.

Nobody got fired for buying AI. But the year is young, the agents are running, and somewhere in your org right now an autonomous process with root access is finishing the task. It's not getting a coffee first.

Great.

Get started with Ceros

Two commands. Thirty seconds. You'll never look at your agents the same way again.

npm install -g @beyondidentity/ceros-cli
ceros claude

Beyond Identity built Ceros to secure AI agents at the layer that matters, the machine. We're in public beta. Come find out what your agents have been doing.
