Why Traditional Identity Systems Are Insufficient in the Age of Autonomous AI Agents

Written by
Sarah Cecchetti
Published on
November 25, 2025

Key Takeaways

  • Architectural Mismatch: Traditional identity systems, designed for human-in-the-loop sessions, are incompatible with the agentic architecture of autonomous agents that operate independently for extended periods.
  • New Attack Surface: Agentic AI security is a new frontier. Protocols like the Model Context Protocol (MCP) create vulnerabilities, including mandate spoofing and prompt injection, that legacy security models cannot address.
  • Delegation & Supply Chain Risk: Long-lived, chained delegations of authority across multi-agent systems eliminate real-time human oversight. A single compromised token or a malicious MCP server can lead to widespread security failure.
  • Urgent Need for New Primitives: Incremental fixes are not enough. The industry needs a new generation of identity and authority protocols designed for a machine-to-machine world, focusing on verifiable access control for AI agents.

Why Are Traditional Identity Systems Failing Agentic AI?

The internet is undergoing a profound shift: we are moving from a world of human-driven interactions to one dominated by autonomous AI agents that act independently, negotiating deals, executing tasks, managing resources, spending money, and maintaining persistent contexts on behalf of users. 

Traditional identity providers and protocols, built for browser-based logins and user-consent flows, were engineered around a core assumption that a human would always be present to review permissions, approve scopes, and revoke access if something felt wrong. 

In the emerging agentic era, however, authority is delegated for hours or days at a time, often chained across dozens of independent agents, with no realistic way for a person to stay in the loop. 

This architectural mismatch is no longer theoretical. It is already producing exploitable gaps, and the protocols being built today to enable agent context and commerce are the first places those cracks are becoming visible.

How Does AI Tool Use Create New Security Risks?

The fundamental problem lies in the new attack surface created by tool use and chained delegations. 

Traditional protocols rely on short-lived access tokens, coarse-grained scopes, and the expectation that a user can spot and revoke suspicious activity. In agentic systems, however, a single delegated capability can be handed off from one agent to another for hours or days, propagating silently across independent services with no human in the loop to notice anomalies. 

A clear early example is the Model Context Protocol (MCP): an AI agent uses MCP to retain state and invoke tools across turns. Its bearer token effectively grants authority to everything the user can do. If that token is stolen, if the MCP server is compromised, or if a prompt injection tricks the agent into misusing its tools, the attacker inherits the full power of the original delegation, with no per-action consent screen and no practical way for the user to interrupt the damage in real time. 

What worked for a 15-minute OAuth session in a browser is inadequate when authority lives for days inside autonomous software. Even in controlled development environments like Claude Code, where developers leverage AI for code writing and execution, the interaction with external tools via MCP introduces significant security vulnerabilities. A developer might use a seemingly harmless tool that has been compromised, creating a supply chain risk that bypasses traditional security reviews.

What Identity Risks Emerge from Agent-to-Agent Commerce?

The challenges grow even more pronounced when agents begin handling commerce and economic interactions with one another. 

In systems using Agent Commerce Protocol (ACP), autonomous agents are expected to negotiate prices, execute payments, transfer assets, and form temporary alliances without waiting for human approval at every step. Here, identity is no longer just about proving “who you are,” but about carrying a precise, verifiable mandate—what exact actions you are allowed to perform, under what conditions, and for how long. 

Traditional identity protocols were never designed for this. 

These new protocols provide coarse access tokens that agents then pass peer-to-peer in untrusted environments, creating risks like mandate spoofing, session smuggling, and rapid privilege escalation across an entire mesh of agents. A single over-scoped or stolen token can be relayed through dozens of intermediaries, turning a narrowly intended delegation into widespread liability, all while central identity providers remain oblivious and unable to enforce boundaries in real time.

Why Can't Existing Security Stacks Solve This Problem?

Even modern improvements to the traditional stack—things like advanced financial API security profiles, token binding, or richer authorization protocols—still operate within a client-server model that assumes a one-way communication channel. They cannot meaningfully address the structural gaps that appear in agentic AI systems. 

Real-time revocation rarely propagates across a dynamic mesh of independent agents. 

Proxy agents inevitably open “confused deputy” vulnerabilities when one piece of software acts on behalf of another. Long chains of delegation become impossible to audit after the fact, and coarse scopes that were tolerable for human sessions turn into open-ended liability when held by autonomous code. 

Investments in centralized identity providers, web application firewalls, and traditional monitoring tools were optimized for credential theft, phishing, and human insider threats—not for logic-level exploits in agent protocols or economic attacks against software principals.

Patching the old foundations with incremental fixes will not close these gaps; the underlying assumptions are simply no longer valid.

What Is The Path Forward for Securing Agentic Systems?

The industry now faces a clear choice. We can keep treating agentic identity as an edge case and hope incremental improvements will suffice, or we can begin investing in purpose-built primitives that are designed from the ground up for a world in which the primary actors are no longer people, but software agents acting with real economic and operational power. 

The longer we delay that shift, the larger and more expensive the inevitable failures will become. The agentic future is arriving quickly; our identity infrastructure needs to arrive with it.

Frequently Asked Questions (FAQs)

What is an autonomous AI agent?

An autonomous AI agent is a software program, often powered by Large Language Models (LLMs), that can act independently and persistently on behalf of a user. Unlike simple bots, these agents can be delegated complex tasks and authority, such as managing resources, executing transactions, or performing code execution, and can operate for hours or days without direct human intervention.

What is the Model Context Protocol (MCP)?

The Model Context Protocol (MCP) is a standard that allows an AI agent to retain state and invoke external tools across multiple turns of a conversation. While it enables persistence, its reliance on long-lived bearer tokens creates a significant security risk if the token is compromised, making MCP server security a critical concern.

What is a “confused deputy” vulnerability?

A confused deputy is a vulnerability where a program is tricked by another into misusing its authority. In the context of AI agents, a proxy agent could be manipulated by a malicious actor into performing actions that it is authorized to do, but which the original user never intended.

What is prompt injection?

Prompt injection is an attack where a malicious user crafts input to an LLM agent to make it ignore its previous instructions or perform an unintended action. This can be used to bypass security controls, expose sensitive data, or trick an agent into executing harmful commands.

Sarah Cecchetti