How are Attackers Using AI and How Can You Stop Them?

Written by
Sarah Cecchetti
Published on
December 4, 2025

Key Takeaways

  • AI Supercharges Cyberattacks: Attackers now use AI to create highly convincing phishing attacks and to find software vulnerabilities automatically.
  • Old Security Is Obsolete: Relying on employees to spot phishing is no longer effective against AI-generated fakes. A continuous, dynamic security model that doesn’t rely on human judgment is now essential.
  • New Threats from AI Agents: AI agents and the Model Context Protocol (MCP) introduce new risks, including data exfiltration and agent hijacking.
  • Modern Defenses Are Required: To combat these threats, organizations must adopt device-bound cryptography, ensure complete auditability of AI actions, and build “unphishable” systems.

AI is transforming both workforce productivity and the threat landscape. In a recent AI Chat Podcast, expert Sarah Cecchetti from Beyond Identity explained how attackers are using AI and what organizations must do to protect themselves. This post summarizes the key insights from that discussion.

How is AI Changing the Landscape of Cyberattacks?

Attackers are now using Large Language Models (LLMs) to create “pixel-perfect” phishing websites and emails that are nearly impossible to distinguish from legitimate communications. This has led to a significant increase in successful spear-phishing campaigns. AI is also being used to automatically scan for software vulnerabilities, allowing attackers to find and exploit them more quickly than ever before.

“AI has enabled attackers to get to know a lot of things about a specific person and then create entire applications that deliver pixel perfect lures and malware,” warns Sarah Cecchetti, a leading expert in AI security and identity standards. 

Why Are Traditional Security Methods No Longer Enough?

For years, the primary defense against phishing has been to train employees to be suspicious of unexpected emails. However, with AI generating flawless fakes, this approach is no longer reliable. 

The concept of deterministic trust that doesn’t rely on human judgment is now more critical than ever. This model assumes that no user or device should be trusted by default and requires strict verification for every access request. This is a necessary shift from older models that focused only on protecting the network perimeter.

“The AI is too good now. It’s completely indistinguishable from fakes and from frauds,” Cecchetti states. “Relying on human judgment is no longer sufficient.”

What New Risks Do AI Agents and MCPs Introduce?

AI agents and Model Context Protocol (MCP) servers create new and complex security challenges. Malicious MCP servers can be used to steal data, abuse access to tools, or even take control of AI agents.

This has given rise to a new type of attack known as “MCP phishing,” where fake servers trick users into connecting their AI agents, creating a backdoor for attackers. Traditional security measures like OAuth and API keys are not sufficient to address these new threats.
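One practical mitigation is to let agents connect only to known, pinned MCP servers rather than to any server a user (or a lure) supplies. The sketch below is a minimal illustration of that idea; the host names, fingerprints, and function names are assumptions invented for this example, not part of any MCP specification or product.

```python
import hashlib

# Hypothetical allowlist of trusted MCP servers, pinned by the SHA-256
# fingerprint of each server's certificate. All values are illustrative.
TRUSTED_MCP_SERVERS = {
    "mcp.internal.example.com":
        "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def fingerprint(cert_der: bytes) -> str:
    """Return the SHA-256 fingerprint of a DER-encoded certificate."""
    return hashlib.sha256(cert_der).hexdigest()

def may_connect(host: str, cert_der: bytes) -> bool:
    """Allow an agent to connect only to a pinned, known MCP server."""
    expected = TRUSTED_MCP_SERVERS.get(host)
    return expected is not None and fingerprint(cert_der) == expected
```

A look-alike host such as `mcp.internal.example.co` is rejected outright, which is exactly the class of lure that MCP phishing relies on; a pinned allowlist removes the human-judgment step of deciding whether a server "looks" legitimate.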

What Strategies Can Organizations Use to Defend Against AI-Powered Threats?

To combat the growing threat of AI-powered cyberattacks, organizations must adopt modern, layered security controls. The following table outlines the key defense strategies that businesses should implement.

Table 1. Key defense strategies

  • Device-Bound Cryptography: Cryptographically verifying every device makes it far harder for an attacker to intercept communications or spoof a legitimate user.
  • Comprehensive Auditability: A complete audit trail is essential for tracking the actions of AI agents, including which user on which device initiated an action and what the outcome was.
  • Intelligent Security Guardrails: AI agents must be governed by rules that prevent them from taking dangerous actions, such as accessing sensitive data or deleting important files.
  • Unphishable Systems: The ultimate goal is to create systems that are inherently resistant to phishing by moving away from passwords and other secrets that can be stolen.

The future of security is not about training people to be better at spotting fakes; it is about building systems that are immune to deception.

Frequently Asked Questions (FAQ)

What is an MCP?

The Model Context Protocol (MCP) is an open standard that allows AI agents to interact with external tools, data sources, and services. While powerful, MCP servers also introduce new security risks.

How can I protect my organization from AI-powered phishing?

The most effective way to protect against AI-powered phishing is to implement “unphishable” systems that do not rely on passwords or other secrets that can be stolen. This includes using technologies like device-bound cryptography and biometrics.
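The core idea behind unphishable authentication is a challenge-response flow: the secret never leaves the device, and each login answers a fresh, one-time challenge, so anything a phishing page captures is useless later. The sketch below illustrates that flow using HMAC as a stand-in so it runs on the Python standard library alone; real unphishable systems (such as WebAuthn passkeys) use asymmetric key pairs held in secure hardware, which this example does not implement.

```python
import hmac, hashlib, secrets

# Stand-in for a key held in a device's secure hardware. Real passkey
# systems use asymmetric keys; HMAC is used here only so the sketch
# is runnable with the standard library.
DEVICE_KEY = secrets.token_bytes(32)

def sign_challenge(challenge: bytes) -> bytes:
    """The device answers a fresh challenge; the key itself is never sent."""
    return hmac.new(DEVICE_KEY, challenge, hashlib.sha256).digest()

def verify(challenge: bytes, response: bytes) -> bool:
    """The verifier checks the response against the expected value."""
    expected = hmac.new(DEVICE_KEY, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)
```

Because each login uses a new random challenge, a response harvested by even a pixel-perfect fake site cannot be replayed against the real one: there is no static password for the attacker to steal.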

This blog post is based on an interview with Sarah Cecchetti from the AI Chat Podcast. You can listen to the full episode here.
