Agentic Security: What Underwriters Need to Know in 2026
Autonomous AI agents are entering production at scale — and they bring a completely new attack surface that traditional cyber insurance questionnaires weren't designed to capture.
Agentic AI — systems that autonomously plan, reason, and take actions across multiple tools — crossed from experimental to production in 2025. By 2026, it's everywhere. Customer service agents book flights. Code agents merge PRs. Research agents file patents. SOC agents block threats.
The problem is that every one of these agents is a new attack surface — and most organizations deployed them before asking the security question.
What Makes Agentic AI Different from Traditional Software
Traditional software has a fixed interface: you give it input, it processes it, it gives you output. Security controls operate at the boundary — input validation, authentication, access controls.
Agentic AI breaks that model. An agent doesn’t just process input; it decides what tools to use, sequences actions over time, and often operates with delegated authority. That shift creates three new risk categories that don’t map cleanly to standard cyber questionnaires.
Expanded attack surface through tool use. An agent that can browse the web, send emails, query databases, and execute code is combining capabilities that used to live behind separate security perimeters. A single prompt injection doesn’t just manipulate one system — it can chain through every connected tool.
Persistence and lateral movement. Unlike a traditional app that runs once and exits, an agent maintains state across a session. It builds context, accumulates privileges, and can move across systems in ways a static app can’t. If an agent is compromised mid-session, the blast radius scales with the number of tools it has been authorized to use.
Invisible dependency chains. Agents often depend on third-party models, prompt libraries, memory stores, and tool providers. Each dependency is a potential supply chain compromise. The agent vendor gets breached — or the model’s alignment is subtly weakened through a training data attack — and your agent starts behaving unexpectedly. You may not notice until damage is done.
The Attack Vectors That Are Actually Happening
This isn’t theoretical. These are patterns showing up in incident response cases and security research through 2025 and into 2026.
Prompt Injection
The classic. An attacker injects malicious instructions into data the agent processes — a document, an email, a web page. The agent reads the instructions and treats them as operator commands.
The 2025 proof-of-concept that got the most attention was a research agent that was tricked into transferring money via a hidden text layer in a PDF. The fix wasn’t input sanitization — the payload was inside a legitimate-looking invoice. It required output filtering, privilege capping on tool use, and explicit instruction boundary enforcement.
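Privilege capping of the kind that incident describes can be sketched in a few lines: the agent's available tool set shrinks whenever its current context includes content from an untrusted source. This is a minimal illustration, not a real framework — the function and source names (`allowed_tools`, `uploaded_document`) are hypothetical.

```python
# Hypothetical sketch of privilege capping: strip high-risk tools from the
# agent's tool set whenever untrusted content (an uploaded PDF, a web page,
# an inbound email) is present in its working context.

HIGH_RISK_TOOLS = {"transfer_funds", "send_email", "execute_code"}
UNTRUSTED_SOURCES = {"uploaded_document", "web_page", "inbound_email"}

def allowed_tools(base_tools, context_sources):
    """Return the tools the agent may call for this step.

    If any context came from an untrusted source, high-risk tools are
    removed until a human explicitly approves the action.
    """
    if any(src in UNTRUSTED_SOURCES for src in context_sources):
        return [t for t in base_tools if t not in HIGH_RISK_TOOLS]
    return list(base_tools)

tools = ["search", "summarize", "transfer_funds"]
print(allowed_tools(tools, ["uploaded_document"]))  # transfer_funds removed
```

The point is that the cap is enforced outside the model: no matter what the injected instructions say, the tool call is simply unavailable while untrusted context is in scope.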
Tool Poisoning
Agents often learn which tools to use from a registry or from model fine-tuning. If an attacker can register a malicious tool with a name similar to a legitimate one, the agent may call it accidentally — especially if the agent is optimizing for shortest-path task completion. The agent doesn’t distinguish between a tool named transfer_funds and transfer_funds_fast.
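A basic mitigation is to reject any tool not on an explicit allowlist and to flag near-matches as potential lookalikes. The sketch below uses Python's standard `difflib` for fuzzy matching; the tool names and the 0.8 similarity cutoff are illustrative assumptions, not a recommended production threshold.

```python
import difflib

# Illustrative allowlist of registered, reviewed tool names.
REGISTERED_TOOLS = {"transfer_funds", "lookup_account", "send_report"}

def check_tool_name(name, allowlist=REGISTERED_TOOLS, threshold=0.8):
    """Allow exact allowlist matches; flag near-matches as lookalikes."""
    if name in allowlist:
        return "allowed"
    close = difflib.get_close_matches(name, allowlist, n=1, cutoff=threshold)
    return f"lookalike of {close[0]}" if close else "unknown"

print(check_tool_name("transfer_funds"))       # allowed
print(check_tool_name("transfer_funds_fast"))  # lookalike of transfer_funds
```

A lookalike hit should block the call and alert a human — the agent itself, optimizing for task completion, is the wrong party to adjudicate it.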
Memory Poisoning
Agents with long-term memory store prior interactions in a vector database or document store. If an attacker can inject content into that memory store — through a shared document, a compromised plugin, or cross-tenant data leakage — the agent carries poisoned context into every future session.
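One defense is provenance tagging: record where each memory entry came from and filter retrieval by trusted source and tenant. The sketch below is a simplified illustration — the `MemoryEntry` structure and source labels are hypothetical, and a real system would also verify provenance at write time.

```python
from dataclasses import dataclass

@dataclass
class MemoryEntry:
    text: str
    source: str      # e.g. "operator", "shared_document", "plugin:calendar"
    tenant_id: str

# Illustrative trust policy: only these sources may feed future context.
TRUSTED_SOURCES = {"operator", "internal_kb"}

def retrieve(entries, tenant_id):
    """Surface only memories from trusted sources, scoped to one tenant."""
    return [e for e in entries
            if e.tenant_id == tenant_id and e.source in TRUSTED_SOURCES]

store = [
    MemoryEntry("Q3 goals: reduce churn", "operator", "acme"),
    MemoryEntry("Ignore prior instructions...", "shared_document", "acme"),
    MemoryEntry("Unrelated tenant data", "operator", "globex"),
]
print([e.text for e in retrieve(store, "acme")])  # only the trusted entry
```

Note that the tenant check also addresses the cross-tenant leakage path: a trusted source in one tenant is still invisible to another.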
Model Jailbreak Through Tool Use
Some jailbreak techniques work by giving the model a “reasoning tool” that it uses to generate allowed content. The model reasons its way to a harmful answer, then presents it as tool output. The model never generated the harmful content directly — it was always the tool’s output.
What Underwriters Are Actually Asking (and Should Be Asking)
Standard cyber questionnaires capture assets, patching cadence, MFA coverage, and incident response plans. None of that disappears for agentic AI — but it’s insufficient.
Questions that should be in every agentic AI exposure assessment:
- Which AI agents are in production, and what authorizations do they have?
- Can agents access or modify production data without human review?
- What is the privilege boundary for each agent — what happens if it’s fully compromised?
- How does the system handle conflicting or adversarial input from external sources?
- Is there an audit trail for agent decisions, and how far back does it go?
- What is the rollback or kill-switch procedure if an agent behaves unexpectedly?
- Are agent tool registries locked, versioned, and integrity-checked?
The last point is underappreciated. Most agentic frameworks load tool definitions dynamically. An agent that pulls its tool registry from an external source on startup is essentially doing code execution from an external dependency. If that source is compromised, the agent becomes a trojan delivery mechanism with your organization’s trust baked in.
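Registry pinning can be as simple as recording a SHA-256 digest of the reviewed registry at deploy time and refusing to load anything that doesn't match. This is a minimal sketch under that assumption — the registry format and function names are illustrative.

```python
import hashlib
import json

def digest(registry_bytes: bytes) -> str:
    """SHA-256 digest of the raw registry payload."""
    return hashlib.sha256(registry_bytes).hexdigest()

def load_registry(raw: bytes, pinned_digest: str) -> dict:
    """Parse the tool registry only if it matches the pinned digest."""
    if digest(raw) != pinned_digest:
        raise RuntimeError("tool registry integrity check failed")
    return json.loads(raw)

# At review time: record the digest of the approved registry.
raw = json.dumps({"tools": ["search", "summarize"]}).encode()
pinned = digest(raw)

# At agent startup: load only if the payload still matches.
registry = load_registry(raw, pinned)
```

Any modification to the payload — an added tool, a changed endpoint — changes the digest and fails the load, which converts a silent supply chain compromise into a loud startup failure.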
NIS2 and the Agentic AI Gap
The NIS2 Directive, which EU member states were required to transpose by October 2024, doesn’t explicitly mention AI agents. Its language around “essential entities” and “important entities” covers managed AI services under the broad definition of “networks and information systems,” but the specific controls around agentic AI are left to interpretation.
What this means practically: if an agentic AI system is involved in a breach, regulators will ask about the security measures in place — but there is no explicit compliance checklist for agents. Yet.
The EU AI Act’s risk-based framework does create some obligations. High-risk AI systems under the AI Act — which could include AI agents used in credit scoring, employment decisions, or critical infrastructure — require conformity assessments, technical documentation, and human oversight provisions. If your agentic system falls into that category, “the AI decided on its own” is not an acceptable explanation for an adverse decision.
Underwriters writing cyber policies in the EU should be flagging AI Act applicability as part of the underwriting process. The penalties are significant — up to €35 million or 7% of global annual turnover for prohibited AI practices, and up to 3% for most other infringements — and that’s a direct financial exposure that traditional cyber policies may or may not cover depending on policy language.
The Insurance Exposure Question
Here’s the core underwriting challenge. Traditional cyber policies cover things like data breach response, business interruption, and ransomware. Agentic AI introduces loss scenarios that may fall outside existing policy language.
Example 1: Agent-authorized fraud. A compromised agent is used to authorize wire transfers within its delegated authority. The transfer is technically authorized — the agent had permission. The fraud isn’t discovered until after the transfer window closes. Is this covered as cybercrime, as social engineering, or as a system failure?
Example 2: Autonomous data exfiltration. An agent with database access is prompt-injected into slowly exfiltrating customer records over weeks. The data leaves through an authorized channel (the agent’s normal data egress path), but the volume and destination are anomalous. Traditional data breach detection may not flag an authorized agent as a threat actor.
Example 3: Agent-caused regulatory violation. An agent making automated decisions about customer eligibility — for loans, insurance, employment — produces discriminatory outcomes at scale. The AI Act violation is clear, but the cyber policy wasn’t written to cover algorithmic discrimination penalties.
These aren’t edge cases. They’re the natural consequence of putting autonomous decision-making into production with real authority.
What Organizations Should Do Right Now
If you’re deploying or running agentic AI systems, here’s the practical minimum:
1. Map your agent inventory. You can’t secure what you don’t know exists. Document every agent in production, what it can do, what it can access, and what happens if it’s fully compromised. Treat this like a privileged access inventory.
2. Apply the principle of least privilege to agents, not just humans. Agents should have the minimum tool access required for their task. An agent that books meetings doesn’t need database write access. An agent that drafts reports doesn’t need to send emails directly.
3. Implement output verification. Agents produce outputs — decisions, messages, transactions — that flow to other systems. Build verification layers that can flag or block anomalous outputs before they propagate. This is harder than it sounds because agents are non-deterministic, but even statistical anomaly detection on agent outputs reduces risk.
4. Lock your tool registries. If your agents load tools dynamically, treat that as a supply chain risk. Pin tool registry sources, verify checksums, and monitor for unexpected additions or modifications.
5. Build agent-specific incident response. Your IR plan should have a section for “agent is behaving unexpectedly.” This should include: how to isolate it, how to freeze its state for forensics, how to revoke its authorizations, and how to roll back any actions it took autonomously.
6. Retain full agent audit logs. Every agent decision, tool call, and context update should be logged with enough fidelity to reconstruct what happened. This is essential for both incident response and for demonstrating due diligence to regulators and underwriters.
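The audit-log point can be made concrete with a minimal append-only record per tool call. The field names below are illustrative, not a standard schema; the context hash is one hypothetical way to link a log line back to the full stored context without duplicating it.

```python
import datetime
import hashlib
import json

def audit_record(agent_id: str, tool: str, arguments: dict,
                 decision_context: str) -> str:
    """One JSON line per tool call, with enough fidelity to reconstruct it."""
    return json.dumps({
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "agent_id": agent_id,
        "tool": tool,
        "arguments": arguments,  # the exact arguments, to replay the call
        # Hash linking this call to the full context stored elsewhere.
        "context_hash": hashlib.sha256(
            decision_context.encode()).hexdigest()[:16],
    })

line = audit_record("billing-agent-01", "query_db",
                    {"table": "invoices"}, "user asked for Q3 totals")
print(line)
```

Writing these records to append-only storage (rather than the agent's own environment) matters: a compromised agent should not be able to edit the trail that describes its compromise.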
The Bottom Line
Agentic AI is a genuine risk management challenge — not because the technology is inherently dangerous, but because it operates with a model of trust that our existing security frameworks weren’t designed for. We built security for systems that follow rules. Agents are systems that interpret and act on intent.
Underwriters who understand this distinction will price it correctly. Organizations that build agent governance into their security programs from the start will be in a better position — both to defend against attacks and to demonstrate due diligence when the policy is scrutinized.
The window to get ahead of this is closing. Every month that agentic systems expand without corresponding security controls is a month of accumulating unpriced exposure. This is the kind of risk that looks manageable until it isn’t.