The NIS2 + AI Coverage Gap: When Your Cyber Policy Won't Cover the Incident NIS2 Requires You to Report
NIS2 mandates AI incident reporting for hundreds of thousands of EU entities. But most cyber insurance policies are silent on AI, exclude it outright, or sublimit it so tightly that insureds end up paying for AI incident response out of pocket — even though NIS2 required them to report the incident in the first place.
NIS2 Article 21 requires every essential and important entity in the EU to implement cybersecurity risk-management measures across the network and information systems it operates — a scope that extends to the AI systems those entities run. NIS2 Article 23 mandates that significant incidents — including those arising from AI system failures — be reported to the national authority via an early warning within 24 hours, an incident notification within 72 hours, and a final report within one month.
Here is what most brokers and their clients are only discovering when a claim lands: that mandatory AI incident reporting obligation sits inside a cyber insurance landscape where AI-specific coverage is frequently absent, ambiguous, or explicitly excluded.
This is the NIS2 + AI coverage gap.
The Two-Part Problem
Part 1: NIS2 Creates a New Incident Category
The NIS2 Directive, as implemented across member states in 2024-2026, creates a formal incident category that didn’t exist before: AI system failures with reportable impact.
A non-exhaustive list of what NIS2-triggering AI incidents look like in practice:
- An LLM-based decision system in a bank produces discriminatory lending outcomes at scale — reportable under NIS2 as a significant incident affecting service provision
- An autonomous AI agent in a logistics company is compromised and used to exfiltrate customer data — NIS2-reportable under both the AI failure and the resulting personal data breach
- A healthcare AI diagnostic tool malfunction leads to incorrect patient triage decisions — reportable under NIS2 and potentially under the AI Act simultaneously
- A manufacturing company’s AI-controlled production system is disrupted by a ransomware attack — incident response costs multiply as both a traditional cyber event and an AI system failure
These are not hypothetical scenarios. The European Union Agency for Cybersecurity (ENISA) published guidance in late 2025 on AI incident reporting under NIS2, noting that AI system failures were already appearing in incident reports from early adopter sectors including financial services, healthcare, and critical infrastructure operators.
Part 2: Cyber Policies Were Written Before AI Was Material
The standard cyber insurance policy wording was designed for a threat landscape dominated by ransomware, business email compromise, and data breach. AI system failures — particularly failures of AI systems to perform their intended function rather than being breached — sit uncomfortably in existing policy language.
Three patterns emerge from policy review:
Pattern 1: Silent on AI Entirely
Many policies contain no specific reference to AI systems at all. The policy covers “computer systems,” “software,” “networks.” Does an LLM inference endpoint constitute a “computer system” under the policy? Does an AI agent that autonomously executes transactions fall under “software”? Ambiguity here is a broker’s nightmare and an insurer’s optionality.
Pattern 2: AI-Specific Exclusions
Some policies explicitly exclude AI-related losses. Common exclusion language includes:
- Exclusion of losses arising from “the use, deployment, or reliance on artificial intelligence systems”
- Exclusion of “automated decision-making systems” where the automated nature of the decision is the proximate cause of loss
- Exclusion of AI model failures where the model produces incorrect outputs (rather than being compromised)
Pattern 3: Sublimits That Don’t Match Real-World AI Incident Costs
Leading market policies are beginning to address AI risk, but often with sublimits of €100,000-500,000 for AI-specific losses. An AI incident at a mid-market financial institution can easily generate €2-5M in incident response, regulatory defense, and remediation costs — leaving the sublimit covering only a fraction of the actual loss.
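To make the mismatch concrete, here is a minimal sketch of the uninsured exposure a sublimit leaves behind. The figures are illustrative assumptions for a mid-market incident, not market data:

```python
# Illustrative sketch: how far a typical AI sublimit falls short of a
# realistic AI incident loss. Figures are assumptions, not market data.

def uncovered_exposure(total_loss_eur: float, ai_sublimit_eur: float) -> float:
    """Portion of an AI incident loss the insured pays out of pocket,
    assuming the AI sublimit is the only responding coverage."""
    return max(total_loss_eur - ai_sublimit_eur, 0.0)

# Mid-market AI incident: €3M total loss against a €250k AI sublimit.
gap = uncovered_exposure(3_000_000, 250_000)
print(f"Uninsured exposure: €{gap:,.0f}")  # Uninsured exposure: €2,750,000
```

Even at the top of the common sublimit range (€500k), the insured retains the large majority of a €2-5M loss.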
The Coverage Gap in Practice: Three Scenarios
Scenario 1: The Algorithmic Bias Incident
A regional European bank uses an AI system for credit risk scoring. The model, trained on historical data, begins producing systematically higher risk scores for applicants from a particular demographic region. The bank’s AI governance team discovers the bias during a routine audit. Under NIS2 Article 21, whose risk-management measures extend to the AI systems the bank operates, this represents a security failure requiring assessment and mitigation.
The bank’s cyber policy covers “data breach” and “regulatory defense costs.” Does it cover:
- The cost of the internal AI governance investigation?
- External AI ethics consultants brought in to assess the model’s harm?
- Regulatory defense when the national supervisory authority opens an investigation into the bank’s AI system?
- The costs of model retraining and validation before redeployment?
If the policy is silent on AI, these costs may not clearly fall within any covered category. The bias incident was not a “cyber attack” in the traditional sense — no threat actor was involved. It was a system failure triggered by the AI system’s design. Many policies’ cyber incident definitions require a “hostile external act” or similar language that an algorithmic bias incident may not satisfy.
Scenario 2: The AI Agent Supply Chain Attack
A manufacturing company’s AI-powered supply chain optimization agent is compromised through a vulnerability in the AI vendor’s API. The attacker uses the agent’s elevated system privileges to move laterally into the manufacturing execution system, resulting in production downtime and data exfiltration.
This scenario involves both a traditional cyber intrusion and an AI-specific attack vector. The cyber policy’s standard coverage should respond to the lateral movement and data exfiltration. But what about:
- The cost of investigating the AI agent’s behavior to determine the full scope of the compromise?
- The cost of rebuilding trust in the AI system before resuming operations?
- Business interruption specifically attributable to the AI system’s downtime, where the AI system itself (not just the connected networks) is the source of the interruption?
AI-specific coverage riders, where they exist, are often priced and structured as add-ons that brokers don’t think to place — or that clients decline due to cost — until an incident makes the gap visible.
Scenario 3: The AI Act + NIS2 Dual Reporting Event
A cloud-based AI service provider experiences a model performance degradation that cascades into service unavailability for its B2B customers. Under NIS2, the provider must report the incident to its national authority. Under the EU AI Act, if the system qualifies as high-risk under Annex III, separate AI Act reporting obligations may also apply.
The provider’s cyber policy covers “business interruption” and “third-party liability.” But the AI Act reporting costs — internal legal counsel, AI governance specialists, documentation of the system’s conformity — are not clearly covered under standard policy language. The NIS2 reporting obligation is now mandatory, but the policy doesn’t fund the compliance activity.
What This Means for Broker Conversations
The NIS2 + AI coverage gap is not a theoretical risk. It is a present reality that brokers should be actively addressing with every client who uses AI systems in-scope of NIS2 — which, by 2026, is the vast majority of medium and large EU enterprises.
Questions Every Broker Should Ask
Before binding or renewing cyber coverage for NIS2 in-scope clients, brokers should obtain clear answers to these questions:
- Does the policy definition of “cyber incident” include AI system failures that do not involve a threat actor? Some policies require a “security failure caused by an external party” or similar hostile-act language. An AI system that fails due to a software bug or data quality issue may not trigger coverage.
- Are AI-specific incident response costs covered? AI incident response requires specialized expertise — AI forensics, model debugging, AI governance consultants. Standard IR cost coverage may cap out before these specialist costs are reached.
- What is the AI sublimit, if any? If the policy has an AI-specific sublimit, does it reflect the realistic cost of an AI incident at the client’s scale?
- Does the policy cover NIS2 and AI Act regulatory defense costs? Regulatory investigations under NIS2 and the AI Act involve legal costs, AI expert witness costs, and documentation requirements that standard policy language may not clearly cover.
- Is there a coverage gap between the AI system’s downtime and business interruption? If the AI system itself is the source of revenue generation (e.g., an AI-powered trading system, an AI customer service platform), standard BI coverage may not respond to losses caused by the AI system’s internal failure.
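The five questions above can be expressed as a simple gap check over policy attributes. The attribute names here are hypothetical illustrations for a broker's internal worksheet, not standard policy fields:

```python
# Illustrative sketch: the broker questions as a coverage-gap check.
# All dictionary keys are hypothetical attribute names, not policy terms.

def coverage_gaps(policy: dict) -> list[str]:
    """Return the list of open coverage questions for a NIS2 in-scope client."""
    gaps = []
    if not policy.get("covers_non_hostile_ai_failure"):
        gaps.append("AI failures without a threat actor may not trigger coverage")
    if not policy.get("covers_ai_incident_response"):
        gaps.append("Specialist AI forensics/governance costs may be uncovered")
    if policy.get("ai_sublimit_eur", 0) < policy.get("est_ai_incident_cost_eur", 0):
        gaps.append("AI sublimit below realistic incident cost")
    if not policy.get("covers_nis2_ai_act_defense"):
        gaps.append("NIS2/AI Act regulatory defense costs unclear")
    if not policy.get("bi_covers_ai_internal_failure"):
        gaps.append("BI may not respond to the AI system's own failure")
    return gaps

# Example: a policy with no AI language and a low sublimit.
policy = {"ai_sublimit_eur": 250_000, "est_ai_incident_cost_eur": 3_000_000}
for gap in coverage_gaps(policy):
    print("GAP:", gap)
```

A worksheet like this does not replace a policy wording review, but it forces explicit answers where the wording is silent.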
The Product Gap and How Insurers Are Responding
A small number of insurers have begun offering AI-specific coverage endorsements or standalone AI incident response policies. These products typically cover:
- AI forensics and root cause analysis
- AI model restoration or replacement
- Regulatory defense costs under NIS2 and AI Act
- Crisis communications related to AI incidents
Brokers should note that the market for these products is nascent and pricing methodology is not yet standardized. The same adverse selection problem that plagued early cyber insurance is appearing here: insurers who haven’t built AI risk expertise tend to either exclude AI entirely or price it so high that clients decline coverage.
The Practical Implication: Document AI Risk Before an Incident
The brokers who will best serve their clients in 2026 and beyond are those who begin the AI coverage conversation now — before an incident occurs. The documentation produced for a client AI risk assessment does double duty: it supports NIS2 Article 21 compliance demonstration, and it provides the underwriting information needed to place appropriate AI coverage.
A structured AI risk assessment for insurance purposes should document:
- What AI systems are in scope of NIS2 (essential/important entity operations)
- What the AI systems do, what data they access, and what decisions they make autonomously
- What the incident response plan looks like for an AI system failure
- What AI-specific controls are in place (model monitoring, bias testing, human oversight)
- What the realistic financial exposure looks like for an AI incident at this organization
This documentation becomes the basis for underwriting AI risk coverage and for demonstrating NIS2 compliance posture to insurers — a dual-purpose output that supports both the regulatory obligation and the insurance placement.
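The assessment items above could be captured as a simple structured record per AI system. The field names and example values below are hypothetical illustrations, not a regulatory or underwriting standard:

```python
# Illustrative sketch: one structured record per AI system in scope.
# Field names and values are assumptions, not a standard schema.
from dataclasses import dataclass

@dataclass
class AISystemRiskRecord:
    system_name: str
    nis2_in_scope: bool              # tied to essential/important entity operations
    purpose: str                     # what the system does
    data_accessed: list[str]         # categories of data the system touches
    autonomous_decisions: list[str]  # decisions made without human sign-off
    incident_response_plan: str      # reference to the AI-specific IR runbook
    controls: list[str]              # e.g. model monitoring, bias testing, oversight
    est_incident_cost_eur: float     # realistic financial exposure for an incident

record = AISystemRiskRecord(
    system_name="credit-risk-scoring-v2",
    nis2_in_scope=True,
    purpose="Scores retail loan applications",
    data_accessed=["applicant financials", "credit bureau data"],
    autonomous_decisions=["auto-decline below score threshold"],
    incident_response_plan="runbooks/ai-scoring-ir.md",
    controls=["drift monitoring", "quarterly bias audit", "human review of declines"],
    est_incident_cost_eur=3_000_000,
)
```

One record per system gives the broker a submission-ready inventory and gives the client the Article 21 evidence trail in the same artifact.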
Bottom Line
NIS2 creates a mandatory incident reporting obligation for AI system failures. Cyber insurance policies, largely written before AI was a material operational risk for most enterprises, often don’t clearly fund that compliance obligation. The gap between regulatory requirement and insurance coverage is real, growing, and only visible in the worst moment — after an incident has already occurred.
Brokers who identify this gap now, place appropriate AI-specific coverage, and help clients document their AI risk posture will differentiate themselves in a market where AI risk is rapidly becoming the defining cyber insurance question of the next three years.