AI in Cyber Underwriting: Attacker, Defender, and Underwriter Perspectives

Exploring how AI transforms cyber risk from three angles: how threat actors weaponize it, how security teams deploy it, and how underwriters must adapt their approach.

Artificial intelligence is reshaping cyber risk at every level. For underwriters trying to assess and price this evolving exposure, understanding AI’s impact requires looking through three distinct lenses: the attacker weaponizing it, the defender deploying it, and our own practice as risk evaluators.

Each perspective tells a different story. Together, they reveal where the real risks lie—and what we should do about them.

The Attacker Perspective: AI as Force Multiplier

Threat actors have been quick to integrate AI into their operations. The results are measurable and concerning for anyone responsible for cyber risk transfer.

Deepfake-enabled business email compromise (BEC) represents one of the most financially damaging applications. Attackers now generate convincing voice and video deepfakes to authorize fraudulent wire transfers. A Hong Kong-based finance worker recently transferred $25 million after a video call with what appeared to be the CFO and other colleagues—all AI-generated. The technology requires minimal technical skill, and open-source tools have democratized access to capabilities once limited to state actors.

Automated vulnerability discovery gives attackers another edge. AI-powered tools scan codebases and identify exploitable vulnerabilities faster than traditional methods. Machine learning models trained on historical vulnerability data predict which code patterns are likely to contain security flaws. While defenders use similar tools, the asymmetry favors attackers: they only need to find one open door, while defenders must secure every entry point.

AI-generated phishing at scale has transformed social engineering from a craft to an industrial operation. Large language models craft personalized phishing emails in multiple languages, referencing real relationships, recent transactions, and current events. These messages bypass traditional filters and fool even trained employees at higher rates than human-written equivalents. Attackers can now conduct reconnaissance at scale, identifying high-value targets and crafting bespoke lures based on digital footprints.

Malware evolution through AI enables polymorphic code that changes its signature with each infection, evading signature-based detection. The pattern is clear: attackers use AI to scale operations, reduce costs, increase success rates, and compress the time between reconnaissance and impact.

The Defender Perspective: AI as Detection and Response Tool

Security teams are responding with their own AI implementations, though adoption curves vary significantly by organization maturity and resource availability.

Automated threat detection uses machine learning to identify anomalies in network traffic, endpoint behavior, and access patterns. Modern systems detect novel attack techniques without prior signatures—a critical capability when attackers constantly develop new methods. User and entity behavior analytics (UEBA) establishes baselines for normal activity and flags deviations indicating compromised credentials or insider threats.
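The baseline-and-deviation idea behind UEBA can be illustrated with a toy z-score check. This is a minimal sketch, not a production detector; the feature (daily download volume), the sample data, and the threshold are all invented for illustration:

```python
from statistics import mean, stdev

def is_anomalous(history: list[float], observed: float, z_threshold: float = 3.0) -> bool:
    """Flag an observation that deviates strongly from a user's baseline."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return observed != mu
    return abs(observed - mu) / sigma > z_threshold

# A user who normally downloads 40-60 MB/day suddenly pulls 900 MB.
baseline = [42.0, 55.0, 48.0, 51.0, 46.0, 58.0, 44.0]
print(is_anomalous(baseline, 900.0))
```

Real UEBA systems model many features jointly and adapt baselines over time, but the core mechanism is the same: learn normal, then score distance from it.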

Incident response automation accelerates containment. AI systems can isolate compromised endpoints, revoke credentials, and block malicious IPs within seconds of detection. Orchestration platforms suggest playbooks based on attack patterns and execute approved response actions automatically. This speed matters: the difference between a contained incident and a full breach often comes down to response time measured in minutes.
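The playbook-driven response described above can be sketched as a simple dispatcher. The detection types, action names, and fallback behavior here are hypothetical, not taken from any specific orchestration product:

```python
# Hypothetical mapping from detection type to ordered containment actions.
PLAYBOOKS = {
    "credential_compromise": ["revoke_credentials", "force_password_reset"],
    "malware_detected": ["isolate_endpoint", "block_ioc_ips"],
}

def respond(detection: dict) -> list[str]:
    """Return the approved containment actions for a detection type."""
    actions = PLAYBOOKS.get(detection["type"])
    if actions is None:
        return ["escalate_to_analyst"]  # unknown pattern -> human review
    return actions

print(respond({"type": "malware_detected"}))
```

Note the design choice: only known patterns trigger automatic action, while anything unrecognized falls back to a human analyst, which is how teams keep automation from amplifying false positives.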

Predictive security analytics attempts to forecast attacks before they occur. By analyzing threat intelligence feeds, dark web activity, and organizational vulnerabilities, these systems prioritize defensive investments and alert teams to emerging risks. Some advanced security operations centers now use AI to generate daily threat briefings tailored to their specific technology stack.

The challenge for defenders is integration and trust. Building cohesive AI-driven security operations requires significant investment, expertise, and organizational change management. Security teams must validate that AI detection systems don’t generate overwhelming false positives that desensitize analysts. Many organizations remain in early stages of this transformation, creating coverage gaps underwriters need to understand.

The Underwriter Perspective: AI as Risk Assessment Imperative

For underwriters, AI presents both a coverage challenge and an assessment opportunity that demands immediate attention.

Risk assessment automation should be our immediate priority. Traditional underwriting relies on questionnaires completed months before policy inception—static snapshots that become stale quickly. AI can continuously monitor public data sources including security ratings, breach disclosures, dark web mentions, and certificate transparency logs to build dynamic risk profiles reflecting current conditions rather than historical self-assessments.
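The shift from static questionnaires to dynamic profiles can be pictured as a scoring function over continuously refreshed signals. This is a deliberately simplified sketch: the signal names, weights, and 0-10 scale are all illustrative assumptions, not a real scoring model:

```python
# Hypothetical weights for public risk signals; positive weights raise risk.
WEIGHTS = {
    "security_rating": -0.5,     # a higher external rating lowers risk
    "recent_breaches": 2.0,      # count of disclosed breaches
    "dark_web_mentions": 1.0,    # credential dumps, chatter, etc.
    "expired_certificates": 0.5, # hygiene signal from CT logs
}

def risk_score(signals: dict[str, float]) -> float:
    """Combine current external signals into a 0-10 dynamic risk score."""
    base = 5.0  # neutral starting point
    score = base + sum(WEIGHTS[k] * v for k, v in signals.items() if k in WEIGHTS)
    return max(0.0, min(10.0, score))

print(risk_score({"security_rating": 8.0, "recent_breaches": 0,
                  "dark_web_mentions": 2, "expired_certificates": 1}))
```

Because the inputs refresh daily rather than annually, the same function recomputed on new signals gives the "current conditions" view the paragraph describes.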

Claims prediction models help us understand portfolio concentration and expected loss costs with greater precision. By analyzing historical claims patterns alongside organizational characteristics and external threat intelligence, we can identify which risk factors actually correlate with losses rather than relying on industry assumptions.
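At its simplest, "which risk factors actually correlate with losses" means comparing claim frequency across cohorts. The toy portfolio and the `no_mfa` factor below are invented to show the mechanic, not real claims data:

```python
def loss_frequency(policies: list[dict], factor: str) -> tuple[float, float]:
    """Claim frequency among policies with vs. without a given risk factor."""
    with_f = [p for p in policies if p[factor]]
    without_f = [p for p in policies if not p[factor]]
    freq = lambda group: sum(p["had_claim"] for p in group) / len(group)
    return freq(with_f), freq(without_f)

# Hypothetical book: does missing MFA actually show up in the claims record?
portfolio = [
    {"no_mfa": True,  "had_claim": True},
    {"no_mfa": True,  "had_claim": True},
    {"no_mfa": True,  "had_claim": False},
    {"no_mfa": False, "had_claim": True},
    {"no_mfa": False, "had_claim": False},
    {"no_mfa": False, "had_claim": False},
    {"no_mfa": False, "had_claim": False},
]
print(loss_frequency(portfolio, "no_mfa"))
```

Production claims models use far richer features and proper statistical controls, but the underwriting question is the same: does the cohort with the factor lose more often than the cohort without it?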

Portfolio optimization uses AI to balance growth and profitability across market segments. Understanding which industries, company sizes, and security maturity levels generate appropriate returns enables more precise capacity allocation. Real-time monitoring of portfolio concentrations allows underwriters to adjust appetite before catastrophic accumulations develop.

But implementation requires caution. Models trained on biased historical data perpetuate those biases, potentially leading to unfair discrimination. Over-reliance on automated signals without human judgment misses critical context. And regulatory scrutiny of algorithmic decision-making in insurance is increasing, with regulators demanding explainability for AI-driven pricing.

The Asymmetry Problem

Here’s the uncomfortable truth that underwriters must confront: attackers currently benefit more from AI than defenders or underwriters.

Attackers face fewer constraints. They operate across borders with minimal regulatory oversight, sharing tools freely through underground communities without procurement processes or compliance reviews. Success is measured purely in compromised systems and extracted value. Failure carries minimal consequence.

Defenders operate within organizational boundaries, compliance requirements, budget limitations, and change management processes. Security teams must justify investments to finance departments, manage technology transitions without disrupting operations, and maintain service availability while improving security posture.

Underwriters face the slowest cycle. Rate filings, underwriting guidelines, coverage forms, and claims handling procedures change quarterly at best. Many policy wordings have remained essentially unchanged for years. Meanwhile, threat landscapes shift weekly.

This asymmetry creates a coverage gap. The risks we’re pricing today may not reflect the threats generating tomorrow’s claims. Policy language written before AI-enabled attacks became prevalent may create ambiguity about coverage for AI-related losses.

What Underwriters Should Do Now

Closing this gap requires immediate action in three specific areas.

Ask better questions about AI security. Our questionnaires need to evolve beyond generic inquiries about “AI security measures.” Specific, actionable questions matter:

  • Do you have written policies governing employee use of generative AI tools?
  • How do you verify voice and video communications for high-value transactions?
  • What controls protect AI training data from poisoning or unauthorized access?
  • Have you assessed security risks in your AI supply chain?
  • Do you maintain an inventory of AI systems and their data access privileges?

Price AI-related risks explicitly. Organizations using AI systems face distinct exposures that standard cyber policies may not clearly address: model theft, training data poisoning, adversarial attacks, and hallucination-driven errors. We need to understand whether we’re covering these perils, excluding them, or pricing them appropriately. Silence in policy language creates coverage disputes when claims arrive.

Understand AI supply chain risks. Most organizations use AI through third-party services rather than custom-built models. Each provider introduces concentration risk across the portfolio. If a widely-used AI service experiences a security incident, how many insureds are affected simultaneously? This systemic exposure requires portfolio-level monitoring and potentially aggregate limits for specific vendor dependencies.
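The portfolio-level monitoring described above reduces to a concentration check: what fraction of the book depends on each vendor? The insured records, vendor names, and 25% threshold here are hypothetical:

```python
from collections import Counter

def vendor_concentrations(insureds: list[dict], limit: float = 0.25) -> dict[str, float]:
    """Return AI vendors whose share of the portfolio exceeds the limit."""
    counts = Counter(v for i in insureds for v in i["ai_vendors"])
    total = len(insureds)
    return {v: n / total for v, n in counts.items() if n / total > limit}

# Hypothetical book of four insureds and their third-party AI providers.
book = [
    {"name": "A", "ai_vendors": ["VendorX", "VendorY"]},
    {"name": "B", "ai_vendors": ["VendorX"]},
    {"name": "C", "ai_vendors": ["VendorX"]},
    {"name": "D", "ai_vendors": ["VendorZ"]},
]
print(vendor_concentrations(book))
```

If a single provider shows up in most of the book, one incident at that vendor becomes an aggregation event, which is exactly the case for per-vendor aggregate limits.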

The Path Forward

AI in cyber underwriting is not a future consideration—it is the current reality. Every policy we write, every risk we assess, every claim we handle now occurs in an environment transformed by artificial intelligence capabilities on all sides.

The underwriters who thrive will be those who understand all three perspectives: how attackers exploit AI’s capabilities for profit and disruption, how defenders deploy their own AI countermeasures with varying effectiveness, and how we must adapt our risk assessment practices to keep pace with this acceleration.

Winning this race requires investment in data, analytics capabilities, and underwriting expertise. It demands closer collaboration between underwriting and claims to identify emerging AI-related loss patterns. It requires ongoing dialogue with insureds about their AI adoption and security practices.

The question is no longer whether AI affects cyber risk. It is whether our underwriting practices evolve fast enough to accurately capture and price that risk before it materializes in our portfolios. The attackers are not waiting. Neither should we.
