The $250K Ceiling: What LLMjacking Sublimits Mean for Cyber Brokers
QBE and Beazley just set a precedent with 5% AI sublimits. A $5M cyber policy now means max $250K for LLMjacking. Here's what brokers need to know — and do — before the next renewal.
When QBE and Beazley introduced AI-specific sublimits — typically set around 5% of the policy limit — it sent a clear signal to the market: AI risks are real, and insurers are moving to contain their exposure. But for brokers translating this to clients, the implications are sharper than a simple percentage. A $5 million cyber policy now carries a $250,000 ceiling on LLMjacking losses. Is that enough? For most mid-market clients, the honest answer is: probably not.
What LLMjacking Actually Is
Before diagnosing the coverage gap, brokers need to articulate the risk precisely. LLMjacking — sometimes called AI account hijacking — is the criminal takeover of corporate LLM accounts, typically via stolen credentials or compromised API keys. Once inside, attackers run up compute costs, deploy the stolen capacity for their own purposes (crypto mining, spam campaigns, powering other attacks), or resell access on darknet markets.
The Sophos threat intelligence team documented a real-world case: attackers hijacked API credentials for an LLM service and spun up adult-themed chatbots on a victim’s billing account. The compute charges accumulated rapidly. The victim faced a double loss — the direct cost of the stolen compute, and the regulatory and reputational exposure from the content being hosted under their infrastructure.
This isn’t theoretical. Fitch’s 2026 cyber risk outlook put it plainly: “Vulnerabilities in AI systems are likely to outnumber patches for the foreseeable future.” The attack surface is vast, and it is evolving faster than traditional controls can adapt.
The Broker’s Dilemma: Clients Think They’re Covered
Here’s where the conversation with clients gets uncomfortable. A manufacturing company in Bavaria with a $5M cyber policy has been told they have AI coverage. They may even have paid a premium loading for it. But that 10% sublimit means their LLMjacking exposure — which could easily reach €200,000–€400,000 in compute charges alone, before business interruption or breach response costs — is capped at $250,000.
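The shortfall can be put in numbers. Below is a minimal sketch of the gap calculation, using the $250,000 sublimit from the article; the exchange rate, breach-response, and business-interruption figures are illustrative assumptions, not claims data:

```python
# Illustrative gap between an LLMjacking loss scenario and a $250K AI sublimit.
# All loss component figures below are assumptions for illustration only.

SUBLIMIT_USD = 250_000  # AI sublimit on the $5M policy (per the article)

# Assumed loss components for a mid-market LLMjacking event (EUR, hypothetical)
loss_components_eur = {
    "stolen_compute": 300_000,        # mid-point of the EUR 200K-400K range
    "breach_response": 80_000,        # forensics, notification, legal (assumed)
    "business_interruption": 120_000, # assumed downtime while keys are rotated
}

EUR_USD = 1.08  # assumed exchange rate

total_loss_usd = sum(loss_components_eur.values()) * EUR_USD
uncovered = max(0.0, total_loss_usd - SUBLIMIT_USD)

print(f"Estimated loss:  ${total_loss_usd:,.0f}")   # $540,000
print(f"Sublimit payout: ${min(total_loss_usd, SUBLIMIT_USD):,.0f}")
print(f"Uninsured gap:   ${uncovered:,.0f}")        # $290,000
```

Under these assumptions, compute theft alone consumes most of the sublimit; the value of the exercise is that the uninsured gap becomes a concrete number to put in front of a client.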
The gap isn’t always visible in the policy wording. Sublimits can hide in the fine print, and clients rarely discover the shortfall until after an incident. By then, the relationship is damaged.
Some brokers are asking whether these sublimits expand beyond LLMjacking to broader AI threats — prompt injection attacks, model theft, AI-generated output errors causing bodily injury or financial loss. Insurers have not been eager to clarify. AIG’s recent filing to exclude AI losses entirely is the starkest end of the spectrum — but it’s a signal that the market is nowhere near consensus on how to define and bound AI exposure.
Why This Feels Familiar: A Historical Parallel
For brokers with longer tenure, the trajectory is recognizable. Cyber was routinely excluded from general liability policies in the early 2000s. Insurers resisted covering it; brokers had to place standalone cyber policies that took years to become standard. The market eventually matured, but only after significant confusion, coverage litigation, and client losses.
Fitch’s framing suggests AI is on the same arc. The vulnerabilities are outpacing the actuarial data. Insurers are reacting with sublimits and exclusions rather than pricing tools — a blunt but rational response to genuine uncertainty.
The brokers who will differentiate themselves in this market are the ones who understand the sublimit architecture, can identify which clients have material AI exposure, and proactively negotiate terms before the renewal conversation becomes a crisis.
Practical Guidance: Documenting AI Risk to Negotiate Sublimit Exceptions
The good news: sublimits aren’t set in stone. With structured documentation of a client’s AI risk profile, brokers can make the case for higher sublimits, reduced loadings, or explicit carve-outs for specific AI use cases. Here’s what that documentation should include:
1. Inventory of AI Assets
Map every LLM deployment — internal and third-party. Which vendors are in use? What data do they access? Are they API-only, browser-based, or integrated into core systems? You cannot negotiate effectively if you don’t know what you’re covering.
2. Governance Controls Evidence
Document AI-specific policies: is there an AI risk register? Are credentials rotated? Is multi-factor authentication enforced on LLM accounts? Are API keys stored in a secrets manager? Insurers pricing AI risk are looking for maturity signals — these controls translate directly into lower risk scores.
3. Access and Monitoring Posture
Can the client detect unusual LLM usage patterns? Is there logging on API calls? Are there alerts for spikes in compute consumption? Visibility matters as much as control — an insurer wants to know that if something goes wrong, it will be detected quickly rather than after months of accumulated charges.
4. Business Impact Analysis
What is the worst-case scenario for an LLMjacking event at this client? Compute theft is one thing. But if the compromised credentials give access to customer data, or the LLM integrates with operational systems, the loss scenario scales rapidly. Quantifying this — even roughly — gives the underwriter a reason to engage on sublimit terms.
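The monitoring posture in point 3 can be demonstrated with something as simple as a spend-spike check. The sketch below assumes the client can export daily LLM compute costs from their provider's billing console; the threshold logic and figures are illustrative, not a production detector:

```python
# Minimal spike detector for daily LLM compute spend (illustrative only).
# Flags any day whose spend exceeds a rolling baseline by a sigma multiplier.

from statistics import mean, stdev

def flag_spend_spikes(daily_spend, window=7, sigma=3.0):
    """Return indices of days whose spend exceeds mean + sigma*stdev of the
    preceding `window` days. daily_spend: list of floats (e.g. USD/day)."""
    alerts = []
    for i in range(window, len(daily_spend)):
        baseline = daily_spend[i - window:i]
        mu, sd = mean(baseline), stdev(baseline)
        threshold = mu + sigma * max(sd, 1.0)  # floor avoids zero-variance noise
        if daily_spend[i] > threshold:
            alerts.append(i)
    return alerts

# Hypothetical month: steady ~$40/day, then a hijack starting on day 20
spend = [40.0 + (i % 3) for i in range(20)] + [950.0, 2000.0]
print(flag_spend_spikes(spend))  # → [20, 21]
```

A detector this crude would still have caught the Sophos-documented case within a day rather than at month-end billing, which is precisely the maturity signal an underwriter wants to see evidenced.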
The Resiliently Approach: Five Pillars for Structured Assessment
Our underwriting framework evaluates AI risk across five dimensions: deployment context, governance maturity, technical controls, supply chain exposure, and business impact. This structured approach produces a risk score, not a checkbox — and that score is the basis for negotiating sublimit terms.
For brokers in the DACH market, this means you can walk into a renewal conversation with documentation that goes beyond “we use AI responsibly.” You can show a systematic assessment that maps to the insurer’s own risk categories. That’s the basis for a productive negotiation, not just a defensive explanation of why the sublimit exists.
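As an illustration only, a five-pillar score of the kind described above might be computed like this. The pillar names follow the framework, but the weights, rating scale, and score bands are invented for this sketch and are not Resiliently's actual model:

```python
# Hypothetical five-pillar AI risk score (weights and bands are illustrative).

PILLARS = {                        # weight per pillar, summing to 1.0
    "deployment_context": 0.20,
    "governance_maturity": 0.25,
    "technical_controls": 0.25,
    "supply_chain_exposure": 0.15,
    "business_impact": 0.15,
}

def ai_risk_score(ratings):
    """ratings: dict pillar -> 0..5 assessment. Returns a 0-100 score
    (higher = lower risk) and a coarse band for the renewal conversation."""
    score = sum(PILLARS[p] * ratings[p] / 5 * 100 for p in PILLARS)
    band = "strong" if score >= 75 else "adequate" if score >= 50 else "weak"
    return round(score, 1), band

ratings = {"deployment_context": 4, "governance_maturity": 3,
           "technical_controls": 4, "supply_chain_exposure": 2,
           "business_impact": 3}
print(ai_risk_score(ratings))  # → (66.0, 'adequate')
```

The point of a weighted score over a checkbox list is that it shows the underwriter where risk concentrates: in this hypothetical client, supply chain exposure drags the score down even though technical controls are solid.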
The DACH-Specific Context
German-speaking markets face particular pressures. The EU AI Act introduces regulatory risk that doesn’t exist in other jurisdictions — companies using AI in high-risk categories face mandatory conformity assessments, and insurers are watching how these obligations affect loss scenarios. NIS2 obligations add another layer: operators of essential services face specific cybersecurity duties that extend to AI supply chain risks.
Brokers who understand these regulatory linkages — and can present AI risk in that context — are better positioned to argue for appropriate sublimit terms. The conversation isn’t just about coverage; it’s about how the client’s AI governance reduces the insurer’s expected loss.
A Call to Action for Brokers
The $250,000 ceiling on a $5M policy is a starting point, not a final answer. But you can only negotiate up from a documented position. The brokers who act now — conducting structured AI risk assessments for clients before renewal, building the case for sublimit adjustments based on real evidence of governance maturity — will serve their clients better and build a competitive advantage as this market evolves.
LLMjacking is not a hypothetical. It’s a documented threat with real losses. And right now, the gap between what clients think they’re covered for and what they actually are is wide enough to matter.
Want to learn how to structure AI risk documentation for your next renewal? We’re working with select DACH brokers on AI risk assessment frameworks. Get in touch to discuss how we can support your next renewal conversation.