Why Your Cyber Risk Register Is Lying to You — And What to Do About It
Most cyber risk registers are compliance checklists with no connection to real threat data, real incidents, or real financial exposure. Here is how to build one that actually works for underwriting decisions.
Most organizations have a cyber risk register. Most of them are useless for underwriting.
The typical register is a spreadsheet with 50 rows. Each row has a risk name, a likelihood rating of “Medium,” an impact rating of “High,” and a color-coded cell that says — you guessed it — red. It gets updated once a year before the audit. Nobody uses it to make pricing decisions.
This is a problem. Because the risk register is supposed to be the single source of truth linking threat exposure to financial impact. When it fails at that, underwriters price in the dark, security teams can’t justify budgets, and executives sign off on risks they don’t understand.
Here’s what’s wrong with most risk registers — and how to build one that actually works.
The Three Fatal Flaws
Flaw 1: Subjective Scales Instead of Financial Quantification
The classic risk register uses a 1-to-5 likelihood scale and a 1-to-5 impact scale. Multiply them and you get a risk “score” between 1 and 25.
This approach has three problems:
- Non-linear reality mapped to linear scales. The difference between “1 event per year” and “5 events per year” is not the same as the difference between “5 events per year” and “25 events per year.” A 1-to-5 scale can’t capture this.
- Impact is qualitative, not financial. “High impact” means nothing to an underwriter. Is that €50K? €5M? €500M? Two organizations can rate the same risk as “High impact” while their actual exposure differs by orders of magnitude.
- Subjectivity dominates. When three assessors rate the same risk, you often get three different scores. Research from the FAIR Institute found that risk assessments using qualitative scales produced results that varied by up to 3,000% across assessors evaluating the same scenario (Source: FAIR Institute, Measuring and Managing Information Risk, 2014).
The FAIR model (Factor Analysis of Information Risk) replaces subjective scales with calibrated estimates:
- Loss Event Frequency (LEF): How often per year — expressed as a range (e.g., 0.05–0.50 events/year)
- Loss Magnitude: Financial impact per event — also a range (e.g., €100K–€2M)
- Monte Carlo simulation: Runs thousands of iterations to produce a probability distribution, not a point estimate
The output isn’t “High risk.” It’s: “Annual loss exposure from ransomware falls between €150K and €336K with 90% probability, giving a Value at Risk (VaR 95) of €336K.”
That’s a number an underwriter can use.
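To make that concrete, here is a minimal Monte Carlo sketch of the FAIR math, using the ransomware ranges above. The distribution choices (uniform draw over the LEF range, Poisson event counts, lognormal loss magnitude) are common modeling assumptions, not requirements of the FAIR standard:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000  # simulated years

# Calibrated inputs for the ransomware entry (illustrative):
# LEF range 0.05-0.50 events/year, loss range EUR 100K-2M per event.
lef_low, lef_high = 0.05, 0.50
loss_low, loss_high = 100_000, 2_000_000

# Interpret the loss range as a 90% interval of a lognormal distribution.
mu = (np.log(loss_low) + np.log(loss_high)) / 2
sigma = (np.log(loss_high) - np.log(loss_low)) / (2 * 1.645)

annual_losses = np.zeros(N)
for i in range(N):
    lef = rng.uniform(lef_low, lef_high)   # frequency drawn from the range
    n_events = rng.poisson(lef)            # number of loss events this year
    if n_events:
        annual_losses[i] = rng.lognormal(mu, sigma, n_events).sum()

print(f"Mean annual loss: €{annual_losses.mean():,.0f}")
print(f"VaR 95:           €{np.percentile(annual_losses, 95):,.0f}")
```

The simulation yields the full loss distribution, so you can read off any percentile an underwriter asks for, not just the two numbers printed here.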
Flaw 2: Static Lists Instead of Threat-Enriched Data
Most risk registers are frozen in time. They capture what the risk landscape looked like in January — not what it looks like in June, when a new ransomware group starts targeting your sector, or when a critical vulnerability drops in a library your organization uses.
A risk register that isn’t enriched with real threat data is a fiction.
What threat enrichment looks like in practice:
| Register Element | Without Threat Enrichment | With Threat Enrichment |
|---|---|---|
| Ransomware LEF | 0.05–0.50 (based on industry averages) | Updated monthly with sector-specific attack data |
| Supply chain risk | Generic “Medium” rating | Specific to vendors with observed credential leaks |
| AI/LLM risks | Not in the register (too new) | Modeled based on LLMjacking campaign data |
| Regulatory exposures | Static compliance status | Updated against active enforcement actions |
The difference is significant. A ransomware underwriting model that uses real-time threat intelligence produces different pricing than one relying on last year’s averages. The same applies to risk registers.
Flaw 3: No Incident Feedback Loop
A risk register should improve over time. When an incident occurs — at your organization or in your sector — the register should update its loss estimates. Most don’t.
Consider an organization that suffers a ransomware incident. Typical response:
- Incident gets handled by IT/security
- Post-incident review produces lessons learned
- Those lessons… get filed in a report that nobody reads
- The risk register stays unchanged
The missing step is obvious: the register should update its LEF and loss magnitude estimates based on the incident. If your ransomware risk had a LEF of 0.10–0.50 and you just had an event, that range needs recalibration. If your loss magnitude was estimated at €100K–€2M and the actual loss was €1.8M, the upper bound of your next estimate should reflect that.
Without this feedback loop, the register accumulates error over time. The longer it runs without incident-informed updates, the less accurate it becomes — and the more dangerous it is to rely on for underwriting.
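One way to make that recalibration rigorous is a Bayesian update. The sketch below uses a Gamma-Poisson conjugate model; the prior parameters are illustrative, and this is one defensible approach rather than the only one:

```python
from scipy import stats

# Prior belief about LEF as a Gamma distribution. alpha and beta are
# illustrative, chosen so the prior mean (alpha/beta = 0.25 events/yr)
# sits mid-range of the register's 0.10-0.50 estimate.
alpha_prior, beta_prior = 2.5, 10.0

# Observed: one ransomware event over one year of exposure.
events_observed, years_observed = 1, 1.0

# Gamma is conjugate to the Poisson rate: posterior is
# Gamma(alpha + events, beta + exposure_years).
alpha_post = alpha_prior + events_observed
beta_post = beta_prior + years_observed

posterior = stats.gamma(alpha_post, scale=1 / beta_post)
print(f"Posterior mean LEF: {posterior.mean():.2f} events/yr")
print(f"Updated 90% range:  {posterior.ppf(0.05):.2f}-{posterior.ppf(0.95):.2f}")
```

The posterior mean shifts upward after the observed event, and the updated range replaces the stale one in the register instead of lingering in a post-incident report.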
Building a Risk Register That Works
Here’s the architecture for a risk register that produces underwriting-grade output.
Step 1: Use FAIR Quantification
Replace qualitative scales with calibrated ranges. Every risk entry should have:
- Loss Event Frequency (LEF): per year, expressed as a range with a confidence interval
- Loss Magnitude: per event, in currency, expressed as a range
- Primary loss components: productivity, response, replacement, competitive advantage, fines/judgments
- Control strength assessment: quantified as a percentage reduction to LEF or loss magnitude
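As a data structure, such an entry might look like the sketch below; field names and types are illustrative, not a prescribed schema:

```python
from dataclasses import dataclass

@dataclass
class RegisterEntry:
    """One FAIR-quantified risk register row (illustrative shape)."""
    risk: str
    lef_range: tuple[float, float]     # events/year, calibrated range
    loss_range: tuple[float, float]    # EUR per event
    loss_components: dict[str, float]  # productivity, response, fines, ...
    control_strength: float            # 0.0-1.0 reduction applied in simulation
```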
Example from the Resiliently risk register:
| Risk | LEF/yr | Loss Range | VaR 95 | Controls |
|---|---|---|---|---|
| Ransomware — Primary DB Server | 0.05–0.50 | €100K–€2M | €336K | Partial (~30%) |
| Shadow AI — Unapproved SaaS LLM | 0.15–0.70 | €50K–€1.5M | €550K | None (0%) |
| Insider Data Exfiltration | 0.01–0.15 | €500K–€8M | €724K | None (0%) |
Notice what this tells you that a “red/yellow/green” register doesn’t:
- The highest VaR 95 risk isn’t ransomware — it’s insider data exfiltration
- Shadow AI has the highest LEF and zero controls — this is the most likely unmitigated event
- Ransomware’s VaR is lower than expected because partial controls reduce both LEF and magnitude
These are underwriting signals, not audit artifacts.
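Control strength can feed the math directly. A minimal sketch, assuming a crude multiplicative model in which a control removes a fixed fraction of both LEF and loss magnitude (full FAIR implementations decompose this further, for example into resistance strength versus threat capability):

```python
def apply_control(lef_range, loss_range, control_strength):
    """Reduce LEF and loss magnitude by a quantified control strength.

    control_strength: fraction (0.0-1.0) of risk the control removes.
    A simple multiplicative model -- illustrative, not the FAIR standard.
    """
    factor = 1.0 - control_strength
    return (
        tuple(x * factor for x in lef_range),
        tuple(x * factor for x in loss_range),
    )

# Ransomware entry from the table: partial controls (~30%)
lef, loss = apply_control((0.05, 0.50), (100_000, 2_000_000), 0.30)
print(lef, loss)  # (0.035, 0.35) (70000.0, 1400000.0)
```

The adjusted ranges then feed the Monte Carlo simulation, which is why the ransomware VaR in the table lands lower than its raw loss range suggests.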
Step 2: Enrich with Real Incident and Threat Data
Every quarter — or more frequently for high-velocity threat categories — update the register with:
- Sector-specific breach data from threat intelligence providers (CrowdStrike, Mandiant, IBM X-Force)
- Regulatory enforcement actions — fines, penalties, supervisory requirements (NIS2, DORA, GDPR)
- Ransomware payment trends — average and median payments by sector and revenue band
- Cloud and SaaS outage data — frequency and duration by provider
- Dark web credential exposure — employee and vendor credential leaks
This enrichment shifts the register from “here’s what we think might happen” to “here’s what’s happening in our sector, adjusted for our specific profile.”
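In code, an enrichment pass could be as simple as scaling the register’s LEF range by the ratio of the observed sector attack rate to the industry baseline. The feed fields and adjustment rule below are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class RiskEntry:
    name: str
    lef_low: float   # events/year, lower bound
    lef_high: float  # events/year, upper bound

def enrich(entry: RiskEntry, sector_rate: float, baseline_rate: float) -> RiskEntry:
    """Scale the register's LEF range by the ratio of the sector-specific
    attack rate to the industry baseline. Both rates would come from a
    threat-intel feed (hypothetical here)."""
    ratio = sector_rate / baseline_rate
    return RiskEntry(entry.name, entry.lef_low * ratio, entry.lef_high * ratio)

ransomware = RiskEntry("Ransomware - Primary DB Server", 0.05, 0.50)
# Feed says our sector saw 1.4x the baseline ransomware rate this quarter.
updated = enrich(ransomware, sector_rate=1.4, baseline_rate=1.0)
print(updated)  # LEF range scales to ~0.07-0.70
```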
Step 3: Close the Incident Feedback Loop
After every incident — yours or a comparable public event:
- Calibrate LEF: Did this event type occur within your estimated range? If it exceeded the range, widen the upper bound.
- Calibrate loss magnitude: How does the actual loss compare to your estimated range?
- Reassess controls: Did existing controls perform as modeled? If a control was rated “Substantial (~60%)” but failed during the event, downgrade it.
- Document the delta: Record the difference between estimated and actual. This builds a calibration history that improves future estimates.
Over time, this produces a register that converges toward accuracy — rather than one that drifts toward irrelevance.
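Documenting the delta works best as a structured record rather than prose buried in a report. One possible shape, with illustrative fields and values:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class CalibrationRecord:
    """One incident's estimate-vs-actual delta, kept as calibration history."""
    risk_name: str
    incident_date: date
    estimated_loss_range: tuple[float, float]  # EUR, pre-incident register
    actual_loss: float                         # EUR, from post-incident review
    control_performed_as_modeled: bool
    notes: str

record = CalibrationRecord(
    risk_name="Ransomware - Primary DB Server",
    incident_date=date(2025, 3, 14),           # illustrative
    estimated_loss_range=(100_000, 2_000_000),
    actual_loss=1_800_000,
    control_performed_as_modeled=False,
    notes="Control underperformed; actual loss near upper bound of estimate.",
)
```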
Step 4: Model Interdependencies
The biggest gap in traditional risk registers is that they treat each risk as independent. Risks rarely are.
A supply chain compromise can trigger a ransomware deployment. A cloud outage can cascade into a data breach if backup systems are in the same availability zone. An AI hallucination in a compliance report can trigger a regulatory investigation.
The risk register should model:
- Risk chains: Event A → Event B → Loss C
- Concentration risk: Multiple risks concentrated in the same asset, vendor, or technology
- Common cause failures: Single points of failure that affect multiple risks simultaneously
Monte Carlo simulation handles this naturally — when you model correlated risks, the simulation reveals concentration effects that a static spreadsheet hides.
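A minimal sketch of one risk chain inside a Monte Carlo loop: a supply chain compromise raises the conditional probability of a ransomware deployment. All probabilities here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(7)
N = 100_000

# Illustrative per-year event probabilities.
p_supply_chain = 0.10
p_ransomware_base = 0.05
p_ransomware_given_supply = 0.40  # conditional uplift from the chain

supply = rng.random(N) < p_supply_chain
# Ransomware probability depends on whether the upstream event fired.
p_ransom = np.where(supply, p_ransomware_given_supply, p_ransomware_base)
ransom = rng.random(N) < p_ransom

print(f"P(ransomware), independent model: {p_ransomware_base:.3f}")
print(f"P(ransomware), chained model:     {ransom.mean():.3f}")
# Tail losses concentrate in years where both events fire --
# a concentration effect a static spreadsheet cannot show.
```

Even this toy chain nearly doubles the effective ransomware probability, which is exactly the kind of signal an independent-rows spreadsheet suppresses.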
Compliance Meets Accuracy
There’s a regulatory angle too. NIS2 Article 21 requires “appropriate and proportionate technical, operational and organisational measures.” Article 20 imposes personal liability on management for failure to manage cyber risk.
A risk register that uses subjective 1-5 scales and hasn’t been updated since the last audit cycle doesn’t demonstrate proportionate risk management. A FAIR-quantified, threat-enriched register with documented calibration history? That’s defensible evidence.
The same applies to DORA’s ICT risk management framework, which requires financial institutions to “identify, classify, and properly document” all ICT-related risks — with specific reference to quantified risk assessment.
What Changes for Underwriters
When an underwriter receives a submission supported by a FAIR-quantified, threat-enriched risk register:
- Pricing moves from bands to distributions. Instead of “this looks like a medium-risk client in the manufacturing sector,” you get “the VaR 95 for this client’s top 5 risks is €2.1M, with the highest concentration in supply chain dependency.”
- Terms can be risk-specific. Instead of a blanket deductible, you can structure sublimits around the highest-LEF unmitigated risks.
- Renewal conversations have data. “Your risk register shows zero controls on shadow AI and a LEF of 0.70. Here’s how that’s affecting your premium — and here’s what changes would reduce it.”
This is the conversation risk registers were always supposed to enable. Most just don’t.
The Resiliently Approach
Our risk register is built on these principles:
- FAIR methodology — every risk is loss event frequency × loss magnitude, not a color code
- Monte Carlo simulation — producing probability distributions and Value at Risk figures
- Incident-enriched inputs — threat data feeds updating loss estimates, not static questionnaires
- Control strength modeling — controls reduce estimated risk quantitatively, not by changing a cell color
- Transparency — every input assumption is visible, adjustable, and auditable
The demo shows 16 risks across AI/LLM and traditional cyber categories. Run it. See what a risk register looks like when it’s built for decisions, not audits.
Michael Guiao is the Founder of Resiliently.ai and the author of Resiliently. He holds CISM, CCSP, CISA, and DPO (TÜV) certifications and has 8+ years of experience across insurance, auditing, and consulting at firms including AXA, Xella Group, and PwC.