Section 06

Risk Quantification & FAIR

Risk matrices and 5×5 heat maps create an illusion of precision. This section introduces quantitative risk analysis using FAIR (Factor Analysis of Information Risk), Monte Carlo simulation, and practical techniques for expressing cyber risk in financial terms that executives and boards can act on.

Why Heat Maps Fail

A "High × High = Critical" rating tells you nothing about whether to spend $50K or $5M on mitigation. Ordinal scales cannot be meaningfully combined: multiplying "High (4) × Likely (4) = 16" is arithmetic on rank labels, not on measurements, so the result is mathematically invalid. Two risks rated "Critical" may differ by orders of magnitude in actual impact. Qualitative risk matrices are useful for initial triage only, not for treatment decisions.
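To make the "orders of magnitude" point concrete, here is a small illustration with invented numbers (not benchmarks): two risks that land in the same "Critical" cell of a heat map but carry very different expected annual losses.

```python
# Two risks both rated "High x High = Critical" on a 5x5 matrix,
# yet their expected annual losses differ by almost two orders of
# magnitude. All figures are illustrative only.
risk_a = {"events_per_year": 2.0, "loss_per_event": 40_000}      # frequent, cheap events
risk_b = {"events_per_year": 0.5, "loss_per_event": 12_000_000}  # rare, catastrophic event

ale_a = risk_a["events_per_year"] * risk_a["loss_per_event"]  # $80,000/year
ale_b = risk_b["events_per_year"] * risk_b["loss_per_event"]  # $6,000,000/year
print(f"Risk A expected annual loss: ${ale_a:,.0f}")
print(f"Risk B expected annual loss: ${ale_b:,.0f}")
```

Same heat-map cell, a 75× difference in expected loss: the matrix alone cannot tell you which one justifies a $5M control.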

FAIR Taxonomy

FAIR decomposes risk into measurable factors. Every factor is estimated as a range (minimum, most likely, maximum) rather than a single point value.

FAIR Risk Decomposition

flowchart TD
    RISK["Risk\n(Annual Loss Exposure)"] --> LEF["Loss Event\nFrequency (LEF)"]
    RISK --> LM["Loss\nMagnitude (LM)"]
    LEF --> TEF["Threat Event\nFrequency (TEF)"]
    LEF --> V["Vulnerability\n(Prob. of Success)"]
    TEF --> CF["Contact\nFrequency"]
    TEF --> PA["Probability\nof Action"]
    LM --> PL["Primary\nLoss"]
    LM --> SL["Secondary\nLoss"]
    PL --> PI["Productivity\nImpact"]
    PL --> RR["Response\n& Recovery"]
    PL --> RA["Replacement\nAsset Value"]
    SL --> SLF["Secondary Loss\nEvent Frequency"]
    SL --> SLM["Secondary Loss\nMagnitude"]
    SLM --> FR["Fines &\nRegulatory"]
    SLM --> CR["Competitive\nReputation"]
    style RISK fill:#ff8800,stroke:#000,color:#000
    style LEF fill:#22d3ee,stroke:#000,color:#000
    style LM fill:#a855f7,stroke:#000,color:#000
    style TEF fill:#22d3ee,stroke:#000,color:#000
    style V fill:#22d3ee,stroke:#000,color:#000
    style PL fill:#a855f7,stroke:#000,color:#000
    style SL fill:#ec4899,stroke:#000,color:#000

FAIR Factors Explained

  • TEF (Threat Event Frequency): how often a threat agent acts against an asset, in events/year. Estimated from CTI feeds, incident history, and industry benchmarks.
  • Vulnerability (V): the probability that a threat event results in loss, 0–100%. Estimated from control effectiveness assessments and pentest results.
  • LEF (Loss Event Frequency) = TEF × V: how many loss events occur per year, in events/year. Calculated from TEF and V.
  • Primary Loss: direct loss from the event (response cost, downtime, asset value), in currency. Estimated from BIA data, incident response costs, and asset valuation.
  • Secondary Loss: indirect loss (fines, reputation, competitive damage), in currency. Estimated from regulatory penalty tables and customer churn models.
  • ALE (Annual Loss Expectancy) = LEF × LM, in currency/year. Computed via Monte Carlo simulation over the factor ranges.
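Chaining the two formulas with single most-likely values gives a quick point estimate (values mirror the payment scenario below; the loss magnitude here uses only the most-likely primary loss, as a deliberate simplification). This is the calculation that ranges and Monte Carlo later refine:

```python
# Point-estimate walkthrough of the FAIR chain. Illustrative values
# taken from the most-likely inputs of the payment scenario; secondary
# loss is ignored here for simplicity.
tef = 15        # threat events/year
vuln = 0.08     # probability a threat event becomes a loss event
lm = 2_500_000  # most-likely primary loss per event (USD)

lef = tef * vuln  # Loss Event Frequency, approximately 1.2 events/year
ale = lef * lm    # Annual Loss Expectancy, approximately $3,000,000/year
print(f"LEF = {lef:.1f} events/year, ALE = ${ale:,.0f}")
```

A single number like this hides all the uncertainty in the inputs, which is exactly why FAIR estimates every factor as a range.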

Practical FAIR Scenario

Worked Example: Payment Processing Breach

Scenario: Assess the annual cyber risk of a cardholder data breach on a payment platform processing $50M/month with 2M stored card records.

Step 1: Estimate Threat Event Frequency (TEF)

Based on Verizon DBIR data for payment processors:

  • Minimum: 5 targeted attempts/year (web app + API attacks)
  • Most Likely: 15 targeted attempts/year
  • Maximum: 50 targeted attempts/year (during peak periods)

Step 2: Estimate Vulnerability (Control Effectiveness)

Based on current control posture (WAF, encryption, MFA, segmentation):

  • Minimum: 2% (controls mostly effective)
  • Most Likely: 8% (occasional control gaps)
  • Maximum: 20% (sophisticated attacker, zero-day)

Step 3: Estimate Loss Magnitude

Primary Loss:
  • Incident response: $200K–$800K
  • Forensics & remediation: $150K–$500K
  • Business interruption: $500K–$2M
  • Card replacement: $5–$25 × records affected
Secondary Loss:
  • PCI DSS fines: $100K–$500K/month
  • Regulatory penalties: $200K–$5M
  • Customer notification: $1–$3/record
  • Customer churn: 3–7% × $600M annual rev
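These line items roll up into the (min, most likely, max) ranges the simulation consumes. A minimal sketch for the primary-loss range, assuming hypothetical best/worst cases of 0 and 200K card records affected (the record counts are illustration values, not part of the scenario):

```python
# Roll primary-loss line items up into a (min, max) range in USD.
# The records_affected bounds are hypothetical illustration values.
line_items = {
    "incident_response":     (200_000, 800_000),
    "forensics_remediation": (150_000, 500_000),
    "business_interruption": (500_000, 2_000_000),
}
card_replacement_cost = (5, 25)      # USD per replaced card
records_affected = (0, 200_000)      # hypothetical best/worst case

lo = (sum(v[0] for v in line_items.values())
      + card_replacement_cost[0] * records_affected[0])
hi = (sum(v[1] for v in line_items.values())
      + card_replacement_cost[1] * records_affected[1])
print(f"Primary loss range: ${lo:,} to ${hi:,}")
```

This lands near the $850K–$8M primary-loss range used in the simulation inputs below; documenting each roll-up this way keeps the estimates defensible and easy to revisit.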

Monte Carlo Simulation

Instead of multiplying single-point estimates, Monte Carlo simulation runs thousands of iterations using random samples from probability distributions. This produces a risk range with confidence intervals — far more useful for decision-making.

fair_monte_carlo.py
"""
FAIR Monte Carlo Risk Simulation
Produces annual loss expectancy with confidence intervals.
"""
import random
from dataclasses import dataclass


@dataclass
class FairInputs:
    """PERT distribution parameters (min, most_likely, max)."""
    tef: tuple  # Threat Event Frequency
    vuln: tuple  # Vulnerability (0-1 range)
    primary_loss: tuple  # Primary Loss Magnitude
    secondary_loss: tuple  # Secondary Loss Magnitude
    secondary_loss_event_freq: tuple  # Probability secondary loss occurs


def pert_sample(low: float, mode: float, high: float, lamb: float = 4) -> float:
    """Sample from a modified PERT distribution."""
    if high == low:
        return mode
    alpha = 1 + lamb * (mode - low) / (high - low)
    beta = 1 + lamb * (high - mode) / (high - low)
    sample = random.betavariate(alpha, beta)
    return low + sample * (high - low)


def simulate_fair(inputs: FairInputs, iterations: int = 50_000) -> dict:
    """Run Monte Carlo simulation for FAIR analysis."""
    annual_losses = []

    for _ in range(iterations):
        # Sample from distributions
        tef = pert_sample(*inputs.tef)
        vuln = pert_sample(*inputs.vuln)
        primary = pert_sample(*inputs.primary_loss)
        secondary = pert_sample(*inputs.secondary_loss)
        sec_freq = pert_sample(*inputs.secondary_loss_event_freq)

        # Calculate Loss Event Frequency
        lef = tef * vuln

        # Calculate Loss Magnitude per event
        # Secondary loss occurs with probability sec_freq
        if random.random() < sec_freq:
            loss_per_event = primary + secondary
        else:
            loss_per_event = primary

        # Annual Loss = LEF * Loss per event
        annual_loss = lef * loss_per_event
        annual_losses.append(annual_loss)

    annual_losses.sort()
    n = len(annual_losses)

    return {
        "mean": sum(annual_losses) / n,
        "median": annual_losses[n // 2],
        "p10": annual_losses[int(n * 0.10)],
        "p90": annual_losses[int(n * 0.90)],
        "p95": annual_losses[int(n * 0.95)],
        "max_observed": annual_losses[-1],
    }


# Payment processing breach scenario
payment_scenario = FairInputs(
    tef=(5, 15, 50),                      # Events/year
    vuln=(0.02, 0.08, 0.20),              # Prob. of success
    primary_loss=(850_000, 2_500_000, 8_000_000),  # USD per event
    secondary_loss=(1_500_000, 8_000_000, 25_000_000),
    secondary_loss_event_freq=(0.3, 0.6, 0.9),
)

results = simulate_fair(payment_scenario)
print("=== FAIR Monte Carlo Results (Payment Breach) ===")
print(f"  Mean ALE:     ${results['mean']:>14,.0f}")
print(f"  Median ALE:   ${results['median']:>14,.0f}")
print(f"  10th %-ile:   ${results['p10']:>14,.0f}")
print(f"  90th %-ile:   ${results['p90']:>14,.0f}")
print(f"  95th %-ile:   ${results['p95']:>14,.0f}")
print(f"  Max Observed: ${results['max_observed']:>14,.0f}")
print()
print("Interpretation: There is an 80% chance annual losses")
print(f"fall between ${results['p10']:,.0f} and ${results['p90']:,.0f}.")

Quantitative vs Semi-Quantitative vs Qualitative

  • Qualitative: outputs Low/Med/High ratings. Pros: fast, intuitive, low data requirements. Cons: risks can't be compared and spend can't be justified. Use for initial triage and small scopes.
  • Semi-Quantitative: outputs scored scales (1–25). Pros: structured, repeatable, sortable. Cons: false precision and ordinal-scale math errors. Use at medium maturity for repeatable TRAs.
  • Quantitative (FAIR): outputs dollar ranges with confidence intervals. Pros: defensible, comparable, ROI-ready. Cons: requires data calibration and more effort. Use for high-value assets and board reporting.
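"ROI-ready" is the practical payoff: once risk is expressed in dollars, a control's value is its ALE reduction versus its cost. A back-of-envelope sketch using mean point estimates from the payment scenario; the post-mitigation vulnerability and the control cost are hypothetical assumptions, and a real analysis would rerun the full Monte Carlo simulation with the tightened ranges:

```python
# Back-of-envelope control ROI using mean point estimates rather than
# the full simulation. Values mirror the payment scenario; vuln_after
# and control_cost are hypothetical assumptions for illustration.
tef = 15                                       # most-likely threat events/year
loss_per_event = 2_500_000 + 0.6 * 8_000_000   # primary + expected secondary loss

vuln_before = 0.08
vuln_after = 0.03        # assumed effect of added segmentation/tokenization
control_cost = 500_000   # hypothetical annualized cost of the control

ale_before = tef * vuln_before * loss_per_event
ale_after = tef * vuln_after * loss_per_event
reduction = ale_before - ale_after
print(f"ALE before: ${ale_before:,.0f}, after: ${ale_after:,.0f}")
print(f"Risk reduction ${reduction:,.0f} vs control cost ${control_cost:,} "
      f"-> {reduction / control_cost:.1f}x return")
```

Framing the decision as "risk reduced per dollar spent" is what lets a board compare a $500K control against a $5M one on equal terms.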

FAIR Tools

FAIR-U

Free online FAIR calculator from the FAIR Institute. Good for single-scenario analysis and learning.

RiskLens

Enterprise FAIR platform with data calibration, benchmarking, and portfolio-level risk analysis. Industry data built in.

OpenFAIR

Open Group standard (O-RA, O-RT). Provides the formal taxonomy and risk analysis methodology specification.

Section Summary

Key Takeaways

  • Heat maps (5×5 matrices) create false precision; don't multiply ordinal scales
  • FAIR decomposes risk into measurable factors: TEF × V = LEF; LEF × LM = ALE
  • Use ranges (min, most likely, max), not point estimates
  • Monte Carlo simulation produces confidence intervals for decision-making
  • Express risk in financial terms for executive communication

Next Steps