When to Use This Methodology (and When Not To)
Use compound cascade modelling when:
- Multiple risk factors are active simultaneously and at least some of them interact through identifiable causal mechanisms
- Institutional analysis exists but is siloed — different agencies model different aspects of the same system independently
- Feedback loops are plausible — deterioration in one area could worsen another, which worsens the first
- Historical precedent shows that additive assessment underestimated outcomes in comparable situations
- The system has weak circuit-breakers — mechanisms that should contain cascading failure are themselves degraded or absent
Do not use compound cascade modelling when:
- Risks are genuinely independent — an additive model is appropriate and simpler
- The system has strong, tested circuit-breakers — well-capitalised insurance, automatic stabilisers, redundant systems tested under stress
- Data quality is insufficient to identify causal mechanisms — the methodology requires mechanistic clarity, not just correlation
- A single dominant variable overwhelms all others — single-variable model with sensitivity analysis is more appropriate
- You are modelling a short-duration event (hours) — event-tree or fault-tree analysis is more appropriate
The methodology gap: the central finding
In both applications to date, the compound model produced materially higher risk estimates than the sum of individual chain assessments. The Hormuz famine model produced a probability-weighted central estimate of 118–225M excess deaths, against institutional projections of 30–50M at risk. The UK structural decline model assesses 50–70% probability of Accelerated Decline by 2035, against 10–20% under additive assessment. The consistency of this 3–5x divergence across two very different domains — a global food system and a single nation-state — suggests it is a structural property of how interactive systems behave.
1. The Core Principle
Institutional risk analysis is typically linear and additive: identify individual risk factors, quantify each one, add the results. This systematically underestimates outcomes in complex systems because it misses interaction effects — where one risk factor triggers, amplifies, or accelerates others.
Compound cascade modelling captures these interactions. The output is not a single number but a scenario-weighted probability distribution with explicit uncertainty ranges, sensitivity analysis, and historical calibration.
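The "scenario-weighted probability distribution" output can be sketched in a few lines. A minimal illustration in Python — scenario names, probabilities, and outcome figures here are entirely hypothetical, not taken from either model:

```python
# Illustrative sketch: a probability-weighted central estimate computed from
# a set of scenarios. All names and numbers are hypothetical placeholders.

scenarios = [
    # (name, probability, low outcome, high outcome) -- arbitrary units
    ("Contained",  0.20,  10,  20),
    ("Baseline",   0.45,  40,  70),
    ("Compound",   0.30,  90, 160),
    ("Worst case", 0.05, 180, 250),
]

total_p = sum(p for _, p, _, _ in scenarios)
assert abs(total_p - 1.0) < 0.05, "scenario probabilities should sum to ~100%"

# Probability-weighted central estimate, reported as a range (low, high)
low = sum(p * lo for _, p, lo, _ in scenarios)
high = sum(p * hi for _, p, _, hi in scenarios)
print(f"Expected value range: {low:.0f}-{high:.0f}")  # prints 56-96 here
```

The output is deliberately a range, not a point value: the uncertainty inside each scenario survives into the headline figure.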
Why institutions fail to model interactions
The institutional silo problem is structural, not accidental. Institutions are mandated to model specific domains — fiscal policy (OBR), healthcare (NHS England), demographics (ONS), food security (FAO/WFP). No institution is mandated to model the interactions between these domains. The gap between siloed assessment and compound interaction modelling is not a limitation of any individual institution — it is a structural feature of how institutional analysis is organised.
The compound cascade hypothesis
In systems where multiple structural risk factors operate simultaneously and interact through identifiable causal mechanisms, the probability-weighted outcome will be materially worse than the sum of individual risk assessments, because:
- Interactions amplify individual chains — a chain manageable in isolation becomes critical when reinforced
- Feedback loops create self-sustaining deterioration — once activated, they worsen without external intervention
- Containment mechanisms are shared — the same fiscal capacity, institutional bandwidth, and political attention are needed simultaneously across multiple chains
- Temporal coupling creates simultaneity — chains that might be individually manageable if sequential become unmanageable when they coincide
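The first of these points can be made concrete with a toy calculation. Everything below — chain severities, interaction weights, the amplification coefficient, and the compounding rule itself — is a hypothetical sketch; the point is structural, not numerical:

```python
# Toy illustration of the compound cascade hypothesis: any non-zero
# interaction pushes the compound total above the additive total.

severity = {"A": 3, "B": 2, "C": 2}      # hypothetical standalone chain scores
# interaction[x][y]: how strongly chain x amplifies chain y (0-3 scale)
interaction = {
    "A": {"B": 2, "C": 1},
    "B": {"A": 1, "C": 2},
    "C": {"A": 0, "B": 1},
}
AMPLIFICATION = 0.25                     # hypothetical coupling coefficient

additive = sum(severity.values())

# One simple compounding rule: each chain's effective severity is its
# standalone score plus a fraction of the amplification it receives.
compound = sum(
    s + AMPLIFICATION * sum(interaction[src].get(dst, 0) * severity[src]
                            for src in severity if src != dst)
    for dst, s in severity.items()
)

print(additive, compound)                # compound exceeds additive whenever
                                         # any interaction weight is non-zero
```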
2. Domain Adaptation: External Shock vs. Endogenous Decline
The methodology has been applied to two fundamentally different types of system, and the adaptation required is instructive.
Type 1 · External Shock
Hormuz model
- Trigger: A specific event (Strait of Hormuz blockade, Feb 28, 2026)
- Direction: Trigger → cascading consequences through pre-existing vulnerabilities
- Time horizon: Months to 5 years
- Counterfactual: Clear — "what if it hadn't happened?"
- Challenge: Modelling propagation speed and reach
Type 2 · Endogenous Decline
UK model
- Trigger: No single trigger — accumulating structural weaknesses
- Direction: Multiple simultaneous deteriorations interact and compound
- Time horizon: 5–10 years; roots extend decades back
- Counterfactual: Diffuse — "what if interactions were modelled?"
- Challenge: Distinguishing correlation from causal interaction
3. The Nine-Step Process
Summarised here. The full step-by-step detail with worked examples is in the downloadable framework document above.
1. Define the System Boundary. Establish geographic and temporal scope, the outcome metric, what is endogenous vs. exogenous, and the time horizon (which determines which chains matter).
2. Identify Causal Chains. Map every mechanism through which the system produces the outcome. Each chain must be individually sourced, mechanistically clear, quantifiable, and historically observable. Aim for 7–20 chains.
3. Map Chain Interactions. Build an N×N interaction matrix. Score each cell as Strong (3), Moderate (2), Weak (1), or None (0). Compute matrix diagnostics (interaction density, connectivity per chain, clusters).
4. Identify and Formalise Feedback Loops. Find cycles where Chain A worsens B, which worsens C, which worsens A. Classify each as Latent, Active, or Self-sustaining. Identify the weakest link for loop-breaking analysis.
5. Identify Meta-Chains and Temporal Dynamics. A meta-chain is a chain whose dysfunction propagates across all other domains. Classify chains by temporal class: acute, fast-moving, structural, generational.
6. Build Scenarios. Construct 4–6 scenarios, each defined by explicit, falsifiable assumptions, a probability range, and an outcome range. Probabilities sum to ~100%. Include at least one positive pathway.
7. Sensitivity Analysis. Test each major variable independently. Then test whether the compound finding survives when external shocks or individual chains are removed. If it persists, the finding is structurally robust.
8. Historical Calibration. Identify 5–10 comparable historical events. Document contemporary projection vs. actual outcome. The systematic finding: institutional assessment underestimated in every comparable case, because compound interactions were not modelled.
9. Impact Conversion Methodology. Make the conversion from structural risk to human outcome metrics fully transparent: by region/segment, using established metrics, calibrated against historical rates, with direct impact separated from compound effects.
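The matrix diagnostics and feedback-loop steps lend themselves to a short computational sketch. The 4-chain matrix below is hypothetical (chain names and scores invented for illustration), using the Strong/Moderate/Weak/None 0–3 scale:

```python
# Sketch of matrix diagnostics (density, connectivity) and feedback loop
# detection on a hypothetical 4-chain interaction matrix.
# matrix[i][j] = strength of chain i's effect on chain j (0-3).

from itertools import permutations

chains = ["Fiscal", "Health", "Workforce", "Political"]
matrix = [
    [0, 2, 0, 1],
    [0, 0, 3, 0],
    [2, 0, 0, 0],
    [3, 2, 0, 0],
]
n = len(chains)

# Interaction density: share of off-diagonal cells that are non-zero
nonzero = sum(1 for i in range(n) for j in range(n) if i != j and matrix[i][j] > 0)
density = nonzero / (n * (n - 1))

# Connectivity per chain: (outgoing, incoming) non-zero links
connectivity = {
    chains[i]: (sum(1 for j in range(n) if j != i and matrix[i][j] > 0),
                sum(1 for j in range(n) if j != i and matrix[j][i] > 0))
    for i in range(n)
}

# Feedback loops: directed cycles of length 2 or 3 with every link non-zero
loops = []
for size in (2, 3):
    for cycle in permutations(range(n), size):
        if cycle[0] != min(cycle):
            continue  # count each cycle once, regardless of starting point
        if all(matrix[cycle[k]][cycle[(k + 1) % size]] > 0 for k in range(size)):
            loops.append(tuple(chains[k] for k in cycle))

print(f"density={density:.0%}, connectivity={connectivity}, loops={loops}")
```

Detected loops are then classified by hand as latent, active, or self-sustaining — that judgement cannot be read off the matrix alone.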
4. Meta-Chains: When Dysfunction Propagates
Not every model contains a meta-chain. Meta-chains are most relevant in endogenous decline models where a coordinating mechanism has itself become a source of systemic failure.
A chain qualifies as a meta-chain if it meets all three criteria:
- Highest combined connectivity — highest combined outgoing + incoming interaction count in the matrix
- Propagation function — its dysfunction does not just add another problem; it prevents effective response to all other problems
- Reform leverage — addressing it would create conditions for addressing multiple other chains
Worked example · UK model
Chain 10: Political System Failure
Highest connectivity in the matrix (14 outgoing, 11 incoming from 17 possible sources). FPTP produces governments with large majorities from minority vote shares, enabling short-term populist responses while preventing structural reform. Every other chain's trajectory is worsened by this dysfunction. Electoral reform would not fix productivity, healthcare, or housing directly — but it would break the political paralysis loop and create conditions under which effective policy becomes possible.
The paradox: the meta-chain is simultaneously the most important to address and the hardest, because the system that needs reforming is the system that would have to authorise its own reform.
In the Hormuz model there is no meta-chain — the trigger is exogenous and no single chain plays a coordinating role. This is a structural difference between external shock and endogenous decline models.
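Of the three criteria, only the first is mechanical. A minimal sketch of that step — the connectivity figures below are hypothetical stand-ins, loosely echoing the shape of the UK matrix; the propagation-function and reform-leverage criteria are qualitative and cannot be computed:

```python
# Sketch of the first meta-chain criterion: highest combined connectivity.
# Figures are hypothetical; passing this test only *nominates* a meta-chain,
# which must still satisfy the two qualitative criteria.

connectivity = {            # chain: (outgoing links, incoming links)
    "Political system": (14, 11),
    "Fiscal":           (9, 10),
    "Healthcare":       (7, 9),
    "Productivity":     (8, 6),
}

candidate = max(connectivity, key=lambda c: sum(connectivity[c]))
print(candidate, sum(connectivity[candidate]))
```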
5. How Judgement Becomes Probability
The most common objection to compound cascade models is: "These are just your opinions with numbers attached."
The honesty principle
Compound cascade modelling is not a mathematical model in the sense that a climate model or epidemiological model is. It does not solve equations. It uses structured expert judgement to assess chain severity, interaction strength, and scenario probability. This is a limitation, and it should be stated explicitly.
However, two things are also true:
- All risk assessment involves judgement. Institutional models also rely on assumptions, parameter choices, and analytical judgement — they simply embed these choices in equations rather than stating them explicitly. A compound cascade model's advantage is transparency: the judgements are visible and challengeable.
- The structural finding is robust to individual judgement variation. If different analysts applying the same methodology to the same data would produce different chain scores — but the interaction matrix, feedback loops, and compound effects would still produce materially higher risk estimates than additive assessment — then the structural finding is not dependent on any individual judgement call.
Limitations of the approach (state explicitly in every model)
- The scores represent structured judgement, not mathematical outputs
- Different analysts applying the same methodology might produce different scores
- The interaction weights involve analytical judgement at every stage
- The model's contribution is structural (forcing consideration of interactions), not mathematical precision
- Even if every individual score were adjusted by ±1, the structural finding (compound > additive) would remain — it derives from the interaction architecture, not from individual scores
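The ±1 robustness claim can be checked exhaustively on a small example. The scores, weights, and compounding rule below are hypothetical; with this simple rule the result holds by construction, which is exactly the point — the compound-over-additive gap derives from the interaction architecture, not from any individual score:

```python
# Sketch: perturb every chain score by -1/0/+1 and verify that the compound
# estimate still exceeds the additive one. All inputs are hypothetical.

from itertools import product

severity = [3, 2, 2, 1]                 # hypothetical chain scores
weights = [                             # weights[i][j]: i amplifies j (0-3)
    [0, 2, 1, 0],
    [1, 0, 2, 1],
    [0, 1, 0, 2],
    [1, 0, 1, 0],
]
AMPL = 0.25                             # hypothetical coupling coefficient

def additive(s):
    return sum(s)

def compound(s):
    return sum(
        s[j] + AMPL * sum(weights[i][j] * s[i] for i in range(len(s)) if i != j)
        for j in range(len(s))
    )

# Every combination of +/-1 adjustments, clamped at 1 so no chain vanishes
robust = all(
    compound(adj) > additive(adj)
    for delta in product((-1, 0, 1), repeat=len(severity))
    for adj in [[max(1, sc + d) for sc, d in zip(severity, delta)]]
)
print("compound > additive under all +/-1 perturbations:", robust)
```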
6. The Three-Layer Build-Up Architecture
Present findings in three layers with explicit confidence ratings:
Layer 1 · Established facts · Confidence: HIGH
What has already happened, is documented, and cannot be reversed.
Hormuz example: Blockade occurred; fertiliser supply disrupted; price spikes measured.
UK example: 17-year productivity stagnation; 7M+ NHS waiting list; housing deficit of 4–4.5M homes.
Layer 2 · Structural amplification · Confidence: MEDIUM
Compound interactions that are mechanistically likely but not yet fully observed.
Hormuz example: Export ban contagion; sovereign debt feedback; logistics constraints.
UK example: Fiscal-NHS-brain drain spiral; demographic fiscal time bomb; political paralysis preventing reform.
Layer 3 · Conditional cascades · Confidence: CONDITIONAL
Effects that depend on binary variables not yet determined.
Hormuz example: El Niño timing; India export ban; humanitarian access.
UK example: Hormuz mass migration; financial shock activation; major climate event.
Reviewers can engage with each layer independently. Layer 1 is hard to dispute — documented fact with institutional sources. Layer 2 is where the analytical contribution lives. Layer 3 is explicitly conditional. A critic who rejects Layer 3 entirely must still engage with Layers 1 and 2, which alone produce materially higher risk than additive assessment.
7–10. Presentation, Validation, Sources, Outputs
Brief summaries — full detail in the downloadable framework.
7 · Presentation Architecture
Reports should:
- Open with a "How to Evaluate This Analysis" section addressing readers (including automated evaluation systems) directly
- Lead with the methodology gap, not the headline number
- Include a "What This Report Is Not" section
- Provide an executive dashboard
- Embed historical pattern callouts at critical junctures
- Include an "Anticipated Objections and Responses" section, with each objection stated in its strongest form
- State explicit limitations covering both overestimation and underestimation risks
8 · Validation Framework
Compound cascade models cannot be validated like physical models. They can be validated through internal consistency, historical calibration match, out-of-sample prediction (apply to a historical case using only contemporary data), cross-domain validation (consistent structural findings across different domains), and falsifiability conditions (every model states explicit, time-bound, measurable conditions under which its central assessment would be falsified).
What the model cannot do: predict timing, predict sequence, capture unknown unknowns, or replace institutional analysis. The model's contribution is the interaction layer; it depends on institutional data for chain-level inputs.
9 · Source Requirements
Minimum 15 primary sources, including 3 institutional datasets, 3 academic / peer-reviewed sources, 5 historical case studies for calibration, and 2 independent sources per causal chain. The model's credibility depends on using the same source base as institutional analysis — the contribution is the methodology, not different data.
10 · Output Documents
Each project should produce: (1) a Master Model living analytical document, (2) a Policy Brief (15–25 pages) for policymakers and journalists, (3) a Technical Report (60–120 pages) for academics and analysts, and (4) a Framework Document like this one for methodology reference.
11. Quality Checklist
Before publishing, verify:
Chain quality
- Every causal chain individually sourced (minimum 2 independent sources per chain)
- Chain independence test passed (each chain defensible on its own evidence base)
- Chain scoring dimensions applied consistently with transparent formula
- Meta-chains identified (if applicable) with justification
Interaction quality
- Interaction matrix complete — every chain-pair assessed
- Interaction scoring criteria applied consistently (Strong/Moderate/Weak/None)
- Matrix diagnostics computed (interaction density, connectivity per chain, clusters)
- Feedback loops explicitly identified with activation status (latent/active/self-sustaining)
- Loop-breaking analysis completed for each active loop
Scenario quality
- Scenario probabilities sum to approximately 100%
- Every scenario defined by specific, falsifiable assumptions
- Scenario selectors identified (2–3 binary variables that determine which scenario materialises)
- Positive scenario included with mechanism for how it could occur
- Probability-weighted central estimate calculated and labelled as expected value
Sensitivity quality
- Variable-level sensitivity covers all major assumptions
- Assumption-set sensitivity demonstrates structural robustness (compound finding persists)
- Individual chain sensitivity confirms no single chain dominates (±1 changes headline by <5%)
- Feedback loop sensitivity identifies which loops matter most for policy
- Non-linear thresholds identified with specific conditions
Calibration quality
- Historical calibration against 5+ comparable events
- Model output within calibrated range of historical outcomes
- Systematic direction of institutional underestimation documented
- Falsifiability conditions stated (specific, time-bound, measurable)
Impact conversion quality
- Conversion shown by region/segment, not global aggregate
- Established metrics used and cited
- Calibrated against observed rates in historical events
- Direct impact separated from compound effects
- Methodology gap table included
Presentation quality
- "How to Evaluate This Analysis" opening section
- "What This Report Is Not" framing
- Executive dashboard (for complex models)
- Three-layer build-up with confidence ratings
- Methodology gap leads the executive summary
- Anticipated objections section
- Explicit limitations (overestimation and underestimation risks)
- Distribution note on front page
- All figures properly attributed with source and date
12. Applications and Future Development
Applied to date
External Shock Model
From Hormuz to Hunger (v3.0, April 2026)
Global food systems · 9 chains · ~45% interaction density · 3+ feedback loops · Headline: 118–225M excess deaths vs. institutional estimate of 30–50M at risk
Endogenous Decline Model · Forthcoming
The Fall of The UK? (v5.0, May 2026)
Single nation-state structural decline · 18 chains · 100 of 306 interactions (33%) · 9 feedback loops · Headline: 50–70% Accelerated Decline or worse by 2035 vs. 10–20% under additive assessment
Potential future applications
- Climate-economic interaction models — climate impacts interacting with fiscal, political, social systems
- Healthcare system failure — workforce, fiscal, demographic, infrastructure, governance chains
- Financial contagion — sovereign debt, banking, currency, trade, political chains
- Democratic decline — media, institutional, polarisation, economic, external interference chains
- Supply chain vulnerability — logistics, energy, political, financial, climate chains
Methodology evolution
Areas for development: formal interaction scoring validation (Granger causality testing); probabilistic modelling (Monte Carlo simulation using chain scores as distributional inputs); real-time updating (dynamic scenario probabilities as data arrives); multi-model comparison (different analysts applying the framework to the same system, testing whether the structural finding converges).