P(doom)
P(doom) is the subjective probability that AI causes existentially catastrophic outcomes for humanity. The metric has evolved from informal forum slang into a serious tool used by researchers, policymakers, and industry leaders to express their assessments of AI existential risk. The AI Safety Atlas devotes an appendix to it (the Ch.2 appendix on quantifying existential risk).
What “Doom” Encompasses
What counts as “doom” varies by user, but the term generally includes:
- Human extinction
- Permanent disempowerment of humanity
- Civilizational collapse from which humanity cannot recover its full potential
Definitions sometimes specify a timeframe (50 years, 100 years) and sometimes don’t; some include catastrophic-but-recoverable outcomes and some don’t. This inconsistency is one of the metric’s biggest limitations.
Why P(doom) Is Inherently Uncertain
Three structural challenges:
- No historical data — unlike most risk assessments, no empirical base rate for AI-driven extinction exists
- Reliance on theory and judgment about scenarios that have never occurred
- No standard methodology — each estimate reflects subjective assessment of timelines, alignment difficulty, governance, failure modes
This makes P(doom) less like a probability estimate from epidemiology and more like an expert prior — useful as input to decisions, not as an objective measurement.
Range of Expert Estimates
In a 2023 survey, AI researchers’ mean estimate of extinction risk within 100 years was 14.4%. Individual estimates span almost the full probability range:
| Person | P(doom) | Wiki page |
|---|---|---|
| Roman Yampolskiy | 99.9% | roman-yampolskiy |
| Eliezer Yudkowsky | >95% | eliezer-yudkowsky |
| Dan Hendrycks | >80% | — |
| Holden Karnofsky | 50% | holden-karnofsky |
| Paul Christiano | 46% | paul-christiano |
| Dario Amodei | 10–25% | — |
| Yoshua Bengio | 20% | yoshua-bengio |
| Geoffrey Hinton | 10–20% | geoffrey-hinton |
| Elon Musk | 10–30% | — |
| Vitalik Buterin | 10% | — |
| Yann LeCun | <0.01% | — |
| Marc Andreessen | 0% | — |
The wide variation is itself informative: knowledgeable experts reasoning from the same evidence reach radically different conclusions.
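How the spread is summarized matters too. Below is a minimal Python sketch (ours, not any survey’s methodology) that pools the table’s estimates under three common rules; the simplifications are our own: ranges become midpoints, “>X” becomes X, and the 0% entry is clipped so odds are defined.

```python
import statistics

# Point estimates of P(doom) from the table above. Simplifying
# assumptions: ranges become midpoints, ">X" becomes X, and the
# 0% entry is clipped to 0.1% so that its odds are defined.
estimates = {
    "Yampolskiy": 0.999, "Yudkowsky": 0.95,   "Hendrycks": 0.80,
    "Karnofsky":  0.50,  "Christiano": 0.46,  "Amodei": 0.175,
    "Bengio":     0.20,  "Musk": 0.20,        "Hinton": 0.15,
    "Buterin":    0.10,  "Andreessen": 0.001, "LeCun": 0.0001,
}
ps = list(estimates.values())

def pool_geo_mean_odds(probs):
    """Pool probabilities via the geometric mean of their odds."""
    g = statistics.geometric_mean([p / (1 - p) for p in probs])
    return g / (1 + g)

print(f"arithmetic mean:        {statistics.mean(ps):.3f}")    # ~0.38
print(f"median:                 {statistics.median(ps):.3f}")  # ~0.20
print(f"geometric mean of odds: {pool_geo_mean_odds(ps):.3f}") # ~0.25
```

The arithmetic mean is pulled up by the extreme high estimates, while the geometric mean of odds is sensitive to the extreme low ones; no rule is canonical, which is one reason single summary figures like the 14.4% mean should be read with care.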
Use of the Metric
Despite its limitations, P(doom) plays useful roles:
- Personal calibration — forces individuals to commit to a probability rather than vague concern levels
- Comparative benchmarking — disagreements become tractable when both parties name a number
- Policy input — “the substantial probability mass that knowledgeable experts place on catastrophic risks — including those who developed the AI systems creating these risks — suggests the risk scenarios deserve serious attention rather than dismissal as science fiction.”
- Resource allocation argument — even a modest P(doom) implies an expected-value case for safety investment (see the sketch after this list)
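To make the expected-value argument concrete, here is a minimal sketch; every number in it is an illustrative assumption of ours, not a figure from the literature.

```python
# Minimal expected-value sketch. Every number below is an illustrative
# assumption, not an estimate from the literature.
p_doom = 0.05            # a deliberately "modest" P(doom)
value_at_stake = 1e15    # placeholder valuation of humanity's future, in $
risk_reduction = 0.01    # assumed fraction of the risk that safety work removes
safety_budget = 1e10     # hypothetical global safety investment, in $

expected_loss_averted = p_doom * risk_reduction * value_at_stake
print(f"expected loss averted: ${expected_loss_averted:,.0f}")               # $500,000,000,000
print(f"benefit/cost ratio:    {expected_loss_averted / safety_budget:.0f}x") # 50x
```

Under these placeholder numbers, even a 1% relative risk reduction dominates the hypothetical budget; the argument’s force depends entirely on the inputs, which is exactly why people argue over P(doom).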
Critiques
- Estimate inflation/deflation cycles — public P(doom) numbers respond to social pressure within communities, not just evidence
- Definition smuggling — claims of low P(doom) sometimes redefine “doom” narrowly
- Calibration impossibility — there is no feedback loop to calibrate P(doom) estimates against outcomes, since the outcome is unique and irreversible (see the scoring-rule sketch after this list)
- Performative function — public P(doom) statements often serve communicative roles (signaling membership, reassuring stakeholders) more than epistemic ones
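The calibration critique can be made concrete with a scoring rule. The Brier score, shown in a small sketch of our own below, rewards well-calibrated forecasts, but only once outcomes are observed; repeated forecasts like rain can be scored and recalibrated, while a one-shot P(doom) forecast never resolves into the data the score needs.

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probabilistic forecasts and 0/1 outcomes."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Repeated, resolvable events: outcomes arrive, so forecasters can be
# scored and recalibrated over time.
rain_forecasts = [0.9, 0.2, 0.7, 0.1]
rain_outcomes = [1, 0, 1, 0]
print(brier_score(rain_forecasts, rain_outcomes))  # 0.0375

# P(doom): a single forecast whose outcome either never arrives or
# arrives exactly once, with no forecaster left to update. There is no
# sequence of resolved outcomes, hence no score and no feedback loop.
doom_forecasts = [0.2]
doom_outcomes = []  # unresolved; brier_score cannot be computed
```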
Connection to Wiki
- existential-risk — P(doom) is the quantified version
- ai-risk-arguments — debate over P(doom) is largely debate over the underlying arguments
- atlas-ch1-capabilities-08-appendix-expert-surveys — the qualitative, quote-based companion to the quantitative Ch.2 appendix
- summary-bostrom-ai-expert-survey — the 2014 Müller-Bostrom survey precedes the formalization of P(doom) but uses similar methodology
- 2501.04064v1 — Swoboda et al. critique specific arguments that drive low P(doom)
- ben-garfinkel — Garfinkel’s skepticism implies a low P(doom), though he does not formally give a number
- Individual entity pages can cite the estimates collected here
Related Pages
- existential-risk
- ai-risk-arguments
- risk-decomposition
- eliezer-yudkowsky
- paul-christiano
- holden-karnofsky
- yoshua-bengio
- geoffrey-hinton
- roman-yampolskiy
- ben-garfinkel
- atlas-ch2-risks-09-appendix-quantifying-existential-risks
- atlas-ch1-capabilities-08-appendix-expert-surveys
- summary-bostrom-ai-expert-survey
- 2501.04064v1
- ai-safety-atlas-textbook
Sources cited
Primary URLs harvested from this page’s summary references. Auto-generated by scripts/backfill_citations.py; edit by re-running, not by hand.
- AI Safety Atlas Ch.1 — Appendix: Expert Surveys — referenced as [[atlas-ch1-capabilities-08-appendix-expert-surveys]]
- AI Safety Atlas Ch.2 — Appendix: Quantifying Existential Risks — referenced as [[atlas-ch2-risks-09-appendix-quantifying-existential-risks]]
- Examining Popular Arguments Against AI Existential Risk: A Philosophical Analysis — referenced as [[2501.04064v1]]