Summary: Existential Risk Prevention as Global Priority
Author: Nick Bostrom
Year: 2013
Source: bostrom-existential-risk-priority.pdf
Published in: Global Policy, Volume 4, Issue 1, February 2013
Main Argument
Building on his seminal 2002 paper (summary-bostrom-existential-risks), Bostrom presents a more rigorous economic and moral case that reducing existential risk should be treated as the dominant consideration for any agent acting out of impersonal concern for humankind. The paper’s central claim is that even tiny reductions in existential risk have enormous expected value — so enormous that existential risk reduction is “strictly more important than any other global public good.”
The argument hinges on the astronomical scale of potential future value. Even the most conservative estimate (confining consideration to biological humans living on Earth) yields at least 10^16 possible future human lives. Less conservative estimates — accounting for space colonization and digital minds — produce numbers like 10^54 subjective life-years. Against stakes of this magnitude, even a reduction in existential risk by “one millionth of one percentage point” is worth at least a hundred times the value of a million human lives.
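The arithmetic behind that last claim is worth spelling out (a worked restatement of the figures quoted above, using the conservative estimate): one millionth of one percentage point is $10^{-6} \times 10^{-2} = 10^{-8}$, so

$$
10^{-8} \times 10^{16} \text{ lives} = 10^{8} \text{ expected lives} = 100 \times \bigl(10^{6} \text{ lives}\bigr).
$$

The larger estimates only strengthen the conclusion: with $10^{54}$ subjective life-years at stake, the same $10^{-8}$ reduction is worth $10^{46}$ expected life-years.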
Key Concepts
The Maxipok Rule
Bostrom formalizes the heuristic he first sketched in 2002:
Maxipok: Maximize the probability of an “OK outcome,” where an OK outcome is any outcome that avoids existential catastrophe.
Maxipok is a satisficing rule, not an optimizing one. It differs crucially from maximin (“choose the action with the best worst-case outcome”). Since existential risk can never be fully eliminated, every action’s worst case includes catastrophe, so maximin would absurdly favor whichever action maximizes near-term enjoyment before the end: we should “start partying as if there were no tomorrow.” Maxipok instead directs us to concentrate philanthropic and policy effort on reducing the probability of existential catastrophe, the intervention whose expected value, by the paper’s argument, dominates all others.
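A toy sketch of the contrast between the two rules (all action names and numbers are illustrative assumptions, not from the paper):

```python
# Toy comparison of maximin vs. maxipok over two hypothetical actions.
# worst_case: value of the worst possible outcome under that action.
#             Existential catastrophe is possible under both, so worst
#             cases differ only in what happened before the end.
# p_ok:       probability of an "OK outcome" (no existential catastrophe).
actions = {
    "party_now":        {"worst_case": -0.99, "p_ok": 0.70},  # doom, but fun first
    "invest_in_safety": {"worst_case": -1.00, "p_ok": 0.95},  # doom, nothing to show
}

# Maximin compares only worst cases; since catastrophe looms either way,
# it perversely favors the action whose catastrophe was preceded by a party.
maximin_choice = max(actions, key=lambda a: actions[a]["worst_case"])

# Maxipok maximizes the probability of avoiding existential catastrophe.
maxipok_choice = max(actions, key=lambda a: actions[a]["p_ok"])

print(maximin_choice)   # party_now
print(maxipok_choice)   # invest_in_safety
```

The point of the sketch is structural: because no action drives the risk in the worst case to zero, maximin cannot reward risk reduction at all, while maxipok rewards it directly.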
Improved Classification of Existential Risks
Bostrom refines his 2002 taxonomy into four cleaner categories:
- Human extinction — Humanity dies out before reaching technological maturity
- Permanent stagnation — Humanity survives but never reaches technological maturity (subcategories: unrecovered collapse, plateauing, recurrent collapse)
- Flawed realization — Technological maturity is reached but in a dismally and irremediably flawed way (subcategories: unconsummated realization, ephemeral realization)
- Subsequent ruination — A good initial setup at technological maturity is later permanently ruined
This expanded framework highlights that extinction is not the only existential catastrophe. A future with vast computational resources but no conscious experience, or one where an intelligence-explosion produces a powerful but misaligned system that locks in bad values (value-lock-in), would also constitute existential catastrophe even without extinction.
Meta-Level Uncertainty
An important methodological contribution: when estimating the probability of a catastrophe, the probability that our scientific analysis itself is crucially flawed may dominate the first-order risk estimate. If analysis A says risk X has an extremely small probability P(X), the probability that A is wrong may be much larger than P(X). This means that for low-probability, high-consequence risks, most of the real risk resides in our uncertainty about our own risk assessments.
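A minimal formalization of this point (the decomposition is just the law of total probability; the numbers are illustrative assumptions, not taken from the paper): let $A$ be the event that the analysis is sound. Then

$$
P(X) = P(X \mid A)\,P(A) + P(X \mid \neg A)\,P(\neg A).
$$

If a sound analysis would give $P(X \mid A) = 10^{-9}$, but there is a $10^{-3}$ chance the analysis is flawed and, say, $P(X \mid \neg A) = 10^{-2}$ conditional on a flaw, then the second term contributes $10^{-5}$, swamping the first by four orders of magnitude.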
Sustainability as Trajectory, Not State
Bostrom reframes the ideal of sustainability in dynamic terms. Using a rocket analogy: a rocket on the launch pad is in a “sustainable state” but going nowhere; a rocket in flight is unsustainable (burning fuel fast) but on a trajectory toward the most desirable sustainable state (orbit). Humanity’s situation is analogous — we should not seek to freeze ourselves in a sustainable state but rather pursue a sustainable trajectory that minimizes existential catastrophe while moving toward technological maturity.
The “Black Ball” Problem
The paper introduces what would later be called the “vulnerable world hypothesis” (developed fully in a separate paper): as we repeatedly draw from the “urn of possible technological discoveries,” we risk eventually drawing a “black ball” — an easy-to-make technology that causes extreme harm and against which no defense is feasible. This risk is particularly acute without effective global coordination and surveillance.
Multiple Ethical Perspectives Converge
Bostrom demonstrates that the priority of existential risk reduction is robust across multiple ethical frameworks:
- Utilitarian — Astronomical expected value from preventing extinction
- Preference-satisfactionist — An existential catastrophe would frustrate the strong preferences of billions
- Virtue-based / project-based — Robert Adams’ view that we owe loyalty to “the project of the future of humanity”
- Democratic — Most people, upon informed deliberation, would favor existential risk mitigation
- Custodial — We have duties to preserve what our ancestors built and transmit it to descendants
- Theological — Destroying God’s creation would presumably displease the creator
Barriers and Grounds for Optimism
Bostrom catalogs the obstacles: scope insensitivity of moral intuitions, free-rider problems, academic incentives favoring narrow disciplinary research, the reactive bias of institutions, and the risk that resources flow to easier-to-study but less important risks. Yet he also notes grounds for optimism: many key concepts are new; public awareness of global risks is increasing; the long-term trend toward greater political integration continues; and general improvements in rationality and institutional capacity will differentially funnel resources toward the most important causes.
Significance
This paper is one of the foundational texts of longtermism. Its expected-value argument — that reducing existential risk dominates other philanthropic objectives by many orders of magnitude — became the central pillar of the effective-altruism movement’s turn toward existential risk and ai-safety as priority cause areas. The Maxipok rule, the fourfold classification of existential risk, and the “sustainable trajectory” framing have become standard conceptual tools in the field.
The paper also shaped institutional priorities: it provided much of the intellectual rationale for organizations like the future-of-humanity-institute, the global-priorities-institute, and the focus of open-philanthropy on catastrophic and existential risks.
Related Pages
- nick-bostrom
- existential-risk
- longtermism
- effective-altruism
- ai-safety
- population-ethics
- value-lock-in
- intelligence-explosion
- differential-development
- cause-prioritization
- future-of-humanity-institute
- global-priorities-institute
- open-philanthropy
- toby-ord
- derek-parfit
- summary-bostrom-existential-risks
- academic-papers-index
- summary-ea-ai-books
- summary-bostrom-ai-policy