Academic Papers Index — Bostrom, Parfit, MacAskill
This page indexes nine freely available academic papers by three philosophers whose work is foundational to effective-altruism, existential-risk research, and ai-safety: nick-bostrom (5 papers), derek-parfit (2 papers), and will-macaskill (2 papers). All papers have direct PDF download links.
Nick Bostrom — 5 Papers
1. Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards (2002)
The paper that introduced the concept of existential-risk as a formal category. Bostrom defines existential risks as threats that could annihilate Earth-originating intelligent life or permanently curtail its potential. He analyzes scenarios — including AI, biotechnology, nanotechnology, and physics experiments — and argues for prioritizing their mitigation due to the vast future welfare implications. This paper laid the intellectual groundwork for the EA community’s focus on existential risk.
2. Existential Risk Prevention as Global Priority (2013)
Builds on the 2002 paper to make the economic and moral case that even small probability reductions in existential risk have enormous expected value. Bostrom argues this holds across different value theories (utilitarian, deontological, virtue-based), making existential risk reduction a robust priority. This paper is central to the longtermism argument that dominates EA cause prioritization today.
3. Public Policy and Superintelligent AI: A Vector Field Approach
Explores policy frameworks for governing superintelligent AI systems. Rather than proposing specific regulations, Bostrom introduces a “vector field” approach — identifying policy directions that are robust across different scenarios of how superintelligence might emerge. This is an early contribution to ai-governance thinking.
4. Future Progress in Artificial Intelligence: A Survey of Expert Opinions
Co-authored with Vincent C. Müller, this paper surveys AI researchers on timelines and expected progress toward human-level AI. The results — showing wide disagreement among experts but a median expectation of human-level AI within decades — have been widely cited in AI safety arguments and have influenced how the field thinks about AI timelines.
5. Optimal Timing for Superintelligence
Analyzes when it would be optimal for humanity to develop superintelligent AI. This is a strategic question: developing superintelligence too early (before alignment is solved) risks catastrophe, while delaying too long risks being overtaken by less careful actors. The paper formalizes this tradeoff.
Derek Parfit — 2 Papers
6. Is Personal Identity What Matters? (2007)
Argues that personal identity — the question of what makes you the same person over time — is not what matters in survival. Parfit defends a reductionist view, emphasizing psychological continuity (“Relation R”) over strict identity. While not directly about AI or EA, Parfit’s work on personal identity informs debates about digital minds, mind uploading, and the moral status of future beings.
7. Personal Identity (1971)
Parfit’s landmark paper in The Philosophical Review introducing his influential views on personal identity. This earlier paper established the framework that the 2007 paper elaborates. Parfit’s ideas underpin key questions in population-ethics and the moral weight of future persons — questions central to longtermism.
Will MacAskill — 2 Papers
8. Normative Uncertainty (PhD Thesis, 2014)
Develops metanormativism — the idea that when we are uncertain which moral theory is correct, we should maximize expected choice-worthiness across theories. This framework allows rational action under moral uncertainty and has direct implications for EA cause prioritization: it explains why one might spread resources across cause areas rather than betting everything on one moral theory being correct.
9. Effective Altruism (Global Priorities Institute Paper)
A philosophical examination of effective-altruism as both a research program and a social movement. This co-authored paper provides the academic foundation for EA: defining it, defending it against objections, and situating it within moral philosophy.
Cross-Cutting Themes
- From philosophy to action: All three philosophers bridge abstract philosophy and practical decision-making. Bostrom’s risk analysis informs policy, Parfit’s identity work informs population ethics, and MacAskill’s normative uncertainty framework informs cause prioritization.
- The value of the future: Bostrom’s existential risk papers and Parfit’s identity papers converge on a shared conclusion: the future matters enormously, and we should act now to protect it.
- Foundational to EA: These nine papers collectively form much of the philosophical infrastructure of the EA movement. Understanding them is essential for understanding why EA prioritizes the cause areas it does.
Additional Resources
- Nick Bostrom’s papers: nickbostrom.com
- Will MacAskill’s research: williammacaskill.com/research
- Global Priorities Institute: globalprioritiesinstitute.org
- PhilPapers: philpapers.org
Related Pages
- nick-bostrom
- derek-parfit
- will-macaskill
- existential-risk
- longtermism
- ai-safety
- ai-governance
- effective-altruism
- population-ethics
- summary-ea-ai-books
- summary-peter-singer-books
- future-of-humanity-institute
- global-priorities-institute
- ai-in-context-videos
- summary-bostrom-ai-expert-survey
- summary-bostrom-ai-policy
- summary-bostrom-existential-risk-priority
- summary-bostrom-existential-risks
- summary-bostrom-optimal-timing
- ea-content-library-inventory
- summary-macaskill-effective-altruism
- summary-macaskill-normative-uncertainty
- summary-parfit-personal-identity-1971
- summary-parfit-personal-identity-what-matters