Will MacAskill

William MacAskill is a Scottish philosopher at the University of Oxford and one of the co-founders of the effective-altruism movement. His academic work and public advocacy have made him one of the most recognizable figures in EA, and his book What We Owe the Future brought longtermism to a mainstream audience.

Key Works

What We Owe the Future (2022)

MacAskill’s philosophical case for longtermism — the idea that positively influencing the long-term future is a key moral priority of our time. The book covers:

  • population-ethics — How to weigh the interests of future people
  • Value lock-in — The risk that a narrow set of values becomes permanently dominant, potentially through AI systems aligned to a small group
  • Trajectory changes — How actions today can permanently alter humanity’s long-term trajectory
  • The moral weight of future people — The argument that because the future could contain vastly more people than the present, the long-term future deserves significant moral weight

Academic Papers

MacAskill’s scholarly contributions, published through the global-priorities-institute, include:

  • Normative Uncertainty (PhD Thesis, 2014) — Develops metanormativism: when uncertain which moral theory is correct, maximize expected choice-worthiness across theories. This framework explains why EA spreads resources across cause areas rather than betting everything on one moral theory.
  • Effective Altruism (GPI Paper) — Co-authored philosophical examination of effective-altruism as both a research program and a social movement, providing its academic definition and defense.
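The "maximize expected choice-worthiness" procedure from Normative Uncertainty can be sketched numerically. The sketch below is purely illustrative, not from MacAskill's thesis: the theory names, credences, and scores are made-up assumptions, and it sets aside the real framework's hard problem of making choice-worthiness scores comparable across theories.

```python
# Illustrative sketch of maximizing expected choice-worthiness under
# moral uncertainty. All numbers are hypothetical.

# Credence (subjective probability) assigned to each moral theory.
credences = {"utilitarianism": 0.6, "deontology": 0.4}

# How choice-worthy each action is according to each theory
# (assumes scores are comparable across theories, which the real
# framework treats as a substantive further problem).
choiceworthiness = {
    "fund_bednets":    {"utilitarianism": 10, "deontology": 5},
    "break_a_promise": {"utilitarianism": 12, "deontology": -20},
}

def expected_choiceworthiness(action):
    """Credence-weighted average of an action's score across theories."""
    return sum(credences[t] * choiceworthiness[action][t] for t in credences)

# Pick the action with the highest expected choice-worthiness.
best = max(choiceworthiness, key=expected_choiceworthiness)
print(best)  # fund_bednets (8.0 vs -0.8)
```

Note how the verdict hedges across theories: breaking a promise scores highest under utilitarianism alone, but its steep deontological penalty drags down its expected choice-worthiness, which mirrors why EA spreads resources across cause areas rather than betting everything on one theory.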

EA Forum Contributions

MacAskill has contributed influential posts to the ea-forum, including “Against Maxipok” (with Guive Assadi), which critiques the principle of maximizing the probability of an “okay” outcome. The post’s nuanced position — that existential-risk matters but is not the only thing that matters — illustrates the intellectual rigor he brings to internal EA debates.

Role in the EA Movement

MacAskill is one of the intellectual architects of effective altruism. His contributions span:

  • Philosophical foundations — Providing the academic grounding for EA’s approach to cause prioritization and moral uncertainty
  • Public communication — Making complex philosophical arguments accessible to general audiences through What We Owe the Future
  • Institution building — Co-founding the EA movement and contributing to the global-priorities-institute’s research agenda
  • Internal critique — Willingness to challenge positions within EA (as in the Maxipok critique), modeling the intellectual honesty the movement values

Significance for This Wiki

MacAskill provides the philosophical case for why the long-term future matters — and specifically why AI risk deserves priority attention within that framework. His work on normative uncertainty explains EA’s pluralistic approach to cause areas, while What We Owe the Future articulates why value lock-in through AI systems is among the most important risks humanity faces. He is essential for understanding the “why” behind EA’s prioritization of ai-safety.