Global Priorities Institute

The Global Priorities Institute (GPI) is an academic research center at the University of Oxford that applies the tools of rigorous philosophy and economics to the question of how to do the most good. It represents the scholarly wing of the effective-altruism movement, providing the philosophical infrastructure that underpins EA cause prioritization.

Mission and Approach

GPI conducts foundational research on global priorities — the question of which problems humanity should focus on and how resources should be allocated across cause areas. This research draws on moral philosophy, decision theory, and economics to address questions that are critical to EA but too abstract or fundamental for policy-focused organizations to tackle.

The institute’s work bridges the gap between pure academic philosophy and the practical decision-making that EA demands. It takes questions like “How should we weigh the interests of future generations?” and “How should we act under moral uncertainty?” and develops rigorous frameworks for answering them.

Key Researchers and Publications

GPI’s research output includes work by several philosophers whose writing is foundational to this wiki:

  • will-macaskill — Co-founder of the EA movement and author of What We Owe the Future, which makes the philosophical case for longtermism. A GPI paper he co-authored provides the academic definition and defense of effective-altruism as both a research program and a social movement. His PhD thesis on normative uncertainty developed the framework of metanormativism — maximizing expected choice-worthiness across moral theories — which explains why EA spreads resources across cause areas rather than betting everything on a single moral theory.

  • nick-bostrom — Author of five papers in the academic papers collection, including the foundational work that introduced existential-risk as a formal category (2002) and the argument that existential risk prevention is a global priority (2013). His work on superintelligence policy and AI timelines has shaped both the academic and practical sides of AI safety.

  • derek-parfit — While not directly affiliated with GPI, Parfit’s work on personal identity and population ethics provides philosophical foundations that GPI researchers build upon, particularly for longtermism and population-ethics.
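The "maximizing expected choice-worthiness" idea from MacAskill's thesis can be made concrete with a small sketch. This is an illustrative toy example, not GPI's actual model: the theory names, action names, credences, and choice-worthiness scores below are all made up for demonstration.

```python
# Toy sketch of metanormativism: under uncertainty about which moral
# theory is true, rank actions by expected choice-worthiness (EC),
# weighting each theory's verdict by your credence in that theory.

# Hypothetical credences in three moral theories (sum to 1).
credences = {"totalist": 0.5, "person_affecting": 0.3, "deontological": 0.2}

# Hypothetical choice-worthiness of each action under each theory,
# on an (assumed) common cardinal scale.
choiceworthiness = {
    "fund_x_risk":  {"totalist": 100, "person_affecting": 10, "deontological": 20},
    "fund_bednets": {"totalist": 40,  "person_affecting": 50, "deontological": 30},
}

def expected_choiceworthiness(action: str) -> float:
    """EC(a) = sum over theories T of credence(T) * CW_T(a)."""
    return sum(credences[t] * cw for t, cw in choiceworthiness[action].items())

# EC(fund_x_risk)  = 0.5*100 + 0.3*10 + 0.2*20 = 57.0
# EC(fund_bednets) = 0.5*40  + 0.3*50 + 0.2*30 = 41.0
best = max(choiceworthiness, key=expected_choiceworthiness)
```

Note that the framework assumes choice-worthiness is comparable across theories, a contested assumption that MacAskill's thesis addresses at length; the sketch simply takes a common scale as given.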

Significance for This Wiki

GPI is where the philosophical arguments underlying EA cause prioritization are developed with academic rigor. The institute’s work on longtermism, existential-risk, normative uncertainty, and population-ethics provides the theoretical foundations that organizations like 80000-hours then translate into practical career advice and problem rankings. Without GPI’s research, EA’s prioritization framework would lack its philosophical grounding.