Future of Humanity Institute
The Future of Humanity Institute (FHI) was a multidisciplinary research center at the University of Oxford focused on existential-risk, the long-term future of humanity, and the governance of transformative technologies. Founded in 2005 and directed by nick-bostrom, FHI was one of the earliest and most influential institutions dedicated to studying risks from advanced AI and other emerging technologies. The institute closed in April 2024 after operating for nearly two decades.
Research Contributions
FHI produced foundational work across several domains:
- Existential risk as a formal category — Bostrom’s work at FHI introduced and formalized the concept of existential-risk, defining an existential risk as a threat that could annihilate Earth-originating intelligent life or permanently curtail its potential. This framework became the intellectual foundation for the effective-altruism community’s focus on existential risk.
- AI governance — FHI contributed early and influential work on ai-governance, including Bostrom’s “vector field” approach to policy for superintelligent AI — identifying policy directions robust across different scenarios of how superintelligence might emerge.
- AI timelines and expert surveys — Bostrom’s survey of AI researchers on timelines to human-level AI (co-authored with Vincent C. Müller) was widely cited and shaped how the field thinks about forecasting AI progress.
- Optimal timing for superintelligence — FHI research formalized the tradeoff between developing superintelligence too early (before ai-alignment is solved) and too late (risking being overtaken by less careful actors); a toy version of this tradeoff is sketched below.
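To make the shape of this tradeoff concrete, here is a minimal sketch in Python. It is an illustration under invented assumptions (logistic progress on alignment, a constant hazard rate of being overtaken by a less careful actor), not FHI's actual formalization: a careful project that deploys at time t succeeds only if alignment is solved by t and no one else has deployed first.

```python
# Toy model of the deploy-too-early vs. deploy-too-late tradeoff.
# All parameters below are illustrative assumptions, not FHI's numbers:
#   - p_aligned(t): probability alignment is solved by year t (logistic ramp)
#   - p_not_overtaken(t): probability no less careful actor has deployed
#     by year t (constant-hazard exponential survival)

import math

def p_aligned(t, midpoint=15.0, steepness=0.4):
    """Probability the alignment problem is solved by year t (logistic)."""
    return 1.0 / (1.0 + math.exp(-steepness * (t - midpoint)))

def p_not_overtaken(t, hazard=0.05):
    """Probability no less careful actor has deployed by year t."""
    return math.exp(-hazard * t)

def p_good_outcome(t):
    """Chance a careful project deploying at year t is both aligned and first."""
    return p_aligned(t) * p_not_overtaken(t)

# Deploying too early loses on p_aligned; too late loses on p_not_overtaken.
# Scan candidate deployment years for the one maximizing joint success.
best_t = max(range(0, 51), key=p_good_outcome)
print(f"Optimal deployment year under these assumptions: t = {best_t}")
print(f"P(good outcome) = {p_good_outcome(best_t):.3f}")
```

The interior optimum falls out of the two opposing curves: any choice of growth curve for alignment progress and hazard rate for competitors produces the same qualitative early/late tradeoff, which is the point the FHI work formalized.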
Key People
- nick-bostrom — Founder and director; author of Superintelligence (2014), which brought AI risk to mainstream attention.
- ben-garfinkel — Research fellow who provided constructive critique of classic AI risk arguments while supporting expanded safety work.
- toby-ord — Senior research fellow; author of The Precipice (2020), which surveyed the full landscape of existential risks.
Legacy
FHI closed in 2024, but its intellectual legacy pervades the AI safety field. The institute trained and incubated researchers who went on to lead safety work at openai, anthropic, deepmind, and other organizations. Its research on existential risk, AI governance, and the long-term future continues to shape the frameworks used by effective-altruism and the broader AI safety community.
The global-priorities-institute, also at Oxford, carries forward some of FHI’s research agenda on prioritizing among existential risks.
Related Pages
- nick-bostrom
- ben-garfinkel
- toby-ord
- existential-risk
- ai-governance
- ai-safety
- longtermism
- effective-altruism
- global-priorities-institute
- future-of-life-institute
- academic-papers-index
- 80k-podcast-ben-garfinkel-ai-risk
- ai-alignment
- anthropic
- deepmind
- openai
- ai-risk-arguments
- rob-wiblin
- summary-bostrom-ai-expert-survey
- summary-bostrom-ai-policy
- summary-bostrom-existential-risk-priority
- summary-bostrom-optimal-timing