Future of Life Institute

The Future of Life Institute (FLI) is a nonprofit organization focused on reducing large-scale catastrophic risks, particularly from advanced AI, nuclear weapons, and biotechnology. Founded in 2014 by Max Tegmark, Skype co-founder Jaan Tallinn, and others, FLI is headquartered in Cambridge, Massachusetts, with a Brussels office.

Key Activities

FLI is best known in AI safety circles for:

  • The 2023 Pause Letter: FLI’s open letter “Pause Giant AI Experiments” called for a six-month moratorium on training AI systems more powerful than GPT-4. It was signed by Elon Musk, Stuart Russell, Yoshua Bengio, and thousands of others; notably, no major AI lab joined the proposed pause.
  • Asilomar AI Principles (2017): A set of 23 principles for beneficial AI development, signed by leading researchers.
  • Grant-making: FLI funds research on AI safety, nuclear risk reduction, and biotechnology risks. It is among the philanthropic backers of LawZero (June 2025), Bengio’s nonprofit lab building safe-by-design AI.
  • Policy engagement: FLI contributes to AI governance discussions in the EU and globally.

Brussels Office and Key People

FLI Brussels is the operational center for FLI’s EU work and arguably the most prominent international AI x-risk organization with a permanent EU presence. Notable people:

  • Risto Uuk — Head of European Policy and Research; co-author of 2501.04064v1; contributed to shaping the EU AI Act’s general-purpose AI / systemic risk provisions; runs the EU AI Act Newsletter (50,000+ subscribers); co-authoring [[the-ai-endgame|The AI Endgame]] (Wiley, forthcoming) with Lode Lauwaert

The office engages actively with the EU AI Office and the EU AI Act enforcement machinery.

LSAIR Conference

FLI co-organizes the LSAIR International Conference on Large-Scale AI Risks with KU Leuven’s Chair in Ethics and AI; it is the flagship academic AI x-risk conference in continental Europe. The second edition takes place on 23–24 June 2026 in Leuven.

Connection to This Wiki

  • Author Risto Uuk, co-author of the paper “Examining Popular Arguments Against AI Existential Risk” (2501.04064v1), is affiliated with FLI’s Brussels office.
  • The FLI pause letter is analyzed in 2501.04064v1 as a real-world test of the Checkpoints for Intervention Argument: the fact that no major lab joined the pause illustrates the coordination problem facing pre-development intervention.
  • FLI Brussels serves as the policy bridge for the Belgian academic x-risk cluster; see summary-ai-xrisk-belgium-europe.

Relationship to Other Organizations

FLI occupies a distinct position in the AI safety ecosystem:

  • More activist/advocacy-focused than technical research orgs like miri or redwood-research
  • More x-risk-focused than governance-only orgs, but more policy-engaged than pure research
  • Connected to the effective-altruism community but not exclusively EA-aligned
  • Often cited alongside future-of-humanity-institute (Oxford; closed in 2024) as foundational x-risk organizations