EA Forum — Key Posts on AI Safety and Longtermism

This summary covers a curated collection of eight influential posts and three introductory topic pages from the EA Forum (forum.effectivealtruism.org), representing the key debates within the effective-altruism community about ai-safety, longtermism, and existential-risk.

The Eight Key Posts

1. Which Side of the AI Safety Community Are You In?

A survey of EA Forum views on AI safety that reveals the community is far from monolithic: 13% oppose AGI development, 26% favor pauses, and 21% support regulation. This post is valuable for countering the oversimplified narrative that EA either wants to stop AI or accelerate it — the reality is a diverse spectrum of positions.

2. A Critique of Strong Longtermism

Challenges strong longtermism’s prioritization of future beings over present ones. Argues that vast future populations do not automatically outweigh near-term causes. This is one of the most important internal critiques within EA: it forces longtermists to defend their position rather than treating it as settled. The debate between strong longtermism and neartermism remains live within the community.

3. Against Maxipok: Existential Risk Isn’t Everything

By will-macaskill and Guive Assadi. Critiques the principle of maximizing the probability of an okay outcome (Maxipok) — the idea that reducing existential-risk should be prioritized above all else. Argues this framework risks missing non-catastrophic but still enormous sources of value. This post represents a nuanced position: existential risk matters, but it is not the only thing that matters.

4. My Model of EA and AI Safety

Outlines AI doom scenarios (extinction, value lock-in, S-risks), EA’s cause-neutral role, mitigations, and market failures in AI safety. Promotes the concept of “notkilleveryoneism” — a pragmatic framing that emphasizes the basic goal of not destroying humanity, without requiring agreement on more ambitious visions.

5. Ten AI Safety Projects I’d Like People to Work On

Proposes ten tractable AI safety projects, including security field-building, ai-governance research, evaluations, and lab monitoring. This is one of the more actionable posts in the collection, bridging the gap between abstract concern about AI risk and concrete things people can do about it.

6. The Precipice Revisited — Toby Ord

toby-ord revisits the existential-risk landscape since The Precipice, focusing on advances in AI and how the threat picture has evolved. See precipice-revisited for the detailed summary of this post.

7. Effective Altruism in the Age of AGI

Argues that EA should adopt a “third way” by embracing the mission of making the transition to a post-AGI society go well. See summary-ea-in-age-of-agi for the detailed summary.

8. Is Transformative AI the Biggest Existential Risk?

Asks the EA community whether transformative AI poses a greater existential risk than other threats such as pandemics. This is a key cause prioritization question: the answer determines how much of EA’s resources should flow to AI safety versus biosecurity, nuclear risk, and other cause areas.

Introductory Topic Pages

Longtermism

The EA Forum’s introductory definition of longtermism as a philosophy that prioritizes improving the long-term future. A useful starting point for newcomers.

Existential Risk

Defines existential-risk and covers risk estimates, arguments for its moral priority, and EA focus areas. Connects the academic work of nick-bostrom and toby-ord to the EA community’s practical priorities.

AI Safety

Introduces ai-safety as the effort to reduce risks from AI through technical, governance, and other interventions. Provides the EA community’s framing of AI safety as a cause area.

Key Themes Across the Posts

  1. Internal debate is healthy: The EA Forum hosts genuine disagreements between longtermists and neartermists, between AI-pause advocates and regulation advocates, and between Maxipok and more pluralistic approaches. The community treats these disagreements as a strength.

  2. Cause prioritization is central: Many posts are fundamentally about where to allocate attention and resources. The EA framework of comparing cause areas by scale, neglectedness, and tractability structures these debates.

  3. From abstract to concrete: The collection ranges from philosophical arguments (against strong longtermism) to actionable project proposals (ten AI safety projects). This breadth reflects EA’s ambition to bridge theory and practice.

  4. AI is increasingly dominant: While EA historically balanced many cause areas, these posts reflect a community increasingly focused on AI risk as the dominant concern — a shift that itself generates debate (see “A Critique of Strong Longtermism”).

Significance for the Library

The EA Forum is where the strategic and moral debates about AI safety happen at a community level. While lesswrong and the alignment-forum handle the technical object-level questions (see lesswrong-alignment-posts), the EA Forum is where questions like “how much should we prioritize AI vs. other risks?” and “what concrete projects should we fund?” are discussed.