EA Forum

The EA Forum (forum.effectivealtruism.org) is the primary online platform for discussion and debate within the effective-altruism community. It hosts posts on cause prioritization, strategic direction, moral philosophy, and practical implementation of EA principles — making it the intellectual commons where the movement’s key debates play out.

Role in the EA Ecosystem

The EA Forum occupies a distinct position relative to the other major forums in this wiki’s domain:

| Forum | Focus |
| --- | --- |
| EA Forum | Strategy, moral philosophy, cause prioritization |
| lesswrong | Rationality, broad AI and epistemics discussion |
| alignment-forum | Technical AI alignment research |

While lesswrong and the alignment-forum handle technical object-level questions about alignment approaches, the EA Forum is where strategic and moral questions are debated: How much should we prioritize AI versus other risks? What concrete projects should we fund? Is longtermism the right framework?

Key Debates Hosted

The forum hosts several of the most important internal debates within EA:

  • Longtermism vs. neartermism — Posts like “A Critique of Strong Longtermism” challenge the prioritization of future beings over present ones, while defenders argue that vast future populations justify focus on existential-risk.
  • AI safety strategy spectrum — A community survey revealed the forum is far from monolithic on AI strategy: 13% of respondents oppose AGI development outright, 26% favor pauses, and 21% support regulation.
  • Maxipok critiques — will-macaskill and Guive Assadi’s “Against Maxipok” argues that maximizing the probability of an okay outcome should not be the sole priority, as it risks missing non-catastrophic but enormous sources of value.
  • The “third way” for AGI — Posts arguing EA should move beyond preventing catastrophe to actively shaping positive outcomes in a post-AGI world (see summary-ea-in-age-of-agi).

The forum also hosts introductory topic pages on longtermism, existential-risk, and ai-safety, serving as entry points for newcomers.

Notable Contributors

Key posts on the forum come from influential EA thinkers including toby-ord (updating his assessment from The Precipice), will-macaskill, and holden-karnofsky, whose “most important century” hypothesis has been particularly influential in shaping the community’s focus on transformative AI.

Cultural Norms

The EA Forum values genuine disagreement and internal critique. The range of positions represented — from AI-pause advocacy to regulation-focused approaches to concerns about longtermist overreach — reflects a community that takes intellectual honesty seriously. This culture of constructive debate distinguishes it from more advocacy-oriented spaces.

Significance for This Wiki

The EA Forum is where the practical implications of this wiki’s philosophical and technical content are debated. The academic papers of nick-bostrom and derek-parfit, the technical alignment work on lesswrong, and the career frameworks of 80000-hours all converge on the EA Forum as the community decides how to allocate attention, funding, and talent across cause areas.