Effective Altruism in the Age of AGI
This summary covers an EA Forum post arguing that effective altruism needs to fundamentally update its priorities and approach in light of AGI developments, adopting a “third way” that goes beyond both traditional EA cause areas and narrow AI safety.
Core Argument
The transition to a post-AGI society presents both enormous risks and opportunities. EA is uniquely positioned to contribute through its emphasis on evidence-based reasoning, cause prioritization, and willingness to take unconventional positions based on expected-value calculations. The post argues EA should embrace the mission of making the transition to a post-AGI society go well — not just preventing catastrophe, but actively shaping positive outcomes.
Neglected Cause Areas Identified
The post highlights five specific cause areas that are currently neglected but will become increasingly important as AGI approaches:
- AI welfare: The moral status of AI systems themselves — if advanced AI systems are sentient or have morally relevant experiences, this becomes a major ethical concern.
- AI character: Ensuring that AI systems develop or are given good values, beyond mere alignment with human instructions. This goes beyond technical alignment to questions about what kind of “character” we want AI to have.
- AI persuasion: The risk that AI systems could be used to manipulate human beliefs and decisions at scale, undermining autonomy and democratic processes.
- Human power concentration: The risk that AI could enable unprecedented concentration of power in the hands of a few individuals, corporations, or governments — even if the AI itself is “aligned” to those controllers.
- Space governance: As AI-enabled space colonization becomes feasible, questions about who governs space and how become pressing. This connects to longtermism since space colonization could dramatically expand humanity’s future.
Recommendations for the EA Community
The post makes concrete recommendations across several areas:
- Forum engagement: More discussion and debate about post-AGI scenarios on the EA Forum
- Curriculum updates: EA introductory materials should reflect the centrality of transformative AI
- Conference strategy: EA conferences should dedicate more programming to the AGI transition
- Recruiting and talent pipeline: Emphasize existential risk, transformative technology, and the “most important century” hypothesis (Holden Karnofsky’s argument that we may be living in the most pivotal period in human history)
Significance for the Library
This post represents an important strategic argument within EA: that the community needs to expand its aperture beyond “prevent AI catastrophe” to “shape the transition to a post-AGI world.” It connects to the broader debate about whether EA should remain cause-neutral or increasingly specialize in AI-related work. The five neglected cause areas it identifies — especially AI welfare and power concentration — are likely to become more prominent in EA discourse.
Related Pages
- effective-altruism
- ai-safety
- longtermism
- existential-risk
- ai-governance
- summary-ea-forum-key-posts
- precipice-revisited
- summary-ea-ai-books
- holden-karnofsky
- ea-forum
- ai-in-context-videos
- ea-content-library-inventory