The Precipice Revisited — by Toby Ord

This summary covers Toby Ord’s EA Forum post reflecting on his book The Precipice and updating his assessment of the existential-risk landscape in light of developments since the book’s publication in 2020.

Key Themes

1. Existential Security

Ord introduces and develops the concept of existential security — a state where humanity has durably reduced existential risk to acceptable levels. This is the positive counterpart to existential risk: rather than merely surviving each individual threat, humanity would reach a point where its long-term future is reasonably secure. Achieving existential security is framed as one of the great projects of civilization.

2. Updated Risk Landscape

Perceptions of existential risks have evolved significantly since The Precipice was published. The most notable shift is the increased prominence of AI risk. When Ord wrote the book, he assigned unaligned AI a ~10% probability of causing an existential catastrophe this century — already the highest estimate for any single risk. Subsequent developments in AI capabilities have, if anything, reinforced that assessment. Other risks (pandemics, nuclear war) have also been reassessed in light of events such as COVID-19 and heightened geopolitical tensions.

3. AI as Dominant Risk

AI developments since publication have reinforced the book’s emphasis on AI as the leading existential risk. The rapid progress in large language models, the scaling of AI capabilities, and the growing recognition of alignment challenges have made the AI risk chapter of The Precipice more relevant, not less. Ord’s assessment that unaligned AI is the single largest existential risk facing humanity appears to be holding up.

4. Challenges in Mobilization

Ord discusses the difficulty of mobilizing action on speculative long-term risks. Despite growing awareness of AI risk, translating concern into effective action remains hard: the speculative nature of the risks makes them easy to dismiss, their technical complexity makes them hard to communicate, and the required coordination (among labs, governments, and researchers) is difficult to achieve.

5. Path Forward

Ord outlines priorities for the EA and x-risk communities going forward. The emphasis is on continuing to build the intellectual case for existential risk reduction, developing concrete policy proposals, supporting technical AI safety research, and expanding the community of people who take these risks seriously.

Connection to The Precipice

The Precipice (2020) — summarized in summary-ea-ai-books — established the intellectual framework for thinking about existential risk. This “revisited” post serves as a status update: what has changed, what has been validated, and what remains to be done. The core thesis — that we live at a uniquely dangerous and pivotal time — has been strengthened by events since publication.

Significance for the Library

This post is valuable as a bridge between The Precipice (written before GPT-3 and the current AI boom) and the present moment. It shows how one of the leading thinkers on existential risk has updated his views in response to rapid AI progress, and it reinforces the case for treating AI safety as the highest-priority cause area within effective altruism.