Rationality: From AI to Zombies — by Eliezer Yudkowsky

This summary covers the reference file for Eliezer Yudkowsky’s foundational work on rationality, originally written as blog posts on Overcoming Bias and LessWrong between 2006 and 2009 and later compiled into a six-volume set containing 333 essays organized into 26 sequences (A through Z), plus supplemental material.

Overview

Rationality: From AI to Zombies (often called “The Sequences”) is the intellectual foundation of the rationalist community and, by extension, a major influence on the AI safety movement. The work systematically examines how humans think, where thinking goes wrong, and how to think better — with an eye toward the implications for understanding artificial intelligence.

The book is freely available: the web version lives at readthesequences.com, and ebook versions (PDF/EPUB/MOBI) are offered on a pay-what-you-want basis through intelligence.org (the Machine Intelligence Research Institute).

The Six Books

Book I: Map and Territory

Core sequences on how beliefs should map to reality. Covers predictably wrong thinking (systematic cognitive biases), fake beliefs (beliefs held for social rather than epistemic reasons), and noticing confusion (the skill of recognizing when your mental model fails to predict reality). The “map and territory” metaphor — that beliefs are maps and reality is the territory — is central to the entire work.

Book II: How to Actually Change Your Mind

Addresses the practical challenge of updating beliefs in the face of evidence. Covers overly convenient excuses, how politics corrupts epistemics, and the case against rationalization (backward reasoning from a desired conclusion to supporting arguments). This volume tackles the emotional and social barriers to rational belief revision.

Book III: The Machine in the Ghost

Examines the nature of minds and intelligence. Covers fragile purposes (how complex goals can break down) and how evolution shaped cognitive biases. This volume bridges from individual rationality to thinking about minds in general — including artificial minds — setting up the connection to AI.

Book IV: Mere Reality

Discusses reductionism and how to reason about the physical world. Covers lawful truth (the idea that the universe runs on comprehensible laws), the nature of science, and physicalism. Provides the metaphysical grounding for the rationalist worldview.

Book V: Mere Goodness

Ethics and values from a rationalist perspective. Covers fake preferences (values we claim but do not actually hold), quantified humanism (applying numbers to humanitarian questions), and value theory. This volume connects to EA-relevant questions about how to reason about doing good.

Book VI: Becoming Stronger

Practical rationality and community-building. Covers the craft of rationality (concrete techniques for better thinking) and the formation and maintenance of communities of rationalists. This volume is about making the preceding theory actionable.

Significance for the EA and AI Safety Library

The Sequences are essential background reading for understanding the intellectual culture of the AI safety field. Key contributions include:

  • Epistemic hygiene: The vocabulary and norms for evaluating beliefs (priors, updating, calibration, motivated reasoning) that the AI safety community uses daily; a minimal worked example of a Bayesian update follows this list.
  • The bridge from rationality to AI risk: Yudkowsky’s progression from “how to think clearly” to “what happens when we build a mind that thinks clearly but has goals different from ours” is the origin story of modern AI alignment concern.
  • Foundational concepts for alignment: Ideas like the orthogonality thesis (intelligence and goals are independent), instrumental convergence, and the difficulty of specifying human values are first articulated or popularized in these essays.
  • Cultural foundation: The Sequences created the intellectual community (LessWrong) that later spawned the Alignment Forum, MIRI, and much of the AI safety research ecosystem.
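
To make the “priors” and “updating” vocabulary concrete, here is a minimal sketch of a single Bayesian update in Python. The scenario and numbers are invented for illustration and do not come from the book: a claim starts with a prior of 0.3, and a piece of evidence is four times as likely if the claim is true as if it is false.

    # Bayes' rule: P(H | E) = P(E | H) P(H) / [P(E | H) P(H) + P(E | ~H) P(~H)]
    def bayes_update(prior, likelihood_if_true, likelihood_if_false):
        """Return the posterior probability of a hypothesis after seeing evidence."""
        joint_true = prior * likelihood_if_true            # P(E | H) * P(H)
        joint_false = (1 - prior) * likelihood_if_false    # P(E | ~H) * P(~H)
        return joint_true / (joint_true + joint_false)

    # Hypothetical numbers: prior belief 0.3; evidence is 4x likelier if the claim is true.
    posterior = bayes_update(prior=0.3, likelihood_if_true=0.8, likelihood_if_false=0.2)
    print(f"posterior = {posterior:.3f}")   # 0.24 / (0.24 + 0.14) ≈ 0.632

Calibration, in the same vocabulary, is the practice of checking whether beliefs held at a given confidence level turn out to be true at roughly that rate over many such updates.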

Related Resources

  • Rationality Abridged — a condensed version on LessWrong
  • The EA Handbook — covers related material from the EA perspective
  • Best of LessWrong (Alignment) — 88 nominated posts on alignment topics