LessWrong
LessWrong is an online community and forum dedicated to rationality, clear thinking, and, increasingly, ai-safety discourse. Founded by eliezer-yudkowsky in 2009, it grew out of his blog posts on Overcoming Bias (2006–2009), which were later compiled into the six-book Rationality: From AI to Zombies (commonly called “The Sequences”).
Origins and Mission
The platform was created to cultivate a community of people committed to thinking more clearly — identifying cognitive biases, updating beliefs based on evidence, and applying rigorous reasoning to important questions. The Sequences provided the intellectual foundation: a systematic examination of how humans think, where thinking goes wrong, and how to think better, with an eye toward understanding artificial intelligence.
LessWrong’s core vocabulary — priors, updating, calibration, motivated reasoning, the map-territory distinction — has become the shared language of the AI safety community.
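These terms come from Bayesian probability. As an illustration of what “priors” and “updating” refer to, here is a minimal sketch of a Bayesian update; the hypothesis and numbers are purely illustrative, not drawn from any particular LessWrong post:

```python
# Minimal sketch of Bayesian updating: revise a prior belief in light of
# evidence. All numbers below are illustrative.

def update(prior: float, p_evidence_if_true: float, p_evidence_if_false: float) -> float:
    """Return P(hypothesis | evidence) via Bayes' rule."""
    joint_true = prior * p_evidence_if_true
    joint_false = (1 - prior) * p_evidence_if_false
    return joint_true / (joint_true + joint_false)

# Start at a 10% prior; observe evidence four times likelier if the
# hypothesis is true than if it is false.
posterior = update(prior=0.10, p_evidence_if_true=0.8, p_evidence_if_false=0.2)
print(f"posterior = {posterior:.3f}")  # posterior = 0.308
```

In this vocabulary, moving from the 10% prior to the roughly 31% posterior is “updating on the evidence,” and calibration asks whether such stated probabilities track how often one turns out to be right.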
Role in the AI Safety Ecosystem
LessWrong occupies a distinctive position in the broader AI safety landscape. While the EA Forum (see ea-forum) handles strategic and moral questions about cause prioritization, and the alignment-forum hosts curated technical research, LessWrong serves as the open intellectual commons where rationality-informed discussion of AI alignment, technical and otherwise, takes place.
Key contributions hosted on LessWrong include discussions of the difficulty of the alignment problem, critical assessments of the field’s progress, and analyses of nate-soares’s work on key alignment difficulties. The platform’s culture of self-critical evaluation — exemplified by posts with titles like “The Field of AI Alignment: A Postmortem” — sets a norm of epistemic honesty that distinguishes it from more advocacy-oriented spaces.
Cultural Impact
LessWrong spawned or heavily influenced several important institutions:
- miri (Machine Intelligence Research Institute) — Yudkowsky’s AI safety research organization
- The alignment-forum — a curated subset of LessWrong focused on technical alignment research
- The broader rationalist community, which has significantly shaped effective-altruism culture
The community’s emphasis on epistemic hygiene — carefully distinguishing between what one knows and what one merely believes — has been one of its most enduring contributions to public discourse on AI risk.
Significance for This Wiki
LessWrong represents the technical and intellectual core of AI safety discourse. Understanding its culture, norms, and key posts is essential for navigating the AI alignment literature that this wiki covers. Many concepts now central to ai-alignment thinking, including the orthogonality thesis, instrumental convergence, and the difficulty of specifying human values, were developed or popularized there.
Related Pages
- eliezer-yudkowsky
- alignment-forum
- miri
- ai-alignment
- ai-safety
- effective-altruism
- rationality-ai-to-zombies
- lesswrong-alignment-posts
- concrete-problems-in-ai-safety
- ea-forum
- nate-soares
- rationality
- summary-ea-forum-key-posts