Asilomar AI Principles
The Asilomar AI Principles are a set of 23 principles for beneficial AI development drafted at the 2017 Asilomar Conference on Beneficial AI, organized by the Future of Life Institute. More than 100 researchers and thought leaders participated; the principles were subsequently signed by thousands of AI/robotics researchers and other professionals, including DeepMind CEO Demis Hassabis and Yann LeCun, then head of Facebook AI Research (now Meta).
The conference name deliberately echoes the 1975 Asilomar Conference on Recombinant DNA, where biologists set self-imposed safety guidelines for the then-new field of genetic engineering — establishing a precedent that researchers in a powerful new field can self-regulate in advance of state-imposed rules.
Structure
The 23 principles are grouped into three categories:
- Research Issues (1–5): research goal, funding, science-policy link, research culture, race avoidance.
- Ethics and Values (6–18): safety, failure transparency, judicial transparency, responsibility, value alignment, human values, personal privacy, liberty and privacy, shared benefit, shared prosperity, human control, non-subversion, AI arms race avoidance.
- Longer-term Issues (19–23): capability caution, importance, risks, recursive self-improvement, common good.
The Race Avoidance Principle
Principle 5, "Race Avoidance," is one of the most-cited:
“Teams developing AI systems should actively cooperate to avoid corner-cutting on safety standards.”
It is the philosophical seed of much subsequent AI governance work on competitive dynamics and is widely referenced in modern discussions of AI race-to-the-bottom risk (see ai-governance).
Significance
The Asilomar Principles matter for three reasons:
- Norm-setting under uncertainty: established AI safety norms before any major regulator or summit had moved.
- Cross-camp legitimacy: signed by both academic skeptics and capability-frontier researchers (Hassabis, LeCun), giving the principles broader buy-in than EA-coded statements.
- Template for later commitments: provided language and structure echoed in OpenAI's Charter, Anthropic's responsible-scaling-policy, and the Bletchley Declaration.
They have been criticized for being aspirational rather than operational, with no enforcement mechanism — a critique that applies to most pre-2023 AI safety norms.
Connection to This Wiki
- Authored under the auspices of the FLI.
- The Race Avoidance principle (Principle 5) directly shapes governance arguments in ai-governance and ai-safety.
- Sits in the historical timeline between Concrete Problems in AI Safety (2016) and the DeepMind Safety taxonomy (2018) as a non-technical bridge.
Related Pages
- ai-safety
- ai-governance
- future-of-life-institute
- differential-development
- responsible-scaling-policy
- concrete-problems-in-ai-safety
- summary-ai-xrisk-belgium-europe