AI Safety Atlas Ch.1 — Appendix: Takeoff
Source: Appendix: Takeoff | ai-safety-atlas.com/chapters/v1/capabilities/appendix-takeoff/ | Authors: Markov Grey & Charbel-Raphaël Ségerie | 9 min
The main takeoff subchapter introduced four dimensions and focused on Speed. This appendix covers the remaining three — Continuity, Homogeneity, Polarity — in detail. Together they form the takeoff-dynamics framework.
Continuity: Smooth or Sudden Jumps?
Continuity is independent of speed. “A rapid takeoff could still be continuous if following identifiable patterns, while a slow one could be discontinuous if involving surprising breakthroughs.” The chapter calls continuity “a measure of ‘surprise’”, whereas speed measures “how quickly AI becomes superintelligent.”
Continuous Takeoff
AI capabilities follow smooth, predictable trends; current LLMs exemplify this. Each successive model version improves predictably at coding and math, tracking scaling-law curves. “Continuous takeoff suggests current scaling law trends might extend to transformative systems, providing advance warning and preparation time for safety measures.” (Critical caveat: “comparatively easier does not mean ‘easy’.”)
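To make “smooth, predictable trends” concrete, here is one common empirical form (a Kaplan-style scaling law; the illustration is ours, not from the appendix), expressing test loss as a power law in parameter count $N$:

$$L(N) \approx \left(\frac{N_c}{N}\right)^{\alpha_N}$$

where $N_c$ and $\alpha_N$ are empirically fitted constants. A pure power law contains no threshold term: each doubling of $N$ buys a roughly constant fractional reduction in loss, which is exactly the shape that makes a continuous takeoff forecastable in advance.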
Discontinuous Takeoff
Sudden capability jumps breaking previous patterns. Mechanisms could include:
- Fundamentally new training approaches dramatically more efficient than current methods
- Tipping points where quantitative scale produces qualitative capability changes
- AI systems discovering self-improvements leading to unexpected jumps
Historical examples: nuclear weapons were discontinuous; computer processing improvements were continuous. “Technological discontinuities have historically been rare” — a soft argument for continuous takeoff.
Homogeneity: Similar or Diverse Systems?
Will advanced AI systems be variations of one architecture, or genuinely diverse?
Homogeneous Takeoff
Most advanced systems share fundamental architecture. Today’s LLMs already hint at this: most use transformers trained on similar data. Few organizations can train base models from scratch; others build on top.
Mixed safety implications: solving alignment for one system could solve it for similar systems, but a fundamental flaw in the shared architecture would affect all systems simultaneously. The agricultural-monoculture analogy applies: manageable, but vulnerable to shared weaknesses.
Heterogeneous Takeoff
Significant architectural diversity. Different organizations develop fundamentally different approaches with distinct strengths, weaknesses, behaviors. Some specialized, some general-purpose; some transparent, some opaque; some aligned, some not.
Competitive dynamics could amplify this diversity, with labs racing toward breakthroughs without sharing methodology. In siloed national-AI scenarios, “different methodologies, behaviors, functionalities, and safety levels” emerge.
Trade-off: heterogeneity requires safety measures that work across diverse systems and limits how far lessons transfer between them. But it protects against systemic single-point failures: if one approach proves dangerous, alternatives remain.
Polarity: Concentrated or Distributed Power?
Will one AI system or organization gain decisive advantage, or will multiple actors advance in parallel?
Unipolar Takeoff
One actor pulls decisively ahead through:
- A single breakthrough
- Compute-acquisition advantages building positive feedback loops
- An AI system better at improving itself than at helping competitors build alternatives
“Once sufficiently ahead, catching up becomes practically impossible.” See ai-takeover-scenarios and stable-totalitarianism.
Unipolarity concentrates risk in a single point of failure but simplifies coordination (fewer actors). One system’s alignment becomes crucial: “it might unilaterally shape the long-term future.” Connects directly to value-lock-in.
Multipolar Takeoff
Multiple actors field comparable advanced systems. Today’s landscape shows multipolar elements: multiple labs train large language models, and techniques spread rapidly. This could continue, with multiple groups maintaining capability parity through the arrival of transformative AI.
Different challenges: multiple systems acting at cross purposes, competing for resources, and racing to deploy at the expense of safety. Misuse risks from human actors remain in either scenario.
How the Dimensions Interact
The appendix flags one cross-dimension dependency explicitly: fast takeoff favors unipolarity, while slow takeoff tends toward multipolarity. The reasoning: rapid progress advantages whoever moves first, while slower progress lets multiple actors catch up.
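A toy model of ours (not from the appendix) makes that reasoning explicit. Suppose leader and follower capabilities both grow exponentially at rate $r$, with the leader holding a fixed head start $\Delta t$:

$$\frac{C_{\text{lead}}(t)}{C_{\text{follow}}(t)} = \frac{C_0\, e^{rt}}{C_0\, e^{r(t - \Delta t)}} = e^{r \Delta t}$$

The capability gap is exponential in the growth rate: the same six-month head start is marginal when $r$ is small (slow takeoff) and decisive when $r$ is large (fast takeoff), which is why speed and polarity are coupled.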
Three factors shape polarity:
- Speed of development — rapid progress favors unipolarity; slow progress tends toward multipolarity
- Collaboration vs. competition — open communities support multipolarity, secretive/competitive environments push unipolarity
- Regulatory/economic dynamics — anti-concentration policies foster multipolar takeoff
Why This Appendix Matters
This is the strategic-stakes layer beyond mere capability progression. The same Speed argument from the main subchapter has different implications under different (Continuity × Homogeneity × Polarity) configurations:
- Slow + Continuous + Homogeneous + Multipolar → today’s status quo extended; classic alignment-research strategies viable
- Fast + Discontinuous + Homogeneous + Unipolar → Yudkowsky/MIRI worst case; one lab’s alignment outcome decides everything
- Slow + Continuous + Heterogeneous + Multipolar → coordination failures, race-to-bottom safety pressures, ai-governance is the bottleneck
- Fast + Continuous + Homogeneous + Multipolar → AI 2027-like multilateral race scenario
These permutations clarify that “the alignment problem” looks different depending on which world you’re in. Useful for the strategic debate between ai-control, superalignment, and ai-governance.
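As a minimal sketch of this configuration space (ours, not from the appendix; the scenario labels are shorthand for the list above, and each dimension is coarsened to two values even though the source treats them as spectra):

```python
from itertools import product

# The four takeoff dimensions, coarsened to binary values for illustration.
DIMENSIONS = [
    ("slow", "fast"),                  # Speed
    ("continuous", "discontinuous"),   # Continuity
    ("homogeneous", "heterogeneous"),  # Homogeneity
    ("multipolar", "unipolar"),        # Polarity
]

# The four configurations highlighted above, keyed by their dimension values.
NAMED = {
    ("slow", "continuous", "homogeneous", "multipolar"): "status quo extended",
    ("fast", "discontinuous", "homogeneous", "unipolar"): "Yudkowsky/MIRI worst case",
    ("slow", "continuous", "heterogeneous", "multipolar"): "governance bottleneck",
    ("fast", "continuous", "homogeneous", "multipolar"): "AI 2027-like race",
}

# 2**4 = 16 possible worlds; "the alignment problem" differs across them.
for combo in product(*DIMENSIONS):
    label = NAMED.get(combo)
    print(" + ".join(combo) + (f"  -> {label}" if label else ""))
```

Treating the dimensions as a full product space is a simplification: as noted above, speed and polarity are correlated, so the sixteen cells are not equally likely.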
Connection to Wiki
This appendix completes the case for the new takeoff-dynamics concept page. Specific links:
- ai-takeover-scenarios — unipolar AI takeoff is the precondition for several pathways listed there
- stable-totalitarianism — unipolar + homogeneous + fast = the structural conditions for AI-enabled permanent authoritarianism
- value-lock-in — directly enabled by unipolar outcomes
- ai-governance — multipolar coordination failure is the dominant governance concern
- ai-2027 — the AI 2027 scenario is essentially a Fast/Continuous/Homogeneous/Multipolar trajectory with a unipolar phase transition
- situational-awareness — Aschenbrenner argues for fast/continuous/homogeneous/unipolar (US wins)