AI Safety Atlas Ch.1 — Appendix: Expert Surveys
Source: Appendix: Expert Surveys | ai-safety-atlas.com/chapters/v1/capabilities/appendix-expert-surveys/
This appendix collects expert-survey findings and direct quotes documenting growing concern among leading AI researchers about existential risk and shortening timelines.
Survey Findings
Per the AI Impacts 2023 expert survey (results published 2024), “Expected time to human-level performance dropped 1–5 decades since the 2022 survey.” The survey’s median forecast for human-level machine intelligence (HLMI) is 2049.
Pattern noted: the community has consistently underestimated the speed of progress and revised its forecasts downward, especially after 2023. Some surveyed tasks (essay writing, speech transcription) are arguably already automated by ChatGPT and Whisper, though respondents may not fully recognize this. Counterintuitively, respondents forecast “AI researcher” capabilities arriving after HLMI, even though HLMI is defined as matching humans on all tasks, which would include AI research itself.
This complements the wiki’s existing summary of the Müller & Bostrom 2014 survey (50% probability of HLMI by 2040; roughly a one-in-three chance of bad outcomes).
Quotes — AI Experts
The appendix is essentially a citable list of concerns:
- Geoffrey Hinton (geoffrey-hinton, Turing Award): “The research question is: how do you prevent them from ever wanting to take control?”
- Yoshua Bengio (yoshua-bengio, Turing Award): “Rogue AI may be dangerous for the whole of humanity. Banning powerful systems beyond GPT-4 with autonomy would be sensible.”
- Stuart Russell (stuart-russell): “If we pursue our current approach, we will eventually lose control.”
- Demis Hassabis (DeepMind CEO): “We must take AI risks as seriously as climate change.”
- Dario Amodei (Anthropic CEO): “If an agentic model wanted to wreak havoc, we have basically no ability to stop it.”
- Mustafa Suleyman (Microsoft AI CEO): “We may need to consider pausing within the next five years.”
- Ilya Sutskever (former OpenAI Chief Scientist): “AGIs operating autonomously will likely view humans like we treat animals.”
- Shane Legg (DeepMind co-founder): “AI is my number 1 risk for this century. The lack of concrete safety plans worries me.”
- Jan Leike (jan-leike, former Superalignment co-lead): “Safety culture has taken a backseat to products.”
- Sam Altman (OpenAI CEO): supports international oversight of high-power compute clusters analogous to weapons inspectors.
- Greg Brockman (OpenAI co-founder and President): “The post-AGI world will differ from today’s more than today differs from the 1500s.”
- Jaan Tallinn (Skype co-founder): “No AI lab researcher says training risks are below 1% extinction probability.”
Quotes — Academics
- I.J. Good: an ultraintelligent machine achieving an intelligence explosion would be “the last invention man need ever make.” — the foundational intelligence-explosion quote.
- Alan Turing: “Machines would eventually outstrip human powers and take control.”
- Stephen Hawking: “Full AI development could spell humanity’s end through rapid self-redesign.”
- Eliezer Yudkowsky (eliezer-yudkowsky): “Superintelligent uncaring entities will develop strategies to kill humanity quickly.”
Quotes — Tech Entrepreneurs
- Elon Musk: “Digital superintelligence is humanity’s biggest existential crisis. AI is far more dangerous than nuclear weapons.”
- Bill Gates: “Superintelligent AIs may decide humans are threats or simply stop caring about us.”
Joint Declarations
The Bletchley Declaration (2023, 28 nations + EU): “Substantial risks arise from potential misuse or control failures. Catastrophic harm — deliberate or unintentional — is possible.”
Use of This Appendix
This is the textbook’s “by their words, ye shall know them” appendix — a citation-ready collection for arguments-from-authority in advocacy or curriculum contexts. For wiki purposes, it provides:
- Concrete quotes attaching to existing entity pages (geoffrey-hinton, yoshua-bengio, stuart-russell, jan-leike, eliezer-yudkowsky) — useful when an entity page wants a single representative quote.
- A timeline anchor (2024 survey median HLMI = 2049) to triangulate against summary-bostrom-ai-expert-survey (2014: 50% by 2040), situational-awareness (Aschenbrenner: 2027), and ai-2027 (March 2027 superhuman coder); a tabulated sketch follows this list.
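As a minimal, hypothetical sketch of how these anchors could be tabulated for cross-page triangulation: the slugs and dates below come from the entries above, but the `TIMELINE_ANCHORS` structure, the tuple layout, and the January/month-01 placeholders for year-only forecasts are all illustrative assumptions, not part of the Atlas or the wiki.

```python
# Hypothetical sketch: tabulating the timeline anchors cited in this appendix
# so wiki pages can cross-reference them. Dates given only as a year are
# placed at Jan 1 of that year purely for sorting.
from datetime import date

# Each anchor: (wiki slug, forecast source, forecast date, what the date means)
TIMELINE_ANCHORS = [
    ("summary-bostrom-ai-expert-survey", "Müller & Bostrom 2014",
     date(2040, 1, 1), "50% probability of HLMI"),
    ("appendix-expert-surveys", "AI Impacts 2023/2024 survey",
     date(2049, 1, 1), "median HLMI forecast"),
    ("situational-awareness", "Aschenbrenner",
     date(2027, 1, 1), "AGI timeline"),
    ("ai-2027", "AI 2027 scenario",
     date(2027, 3, 1), "superhuman coder"),
]

if __name__ == "__main__":
    # Print anchors in chronological order of the forecast date.
    for slug, source, when, meaning in sorted(TIMELINE_ANCHORS, key=lambda a: a[2]):
        print(f"{when.year}-{when.month:02d}  {source:<28} {meaning:<26} ({slug})")
```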
Connection to Wiki
This appendix offers no new safety arguments — it is evidence of consensus. As such it slots into:
- ai-risk-arguments — supporting “growing expert concern” framing
- ai-safety — the field overview
- The biographic entity pages above
- international-ai-safety-report — chaired by Bengio and commissioned at the Bletchley summit