Summary: Future Progress in Artificial Intelligence — A Survey of Expert Opinion
Authors: Vincent C. Müller & Nick Bostrom
Year: 2014 (forthcoming at time of survey; published in Fundamental Issues of Artificial Intelligence, Synthese Library, Springer)
Affiliation: future-of-humanity-institute, Department of Philosophy & Oxford Martin School, University of Oxford
Source: bostrom-ai-expert-survey.pdf
Overview
This paper reports the results of a survey conducted in 2012-2013 that asked AI experts when they expected high-level machine intelligence (HLMI) to arrive, how quickly it might progress to superintelligence, and how positive or negative the impact would be for humanity. The findings became some of the most widely cited data points in the ai-safety and existential-risk communities, and informed arguments in nick-bostrom’s book Superintelligence (2014).
Methodology
The authors surveyed four groups of experts, totaling approximately 550 invitees with 170 respondents (31% response rate; a quick arithmetic check follows this list):
- PT-AI (Philosophy and Theory of AI conference participants, 2011) — 43 of 88 responded (49%). Largely theory-minded researchers, often skeptical of grand AI claims.
- AGI (Artificial General Intelligence conference participants, 2012) — 72 of 111 responded (65%). Committed to pursuing general AI as a technical goal.
- EETN (Greek Association for Artificial Intelligence members) — 26 of ~250 responded (10%). Professional AI researchers.
- TOP100 (Top 100 most-cited AI authors per Microsoft Academic Search) — 29 of 100 responded (29%). Senior, mostly US-based technical researchers.
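A quick sanity-check sketch in Python, using only the per-group counts quoted above (no data beyond what the list states), confirming the group and overall response rates:

```python
# Invited / responded counts per group, as quoted in the list above.
groups = {
    "PT-AI":  (88, 43),
    "AGI":    (111, 72),
    "EETN":   (250, 26),
    "TOP100": (100, 29),
}

for name, (invited, responded) in groups.items():
    print(f"{name}: {responded}/{invited} = {responded / invited:.0%}")

total_invited = sum(i for i, _ in groups.values())    # 549 (~550)
total_responded = sum(r for _, r in groups.values())  # 170
print(f"Overall: {total_responded}/{total_invited} = {total_responded / total_invited:.0%}")  # 31%
```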
To avoid loaded terminology, the authors coined “high-level machine intelligence” (HLMI), defined as a system “that can carry out most human professions at least as well as a typical human.” The survey was conducted online via unique invitation links, ensuring only invited respondents could participate.
Key Findings
When Will HLMI Arrive?
Aggregated across all respondents, the estimates for each probability milestone were:
| Probability | Median Year | Mean Year |
|---|---|---|
| 10% chance of HLMI | 2022 | 2036 |
| 50% chance of HLMI | 2040 | 2081 |
| 90% chance of HLMI | 2075 | 2183 |
The median is consistently lower than the mean because outliers can only fall in the “later” direction (up to the year 5000, or “never”), not earlier. Only 4.1% of respondents answered “never” for the 50% milestone; for the 90% milestone, 16.5% answered “never.”
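To see why the mean runs so far ahead of the median, here is a minimal illustration with made-up forecast years (not the survey’s actual responses): a single “never”-style answer coded as a distant year drags the mean upward while barely moving the median.

```python
import statistics

# Hypothetical 50%-confidence HLMI years from ten respondents (illustrative only).
# Answers could run as late as the year 5000 or "never", but not earlier than the
# survey date, so extreme values can only pull the distribution to the right.
estimates = [2030, 2035, 2040, 2040, 2045, 2050, 2060, 2100, 2200, 5000]

print(statistics.median(estimates))  # 2047.5 -- barely moved by the year-5000 outlier
print(statistics.mean(estimates))    # 2360.0 -- dragged far into the future
```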
The AGI group was the most optimistic (median 50% by 2040), while PT-AI and EETN were slightly more conservative (median 50% by 2048-2050). The TOP100 group's median, at 2050, matched the more conservative groups rather than the AGI group.
From HLMI to Superintelligence
Once HLMI exists, experts estimated the transition to superintelligence would be relatively fast:
- Within 2 years: 10% median probability (a “fast takeoff” scenario)
- Within 30 years: 75% median probability
The AGI group was notably more bullish on fast takeoff (15% within 2 years, 90% within 30 years) compared to the more cautious TOP100 group (5% within 2 years, 50% within 30 years).
Impact on Humanity
Respondents assigned probability weights to five outcome categories:
| Outcome | All Groups (Mean %) |
|---|---|
| Extremely good | 24% |
| On balance good | 28% |
| More or less neutral | 17% |
| On balance bad | 13% |
| Extremely bad (existential catastrophe) | 18% |
Combining “bad” and “extremely bad,” experts estimated roughly a one-in-three (31%) chance that HLMI would be bad or catastrophic for humanity. There was a notable split between “theoretical” groups (PT-AI and AGI) and “technical” groups (EETN and TOP100): the technical groups were more optimistic, assigning higher probability to “on balance good” and lower probability to “extremely bad.”
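The one-in-three figure is simply the sum of the two negative categories; a trivial restatement of the table’s arithmetic, with the values copied from the table above:

```python
# Mean probability weights (%) across all respondent groups, from the table above.
outcomes = {
    "extremely good": 24,
    "on balance good": 28,
    "more or less neutral": 17,
    "on balance bad": 13,
    "extremely bad": 18,
}

p_negative = outcomes["on balance bad"] + outcomes["extremely bad"]
print(f"P(bad or extremely bad) = {p_negative}%")  # 31% -- roughly one in three
```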
Research Approaches
The approaches experts considered most likely to contribute to HLMI were:
- Cognitive science (47.9%)
- Integrated cognitive architectures (42.0%)
- Algorithms revealed by computational neuroscience (42.0%)
- Artificial neural networks (39.6%)
- Faster computing hardware (37.3%)
- Large-scale datasets (35.5%)
Notably, “whole brain emulation” scored 0% among the TOP100 but 46% among the AGI group. Bayesian nets and robotics scored lowest. The authors found no significant correlation between preferred research approach and predicted timelines.
Selection Bias Analysis
The authors addressed the concern that skeptics might refuse to participate. They contacted a random sample of non-respondents and obtained three additional responses. These did not support the hypothesis that non-respondents expected HLMI later — if anything, the TOP100 non-respondents expected it earlier than those who had already responded. However, the sample was too small for confident conclusions.
Historical Context
The paper compares its results to four earlier surveys:
- Michie (1973): Polled 67 British and American AI researchers; about half considered the risk of machine “takeover” negligible.
- AI@50 (2006): At the 50th anniversary of the Dartmouth Conference, 41% said machines would never simulate all aspects of human intelligence.
- Baum et al. (2011): AGI 2009 participants estimated a 50% chance of passing the Turing test by 2040.
- Sandberg & Bostrom (2011): Found a median estimate of 50% chance of human-level AI by 2050.
A recurring pattern across decades of such surveys is that experts tend to predict human-level AI “in about 25 years,” regardless of when they are asked.
Significance
This survey became one of the foundational empirical references in the ai-safety debate. Its finding that experts collectively assign about a one-in-three probability to bad outcomes from HLMI was widely cited as evidence that AI risk is not a fringe concern but is taken seriously within the field. The data informed Bostrom’s Superintelligence and subsequent efforts to make the case for increased investment in ai-alignment research.
The survey also highlighted the enormous uncertainty in AI timelines — standard deviations were often larger than the means — underscoring that predictions about transformative-ai remain deeply uncertain. This uncertainty itself became an argument for precaution: if experts cannot rule out human-level AI arriving within decades, and a third of them think it could go badly, the expected value of safety research is high.
From a 2026 vantage point, several things are notable. The median prediction of 50% chance of HLMI by 2040 now appears plausible given rapid advances in large language models and scaling-laws. The survey was conducted before the deep learning revolution had fully taken hold, yet its timeline estimates have proven more reasonable than many skeptics expected at the time.