Autonomy evals — SR2025 Agenda Snapshot
One-sentence summary: Measure an AI’s ability to act autonomously to complete long-horizon, complex tasks.
Theory of Change
By measuring the length and complexity of tasks an AI can reliably complete (its “time horizon”), we can track capability growth and identify when models gain dangerous autonomous capabilities (such as AI R&D acceleration or self-replication).
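In METR’s published methodology, the time horizon is operationalized by fitting a logistic curve of task success probability against the log of human completion time, then reading off the task length at which predicted success crosses 50%. Below is a minimal sketch of that calculation; the task times and success flags are illustrative placeholders, and scikit-learn’s default logistic fit stands in for METR’s actual fitting choices:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Illustrative data (not real benchmark results): how long each task
# takes a human professional (minutes), and whether the model solved it.
human_minutes = np.array([2, 4, 8, 15, 30, 60, 120, 240, 480])
model_success = np.array([1, 1, 1, 1, 1, 0, 1, 0, 0])

# Fit P(success) as a logistic function of log2(task length).
X = np.log2(human_minutes).reshape(-1, 1)
clf = LogisticRegression().fit(X, model_success)

# P(success) = sigmoid(b + w * log2(t)); solving for P = 0.5
# gives log2(t) = -b / w, i.e. the 50% time horizon.
w = clf.coef_[0][0]
b = clf.intercept_[0]
horizon_minutes = 2 ** (-b / w)
print(f"Estimated 50% time horizon: {horizon_minutes:.0f} minutes")
```

Tracking this single number across model generations is what yields the capability-growth trend the theory of change relies on.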
Broad Approach
behaviorist science
Target Case
average
Key People
METR, Thomas Kwa, Ben West, Joel Becker, Beth Barnes, Hjalmar Wijk, Tao Lin, Giulio Starace, Oliver Jaffe, Dane Sherburn, Sanidhya Vijayvargiya, Aditya Bharat Soni, Xuhui Zhou
Funding
The Audacious Project, Open Philanthropy
Estimated FTEs: 10-50
Critiques
- Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity
- The “Length” of “Horizons”
See Also
capability-evals, OpenAI Preparedness, Anthropic RSP
Outputs in 2025
13 items in the review. See the wiki/summaries/ entries with frontmatter `agenda: autonomy-evals` (these were generated alongside this file from the same export).
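A minimal sketch of locating those entries programmatically, assuming Markdown files under wiki/summaries/ with YAML frontmatter (the path and key name follow the description above; adjust for the local export layout):

```python
from pathlib import Path
import yaml  # PyYAML

# List summary pages whose frontmatter tags them with this agenda.
for page in sorted(Path("wiki/summaries").glob("*.md")):
    text = page.read_text(encoding="utf-8")
    if not text.startswith("---"):
        continue  # no frontmatter block
    frontmatter = yaml.safe_load(text.split("---", 2)[1])
    if frontmatter and frontmatter.get("agenda") == "autonomy-evals":
        print(page.name)
```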
Source
- Row in `shallow-review-2025/agendas.csv` (name = Autonomy evals) — Shallow Review of Technical AI Safety 2025.
Related Pages
- ai-safety
- capability-evals
- agi-metrics
- ai-deception-evals
- ai-scheming-evals
- other-evals
- sandbagging-evals
- self-replication-evals
- situational-awareness-and-self-awareness-evals
- steganography-evals
- various-redteams
- wmd-evals-weapons-of-mass-destruction
Sources cited
Primary URLs harvested from this page’s summary references. Auto-generated by `scripts/backfill_citations.py`; edit by re-running, not by hand.
- Summary: AI Safety (Wikipedia) — referenced as [[ai-safety]]