Autonomy evals — SR2025 Agenda Snapshot

One-sentence summary: Measure an AI’s ability to autonomously complete complex, long-horizon tasks.

Theory of Change

By measuring the length and complexity of tasks an AI can complete (its “time horizon”), we can track capability growth over time and identify when models gain dangerous autonomous capabilities (such as AI R&D acceleration or self-replication).
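As an illustration of the idea, a time-horizon estimate can be sketched as a logistic fit of agent success probability against (log) human task-completion time, then solving for the task length at which predicted success crosses 50%. The data and fitting details below are hypothetical, not METR’s actual pipeline:

```python
import math

# Hypothetical per-task results: (human completion time in minutes, agent success 0/1).
results = [
    (1, 1), (2, 1), (4, 1), (8, 1), (15, 1),
    (15, 0), (30, 1), (30, 0), (60, 0), (120, 0), (240, 0),
]

def fit_time_horizon(results, lr=0.1, steps=5000):
    """Fit P(success) = sigmoid(a - b * log2(minutes)) by gradient ascent
    on the log-likelihood, then return the 50% horizon 2**(a/b) in minutes."""
    a, b = 0.0, 0.0
    for _ in range(steps):
        grad_a = grad_b = 0.0
        for minutes, success in results:
            x = math.log2(minutes)
            p = 1.0 / (1.0 + math.exp(-(a - b * x)))  # predicted success prob
            grad_a += (success - p)        # d(logL)/da
            grad_b += (success - p) * -x   # d(logL)/db
        a += lr * grad_a / len(results)
        b += lr * grad_b / len(results)
    # P = 0.5 when a - b * log2(t) = 0, i.e. t = 2**(a/b).
    return 2.0 ** (a / b)

horizon = fit_time_horizon(results)  # roughly tens of minutes for this toy data
```

Tracking this single summary number across model releases is what lets the growth trend (and any sudden jump) be monitored.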

Broad Approach

behaviorist science

Target Case

average

Key People

METR, Thomas Kwa, Ben West, Joel Becker, Beth Barnes, Hjalmar Wijk, Tao Lin, Giulio Starace, Oliver Jaffe, Dane Sherburn, Sanidhya Vijayvargiya, Aditya Bharat Soni, Xuhui Zhou

Funding

The Audacious Project, Open Philanthropy

Estimated FTEs: 10-50

Critiques

Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity; The “Length” of “Horizons”

See Also

capability-evals, OpenAI Preparedness, Anthropic RSP

Outputs in 2025

13 items in the review. See the wiki/summaries/ entries with frontmatter agenda: autonomy-evals (these were generated alongside this file from the same export).

Sources cited

Primary URLs harvested from this page’s summary references. Auto-generated by scripts/backfill_citations.py; edit by re-running, not by hand.