How to evaluate control measures for LLM agents? A trajectory from today to superintelligence
Tomek Korbak, Mikita Balesni, Buck Shlegeris, Geoffrey Irving — 2025-04-07 — Redwood Research, Google DeepMind — arXiv
Summary
Proposes a systematic framework for adapting control evaluations to advancing AI capabilities, defining five AI Control Levels (ACLs), each with corresponding evaluation rules, control measures, and safety cases, spanning capability profiles from current systems to superintelligence.
Source
- Link: https://arxiv.org/abs/2504.05259
- Listed in the Shallow Review of Technical AI Safety 2025 under one agenda:
- control — Black-box safety (understand and control current model behaviour) / Iterative alignment