A sketch of an AI control safety case
Tomek Korbak, Joshua Clymer, Benjamin Hilton, Buck Shlegeris, Geoffrey Irving — 2025-01-28 — Redwood Research — arXiv
Summary
Presents a framework for constructing ‘control safety cases’: structured arguments that AI models cannot subvert control measures to cause unacceptable outcomes. Demonstrates the approach with a case study of preventing data exfiltration by internally deployed LLM agents, using control evaluations in which red teams stand in for a misaligned model.
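As an illustration of the kind of control evaluation the paper builds on, below is a minimal sketch of a trusted-monitoring protocol: a red-team attack policy occasionally attempts exfiltration, a monitor scores each action, and the evaluation estimates how often the protocol catches or misses attacks. All names and numbers here (`Action`, `score_suspicion`, `AUDIT_THRESHOLD`, the simulated monitor accuracy) are hypothetical assumptions for the sketch, not APIs or results from the paper.

```python
"""Minimal sketch of a control evaluation for a trusted-monitoring protocol.

Hypothetical illustration: all names and constants are assumptions,
not taken from the paper.
"""
import random
from dataclasses import dataclass

AUDIT_THRESHOLD = 0.9  # suspicion score above which an action is audited


@dataclass
class Action:
    content: str
    exfiltrates: bool  # ground truth, known only to the evaluation harness


def score_suspicion(action: Action) -> float:
    """Stand-in for a trusted monitor (e.g. a weaker model) scoring an action.

    Modeled as a noisy signal correlated with actual exfiltration attempts.
    """
    base = 0.95 if action.exfiltrates else 0.3
    return min(1.0, max(0.0, random.gauss(base, 0.1)))


def run_episode(attack_rate: float, n_actions: int = 1000) -> str:
    """One red-team episode: the untrusted agent occasionally attempts
    exfiltration; the protocol audits the most suspicious actions.

    Returns 'caught', 'exfiltrated', or 'safe'.
    """
    for _ in range(n_actions):
        action = Action("...", exfiltrates=random.random() < attack_rate)
        if score_suspicion(action) >= AUDIT_THRESHOLD:
            if action.exfiltrates:
                return "caught"  # audit reveals the attack
            continue  # false alarm; a real protocol would track an audit budget
        if action.exfiltrates:
            return "exfiltrated"  # attack slipped past the monitor
    return "safe"


def estimate_safety(attack_rate: float, trials: int = 2000) -> float:
    """Estimate P(no unacceptable outcome) against this attack policy."""
    outcomes = [run_episode(attack_rate) for _ in range(trials)]
    return 1 - outcomes.count("exfiltrated") / trials


if __name__ == "__main__":
    for rate in (0.001, 0.01, 0.1):
        print(f"attack_rate={rate}: estimated safety {estimate_safety(rate):.3f}")
```

A real control evaluation would replace the simulated monitor with an actual trusted model and red-team-written attack policies; the sketch only shows the structure of the safety estimate that a control safety case would then argue over.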
Source
- Link: https://arxiv.org/abs/2501.17315
- Listed in the Shallow Review of Technical AI Safety 2025 under 1 agenda:
- control — Black-box safety (understand and control current model behaviour) / Iterative alignment