Supervising AIs improving AIs — SR2025 Agenda Snapshot
One-sentence summary: Build formal and empirical frameworks in which AIs supervise other (stronger) AI systems via structured interactions, and construct monitoring tools for scalable tracking of behavioural drift, benchmarks for self-modification, and robustness guarantees.
Theory of Change
Early models are trained almost exclusively on human data, while later models also train on early-model outputs, so problems in early models can cascade into their successors. Left unchecked, this compounding is likely to cause harm, so supervision mechanisms are needed to keep AI self-improvement legible.
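A minimal sketch of the drift-tracking idea, not the agenda's actual tooling: the `Model` interface, `disagreement_rate`, `check_drift`, the probe prompts, and the threshold value are all hypothetical names assumed here. Each model generation is treated as a plain prompt-to-answer callable, and drift is flagged when successive generations disagree on a fixed probe set more often than a threshold.

```python
"""Sketch: flag behavioural drift between successive model generations
by measuring disagreement on a fixed probe set. All interfaces below
are assumptions for illustration, not the agenda's real tooling."""

from typing import Callable, Sequence

# Hypothetical interface: a model generation maps a prompt to an answer.
Model = Callable[[str], str]


def disagreement_rate(old: Model, new: Model, probes: Sequence[str]) -> float:
    """Fraction of probe prompts on which two generations give different answers."""
    differing = sum(old(p).strip() != new(p).strip() for p in probes)
    return differing / len(probes)


def check_drift(generations: Sequence[Model],
                probes: Sequence[str],
                threshold: float = 0.2) -> list[int]:
    """Return indices of generations that drifted past `threshold`
    relative to their immediate predecessor."""
    flagged = []
    for i in range(1, len(generations)):
        if disagreement_rate(generations[i - 1], generations[i], probes) > threshold:
            flagged.append(i)
    return flagged


if __name__ == "__main__":
    # Toy generations: gen1 has drifted on prompts not mentioning "safe".
    gen0: Model = lambda p: p.upper()
    gen1: Model = lambda p: p.upper() if "safe" in p else p[::-1].upper()
    probes = ["is this safe", "report status", "summarise the log"]
    print(check_drift([gen0, gen1], probes, threshold=0.3))  # -> [1]
```

In practice exact string disagreement is a crude proxy; a real monitor would likely compare output distributions or embeddings, but the generation-over-generation comparison loop stays the same shape.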
Broad Approach
behavioral
Target Case
pessimistic
Orthodox Problems Addressed
Superintelligence can fool human supervisors, Superintelligence can hack software supervisors
Key People
Roman Engeler, Akbir Khan, Ethan Perez
Funding
Long-Term Future Fund, lab funders
Estimated FTEs: 1-10
Critiques
"Automation collapse", "Great Models Think Alike and this Undermines AI Oversight"
Outputs in 2025
8 items in the review. See the wiki/summaries/ entries with frontmatter `agenda: supervising-ais-improving-ais` (these were generated alongside this file from the same export).
Source
- Row in shallow-review-2025/agendas.csv (name = Supervising AIs improving AIs) — Shallow Review of Technical AI Safety 2025.
Related Pages
- ai-safety
- ai-explanations-of-ais
- debate
- llm-introspection-training
- weak-to-strong-generalization
Sources cited
Primary URLs harvested from this page’s summary references. Auto-generated by scripts/backfill_citations.py; edit by re-running, not by hand.
- Summary: AI Safety (Wikipedia) — referenced as [[ai-safety]]