Other corrigibility — SR2025 Agenda Snapshot
One-sentence summary: Diagnose and communicate obstacles to achieving robustly corrigible behavior; suggest mechanisms, tests, and escalation channels for surfacing and mitigating incorrigible behaviors.
Theory of Change
Labs are likely to develop AGI using something analogous to current pipelines. Clarifying why naive instruction-following doesn’t buy robust corrigibility, and building strong tripwires/diagnostics for scheming and Goodharting, thus reduces risks on the likely default path.
Broad Approach
varies
Target Case
pessimistic
Orthodox Problems Addressed
Corrigibility is anti-natural, Instrumental convergence
Key People
Jeremy Gillen
Funding
Estimated FTEs: 1-10
See Also
Outputs in 2025
9 item(s) in the review. See the `wiki/summaries/` entries with frontmatter `agenda: other-corrigibility` (these were generated alongside this file from the same export).
Source
- Row in `shallow-review-2025/agendas.csv` (name = Other corrigibility) — Shallow Review of Technical AI Safety 2025.
Related Pages
- ai-safety
- behavior-alignment-theory
- agent-foundations
- asymptotic-guarantees
- heuristic-explanations
- high-actuation-spaces
- natural-abstractions
- the-learning-theoretic-agenda
- tiling-agents
Sources cited
Primary URLs harvested from this page’s summary references. Auto-generated by scripts/backfill_citations.py; edit by re-running, not by hand.
- Summary: AI Safety (Wikipedia) — referenced as [[ai-safety]]