Behavior alignment theory — SR2025 Agenda Snapshot
One-sentence summary: Predict properties of future AGI (e.g. power-seeking) with formal models; formally state and prove hypotheses about the properties powerful systems will have and how we might try to change them.
Theory of Change
Figure out hypotheses about properties powerful agents will have → attempt to rigorously prove under what conditions the hypotheses hold → test these hypotheses where feasible → design training environments that lead to more salutary properties.
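For flavor, here is a hedged sketch of the kind of formal hypothesis the first two steps would operate on, loosely in the spirit of Turner et al.'s power-seeking results. The definitions, notation, and normalization below are illustrative assumptions of this snapshot, not the agenda's canonical formulation.

```latex
% Illustrative sketch only (an assumption of this summary, not the agenda's
% own formalism): a toy definition of power-seeking in an MDP, loosely in the
% spirit of Turner et al.'s "Optimal Policies Tend to Seek Power".
% For a state s, discount \gamma \in (0,1), and a distribution \mathcal{D}
% over reward functions, define POWER up to a normalizing constant:
\[
  \operatorname{POWER}_{\mathcal{D}}(s,\gamma)
  \;\propto\;
  \mathbb{E}_{R \sim \mathcal{D}}\!\bigl[\, V^{*}_{R}(s,\gamma) - R(s) \,\bigr]
\]
% where V^{*}_{R}(s,\gamma) is the optimal value of state s under reward R.
% An example hypothesis for the pipeline above: under suitable conditions on
% \mathcal{D} and the environment, optimal policies tend to steer toward
% states with higher POWER, i.e. states that keep more options open.
```

The later steps of the pipeline would then test such a statement where feasible and ask which training environments weaken its preconditions.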
Broad Approach
maths / philosophy
Target Case
worst-case
Orthodox Problems Addressed
Corrigibility is anti-natural, Instrumental convergence
Key People
Ram Potham, Michael K. Cohen, Max Harms/Raelifin, John Wentworth, David Lorell, Elliott Thornley
Funding
Estimated FTEs: 1-10
Critiques
Ryan Greenblatt’s criticism of one behavioural proposal
Outputs in 2025
10 items in the review. See the wiki/summaries/ entries with frontmatter agenda: behavior-alignment-theory (generated alongside this file from the same export).
Source
- Row in shallow-review-2025/agendas.csv (name = Behavior alignment theory) — Shallow Review of Technical AI Safety 2025.
Related Pages
- ai-safety
- agent-foundations
- control
- asymptotic-guarantees
- heuristic-explanations
- high-actuation-spaces
- natural-abstractions
- other-corrigibility
- the-learning-theoretic-agenda
- tiling-agents
- rl-safety
Sources cited
Primary URLs harvested from this page’s summary references. Auto-generated by scripts/backfill_citations.py; edit by re-running, not by hand.
- Summary: AI Safety (Wikipedia) — referenced as [[ai-safety]]