Behavior alignment theory — SR2025 Agenda Snapshot

One-sentence summary: Use formal models to predict properties of future AGI (e.g., power-seeking): formally state and prove hypotheses about which properties powerful systems will have and how we might try to change them.

Theory of Change

Formulate hypotheses about the properties powerful agents will have → attempt to rigorously prove under what conditions the hypotheses hold → test these hypotheses empirically where feasible → design training environments that lead to more salutary properties. A toy illustration of the "test where feasible" step appears below.
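The following sketch checks a toy version of the power-seeking claim mentioned in the summary: when one action preserves more reachable outcomes than another, most randomly sampled reward functions make the option-preserving action optimal. This is a minimal sketch written for this snapshot, not code from the agenda's outputs; the two-action environment, the function names (optimal_action, power_seeking_frequency), and the uniform reward prior are all assumptions made for the example.

```python
"""Toy empirical check of an instrumental-convergence-style hypothesis.

Illustrative only: the environment and reward prior are assumptions made
for this snapshot, not part of the agenda's actual results.
"""
import random


def optimal_action(narrow_reward: float, hub_rewards: list[float]) -> str:
    """Return the optimal first action when reward is paid only at the
    terminal state and there is no discounting.

    - "narrow": leads directly to a single terminal state.
    - "open":   leads to a hub from which any of len(hub_rewards)
                terminal states can then be reached.
    """
    return "open" if max(hub_rewards) >= narrow_reward else "narrow"


def power_seeking_frequency(num_options: int, samples: int = 100_000, seed: int = 0) -> float:
    """Fraction of i.i.d. Uniform(0, 1) reward draws under which the
    option-preserving ("open") action is optimal."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(samples):
        narrow = rng.random()
        hub = [rng.random() for _ in range(num_options)]
        if optimal_action(narrow, hub) == "open":
            hits += 1
    return hits / samples


if __name__ == "__main__":
    # With k terminal options behind the hub and 1 behind the narrow action,
    # the predicted frequency for continuous i.i.d. rewards is k / (k + 1).
    for k in (1, 2, 5, 20):
        freq = power_seeking_frequency(num_options=k)
        print(f"options={k:>2}  empirical={freq:.3f}  predicted={k / (k + 1):.3f}")
```

As the number of options preserved by the "open" action grows, the empirical frequency tracks k / (k + 1), so the option-preserving behavior becomes near-universal across reward functions, which is the flavor of claim this agenda aims to state and prove in full generality.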

Broad Approach

maths / philosophy

Target Case

worst-case

Orthodox Problems Addressed

Corrigibility is anti-natural, Instrumental convergence

Key People

Ram Potham, Michael K. Cohen, Max Harms/Raelifin, John Wentworth, David Lorell, Elliott Thornley

Funding

Estimated FTEs: 1-10

Critiques

Ryan Greenblatt’s criticism of one behavioural proposal

See Also

agent-foundations, control

Outputs in 2025

10 items in the review. See the wiki/summaries/ entries whose frontmatter contains agenda: behavior-alignment-theory (these were generated alongside this file from the same export).

Source
