Theory for aligning multiple AIs — SR2025 Agenda Snapshot
One-sentence summary: Use realistic game-theory variants (e.g. evolutionary game theory, computational game theory), or develop alternative game-theoretic frameworks, to describe and predict the collective and individual behaviours of AI agents in multi-agent scenarios.
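For concreteness, a minimal sketch of the computational-game-theory flavour of this idea (not taken from the agenda itself; the payoff matrix is hypothetical): enumerating the pure-strategy Nash equilibria of a two-player matrix game, the basic object such analyses use to predict strategic play.

```python
import numpy as np

# Hypothetical Prisoner's Dilemma payoffs: actions 0 = cooperate, 1 = defect.
A = np.array([[3, 0],   # row player's payoff when row cooperates
              [5, 1]])  # row player's payoff when row defects
B = A.T                 # symmetric game: column payoffs are the transpose

def pure_nash(A, B):
    """Return all (row, col) action pairs that are pure-strategy Nash equilibria."""
    equilibria = []
    for i in range(A.shape[0]):
        for j in range(A.shape[1]):
            row_best = A[i, j] >= A[:, j].max()  # row can't gain by deviating
            col_best = B[i, j] >= B[i, :].max()  # column can't gain either
            if row_best and col_best:
                equilibria.append((i, j))
    return equilibria

print(pure_nash(A, B))  # [(1, 1)]: mutual defection, as expected
```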
Theory of Change
While traditional AGI safety focuses on idealized decision theory and individual agents, it's plausible that strategic AI agents will first emerge (or are emerging now) in a complex, multi-AI strategic landscape. We need granular, realistic formal models of AIs' strategic interactions and collective dynamics to understand this future.
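To illustrate what "collective dynamics" means formally, here is a minimal sketch, assuming a toy Hawk-Dove game with hypothetical payoffs: the replicator equation from evolutionary game theory, which tracks how the strategy mix in a population of agents shifts as better-performing strategies proliferate.

```python
import numpy as np

V, C = 2.0, 3.0                           # contested value and cost of conflict
payoff = np.array([[(V - C) / 2, V],      # Hawk vs (Hawk, Dove)
                   [0.0,         V / 2]]) # Dove vs (Hawk, Dove)

x = np.array([0.1, 0.9])                  # initial shares: 10% Hawk, 90% Dove
dt = 0.01
for _ in range(10_000):
    fitness = payoff @ x                  # expected payoff of each strategy
    mean_fit = x @ fitness                # population-average payoff
    x += dt * x * (fitness - mean_fit)    # share grows iff fitness beats the mean

print(x)  # converges toward [V/C, 1 - V/C], roughly [0.667, 0.333]
```

The population settles at a mixed equilibrium (about two-thirds Hawk here) rather than a single dominant strategy; this agenda is after realistic analogues of that kind of qualitative prediction for interacting AI systems.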
Broad Approach
cognitive
Target Case
mixed
Orthodox Problems Addressed
Goals misgeneralize out of distribution, Superintelligence can fool human supervisors, Superintelligence can hack software supervisors
Key People
Lewis Hammond, Emery Cooper, Allan Chan, Caspar Oesterheld, Vincent Conitzer, Vojta Kovarik, Nathaniel Sauerberg, ACS Research, Jan Kulveit, Richard Ngo, Emmett Shear, Softmax, Full Stack Alignment, AI Objectives Institute, Sahil, TJ, Andrew Critch
Funding
SFF, CAIF, DeepMind, Macroscopic Ventures
Estimated FTEs: 10
See Also
tools-for-aligning-multiple-ais, aligning-what
Outputs in 2025
12 items in the review. See the wiki/summaries/ entries with frontmatter agenda: theory-for-aligning-multiple-ais (generated alongside this file from the same export).
Source
- Row in shallow-review-2025/agendas.csv (name = Theory for aligning multiple AIs) — Shallow Review of Technical AI Safety 2025.
Related Pages
- ai-safety
- aligning-what
- tools-for-aligning-multiple-ais
- aligned-to-who
- aligning-to-context
- aligning-to-the-social-contract
Sources cited
Primary URLs harvested from this page’s summary references. Auto-generated by scripts/backfill_citations.py; edit by re-running, not by hand.
- Summary: AI Safety (Wikipedia) — referenced as [[ai-safety]]