Aligning to the social contract — SR2025 Agenda Snapshot
One-sentence summary: Derive AIs’ operational values from ‘social contract’-style formalisms of ideal civic deliberation, and from the rulesets for civic actors that those formalisms imply
Theory of Change
Formalize and apply the liberal tradition’s project of defining civic principles separable from any substantive conception of the good, aligning our AIs to those civic principles and thereby bypassing fragile utility-learning and intractable utility-calculation
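To make the structural contrast concrete, here is a minimal illustrative sketch (not drawn from the agenda’s own work): instead of learning and maximizing a utility function, the agent filters candidate actions through an explicit ruleset of civic principles. Every name in it (`Action`, `CivicRule`, `no_unilateral_harm`, the placeholder consultation record) is a hypothetical stand-in, not an API from any cited paper.

```python
from dataclasses import dataclass
from typing import Callable, List

# Hypothetical types for illustration only.
@dataclass
class Action:
    description: str
    affected_parties: List[str]

# A "civic rule" is a predicate over actions: it says whether an action is
# permissible under a contract-style principle, without scoring the action
# on any substantive utility scale.
CivicRule = Callable[[Action], bool]

def no_unilateral_harm(action: Action) -> bool:
    # Toy stand-in for a harm-avoidance principle.
    return "harm" not in action.description

def affected_parties_consulted(action: Action) -> bool:
    # Toy stand-in for a procedural-legitimacy principle: every affected
    # party must appear in some (hypothetical) consultation record.
    consulted = {"alice", "bob"}  # placeholder record
    return all(p in consulted for p in action.affected_parties)

RULESET: List[CivicRule] = [no_unilateral_harm, affected_parties_consulted]

def permissible(action: Action) -> bool:
    """An action is permissible iff it violates no civic rule.

    Contrast with utility maximization: no utilities are estimated or
    compared, only rule compliance is checked, which is what lets this
    style of agent sidestep utility learning and utility calculation.
    """
    return all(rule(action) for rule in RULESET)

def choose(candidates: List[Action]) -> List[Action]:
    # The agent selects only from the rule-compliant subset; any further
    # choice among permissible actions can use ordinary task objectives.
    return [a for a in candidates if permissible(a)]
```

The design choice the sketch highlights is that the ruleset constrains which actions are on the table at all, leaving task-level optimization to operate only within the permissible set.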
Broad Approach
cognitive
Target Case
mixed
Orthodox Problems Addressed
Value is fragile and hard to specify, Goals misgeneralize out of distribution, Instrumental convergence, Humanlike minds/goals are not necessarily safe, Fair, sane pivotal processes
Key People
Gillian Hadfield, Tan Zhi-Xuan, Sydney Levine, Matija Franklin, Joshua B. Tenenbaum
Funding
DeepMind, Macroscopic Ventures
Estimated FTEs: 5–10
See Also
aligning-to-context, aligning-what
Outputs in 2025
8 items in the review. See the wiki/summaries/ entries with frontmatter agenda: aligning-to-the-social-contract (generated alongside this file from the same export).
Source
- Row in shallow-review-2025/agendas.csv (name = Aligning to the social contract) — Shallow Review of Technical AI Safety 2025.
Related Pages
- ai-safety
- aligning-to-context
- aligning-what
- aligned-to-who
- theory-for-aligning-multiple-ais
- tools-for-aligning-multiple-ais
Sources cited
Primary URLs harvested from this page’s summary references. Auto-generated by scripts/backfill_citations.py; edit by re-running, not by hand.
- Summary: AI Safety (Wikipedia) — referenced as [[ai-safety]]