Aligned to who? — SR2025 Agenda Snapshot
One-sentence summary: Technical protocols for taking seriously the plurality of human values, cultures, and communities when aligning AI to “humanity”
Theory of Change
Use democratic, pluralist, and context-sensitive principles to guide AI development, alignment, and deployment. Doing this as an afterthought in post-training or in the model spec isn't good enough; the aim is to continuously shape AI's social and technical feedback loops on the road to AGI.
Broad Approach
behavioral
Target Case
average
Orthodox Problems Addressed
Value is fragile and hard to specify; fair, sane pivotal processes
Key People
Joel Z. Leibo, Divya Siddarth, Séb Krier, Luke Thorburn, Seth Lazar, AI Objectives Institute, The Collective Intelligence Project, Vincent Conitzer
Funding
Future of Life Institute, Survival and Flourishing Fund, DeepMind, CAIF
Estimated FTEs: 5 - 15
See Also
aligning-what, aligning-to-context
Outputs in 2025
9 items in the review. See the wiki/summaries/ entries with frontmatter agenda: aligned-to-who (these were generated alongside this file from the same export).
Source
- Row in shallow-review-2025/agendas.csv (name = Aligned to who?), from the Shallow Review of Technical AI Safety 2025.
Related Pages
- ai-safety
- aligning-to-context
- aligning-what
- aligning-to-the-social-contract
- theory-for-aligning-multiple-ais
- tools-for-aligning-multiple-ais
Sources cited
Primary URLs harvested from this page's summary references. Auto-generated by scripts/backfill_citations.py; to edit, re-run the script rather than editing by hand.
- Summary: AI Safety (Wikipedia), referenced as [[ai-safety]]