Aligning to context — SR2025 Agenda Snapshot
One-sentence summary: Align AI directly to the role of participant, collaborator, or advisor within our best real human practices and institutions, rather than to separately representable goals, rules, or utility functions.
Theory of Change
“Many classical problems in AGI alignment are downstream of a type error about human values.” Operationalizing a correct view of human values, one that treats them as impossible or impractical to abstract from concrete practices, will unblock progress on value fragility, goal misgeneralization, instrumental convergence, and pivotal-act specification.
Broad Approach
behavioural
Target Case
mixed
Orthodox Problems Addressed
Value is fragile and hard to specify; Corrigibility is anti-natural; Goals misgeneralize out of distribution; Instrumental convergence; Fair, sane pivotal processes
Key People
Full Stack Alignment, Meaning Alignment Institute, Plurality Institute, Tan Zhi-Xuan, Matija Franklin, Ryan Lowe, Joe Edelman, Oliver Klingefjord
Funding
ARIA, OpenAI, Survival and Flourishing Fund
Estimated FTEs: 5
Outputs in 2025
8 items in the review. See the wiki/summaries/ entries with frontmatter agenda: aligning-to-context (these were generated alongside this file from the same export); a lookup sketch follows.
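For readers locating those entries programmatically, here is a minimal sketch. The wiki/summaries/ directory and the agenda: aligning-to-context frontmatter key come from this page; the .md extension, the YAML-style --- delimiters, and the helper name are assumptions about the export layout.

```python
# Minimal sketch: list the wiki/summaries/ entries tagged with this agenda.
# Assumes each entry is a Markdown file whose frontmatter is delimited by
# "---" lines; directory layout and extension are assumptions, not confirmed.
from pathlib import Path

def frontmatter_lines(path: Path) -> list[str]:
    """Return the lines between the opening and closing '---' markers, or []."""
    lines = path.read_text(encoding="utf-8").splitlines()
    if not lines or lines[0].strip() != "---":
        return []
    for i, line in enumerate(lines[1:], start=1):
        if line.strip() == "---":
            return lines[1:i]
    return []

for entry in sorted(Path("wiki/summaries").glob("*.md")):
    if any(l.strip() == "agenda: aligning-to-context" for l in frontmatter_lines(entry)):
        print(entry.name)
```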
Source
- Row in shallow-review-2025/agendas.csv (name = Aligning to context) — Shallow Review of Technical AI Safety 2025.
Related Pages
- ai-safety
- aligned-to-who
- aligning-what
- aligning-to-the-social-contract
- theory-for-aligning-multiple-ais
- tools-for-aligning-multiple-ais
Sources cited
Primary URLs harvested from this page’s summary references. Auto-generated by scripts/backfill_citations.py; regenerate by re-running the script rather than editing by hand.
- Summary: AI Safety (Wikipedia) — referenced as [[ai-safety]]