Agent foundations — SR2025 Agenda Snapshot
One-sentence summary: Develop philosophical clarity and mathematical formalizations of building blocks that might be useful for plans to align strong superintelligence, such as agency, optimization strength, decision theory, abstractions, and concepts.
Theory of Change
Rigorously understand optimization processes and agents, and what it means for them to be aligned in a substrate-independent way → identify impossibility results and necessary conditions for aligned optimizer systems → use this theoretical understanding to eventually design safe architectures that remain stable and safe under self-reflection
Broad Approach
cognitive
Target Case
worst-case
Orthodox Problems Addressed
Value is fragile and hard to specify, Corrigibility is anti-natural, Goals misgeneralize out of distribution
Key People
Abram Demski, Alex Altair, Sam Eisenstat, Thane Ruthenis, Alfred Harwood, Daniel C, Dalcy K, José Pedro Faustino
See Also
aligning-what, tiling-agents, Dovetail
Outputs in 2025
10 items in the review. See the wiki/summaries/ entries whose frontmatter sets agenda: agent-foundations (these summaries were generated alongside this file from the same export).
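A minimal lookup sketch for those entries, assuming they are Markdown files directly under wiki/summaries/ with simple YAML frontmatter delimited by --- lines (neither detail is confirmed by this page):

```python
# Minimal sketch: list review summaries tagged agenda: agent-foundations.
# Assumptions (not confirmed by this page): summaries are .md files directly
# under wiki/summaries/, with flat `key: value` YAML frontmatter between
# two "---" delimiter lines.
from pathlib import Path


def frontmatter(path: Path) -> dict:
    """Parse flat `key: value` frontmatter from a Markdown file, if present."""
    lines = path.read_text(encoding="utf-8").splitlines()
    if not lines or lines[0].strip() != "---":
        return {}
    meta = {}
    for line in lines[1:]:
        if line.strip() == "---":  # closing delimiter
            break
        key, sep, value = line.partition(":")
        if sep:
            meta[key.strip()] = value.strip()
    return meta


matches = sorted(
    p for p in Path("wiki/summaries").glob("*.md")
    if frontmatter(p).get("agenda") == "agent-foundations"
)
print(f"{len(matches)} summaries tagged agent-foundations")
for p in matches:
    print(" -", p.name)
```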
Source
- Row in shallow-review-2025/agendas.csv (name = Agent foundations) — Shallow Review of Technical AI Safety 2025.
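A hedged companion sketch for pulling that row out of the export with Python's csv module; it assumes agendas.csv has a header row that includes a name column (consistent with the name = Agent foundations key above) and makes no assumptions about the other columns:

```python
# Hedged sketch: fetch the "Agent foundations" row from the review export.
# Assumes agendas.csv has a header row with a "name" column, matching the
# lookup key cited above; the remaining columns are printed as found.
import csv

with open("shallow-review-2025/agendas.csv", newline="", encoding="utf-8") as f:
    row = next(
        (r for r in csv.DictReader(f) if r.get("name") == "Agent foundations"),
        None,
    )

if row is None:
    print("Row not found; check the csv path and column names.")
else:
    for field, value in row.items():
        print(f"{field}: {value}")
```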
Related Pages
- ai-safety
- aligning-what
- tiling-agents
- asymptotic-guarantees
- behavior-alignment-theory
- heuristic-explanations
- high-actuation-spaces
- natural-abstractions
- other-corrigibility
- the-learning-theoretic-agenda
Sources cited
Primary URLs harvested from this page’s summary references. Auto-generated by scripts/backfill_citations.py; edit by re-running, not by hand.
- Summary: AI Safety (Wikipedia) — referenced as [[ai-safety]]