Agent foundations — SR2025 Agenda Snapshot

One-sentence summary: Develop philosophical clarity and mathematical formalizations of building blocks (agency, optimization strength, decision theory, abstractions, concepts, etc.) that might be useful for plans to align strong superintelligence.
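
One of those building blocks, optimization strength, already has a candidate formalization in the literature: Yudkowsky's optimization power. A minimal worked sketch (illustrative, not this agenda's official definition), assuming a finite state space $S$, a utility function $U$ encoding the optimizer's preferences, and an achieved outcome $s^*$:

$$\mathrm{OP}(s^*) = -\log_2 \frac{\bigl|\{\, s \in S : U(s) \ge U(s^*) \,\}\bigr|}{|S|}$$

Reading: an optimizer that steers the world into the top $2^{-k}$ fraction of outcomes, ranked by its own preferences, exerts $k$ bits of optimization power.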

Theory of Change

Rigorously understand optimization processes and agents, and what it means for them to be aligned in a substrate-independent way → identify impossibility results and necessary conditions for aligned optimizer systems → use this theoretical understanding to eventually design safe architectures that remain stable and safe under self-reflection
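
The clause "remain stable and safe under self-reflection" points at the tiling agents problem (see tiling-agents under See Also). A minimal schematic rendering, not the agenda's own formalism: a parent agent reasoning in a theory $T$ wants to accept its successor's proofs of safety at face value, i.e. it needs the reflection schema

$$\square_T\,\mathrm{Safe}(a) \rightarrow \mathrm{Safe}(a)$$

for arbitrary actions $a$, where $\square_T$ denotes provability in $T$ and $\mathrm{Safe}$ is a stand-in safety predicate. Löb's theorem blocks this: $T$ proves $\square_T \varphi \rightarrow \varphi$ only for $\varphi$ it already proves, so a consistent agent cannot grant blanket trust to a successor running its own reasoning. Impossibility results of this flavor are exactly what the second step above hunts for.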

Broad Approach

cognitive

Target Case

worst-case

Orthodox Problems Addressed

Value is fragile and hard to specify, Corrigibility is anti-natural, Goals misgeneralize out of distribution

Key People

Abram Demski, Alex Altair, Sam Eisenstat, Thane Ruthenis, Alfred Harwood, Daniel C, Dalcy K, José Pedro Faustino

See Also

aligning-what, tiling-agents, Dovetail

Outputs in 2025

10 items in the review. See the wiki/summaries/ entries with frontmatter agenda: agent-foundations, generated alongside this file from the same export.
