The Learning-Theoretic Agenda — SR2025 Agenda Snapshot
One-sentence summary: Create a mathematical theory of intelligent agents that encompasses both humans and the AIs we want, one that specifies what it means for two such agents to be aligned; translate between its ontology and ours; and produce formal desiderata for a training setup that produces coherent AGIs similar to (our model of) an aligned agent.
Theory of Change
Fix formal epistemology to work out how to avoid deep training problems
Broad Approach
cognitive
Target Case
worst-case
Orthodox Problems Addressed
Value is fragile and hard to specify, Goals misgeneralize out of distribution, Humans cannot be first-class parties to a superintelligent value handshake
Key People
Vanessa Kosoy, Diffractor, Gergely Szücs
Funding
Survival and Flourishing Fund, ARIA, UK AISI, Coefficient Giving
Estimated FTEs: 3
Critiques
Outputs in 2025
6 items in the review. See the wiki/summaries/ entries with frontmatter `agenda: the-learning-theoretic-agenda` (these were generated alongside this file from the same export).
Source
- Row in shallow-review-2025/agendas.csv (name = The Learning-Theoretic Agenda) — Shallow Review of Technical AI Safety 2025.
Related Pages
- ai-safety
- agent-foundations
- asymptotic-guarantees
- behavior-alignment-theory
- heuristic-explanations
- high-actuation-spaces
- natural-abstractions
- other-corrigibility
- tiling-agents
Sources cited
Primary URLs harvested from this page’s summary references. Auto-generated by scripts/backfill_citations.py; edit by re-running, not by hand.
- Summary: AI Safety (Wikipedia) — referenced as [[ai-safety]]