The Learning-Theoretic Agenda — SR2025 Agenda Snapshot

One-sentence summary: Create a mathematical theory of intelligent agents that encompasses both humans and the AIs we want, and that specifies what it means for two such agents to be aligned; translate between its ontology and ours; and produce formal desiderata for a training setup that yields coherent AGIs similar to (our model of) an aligned agent.
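
The "formal desiderata" phrasing has a concrete flavor worth glossing. As a minimal sketch, one such desideratum is a worst-case regret bound: the trained policy should come within ε of an aligned agent's achievable utility on every environment in a hypothesis class. This is an illustration of the genre, not a definition taken from the agenda's own papers; all symbols below are assumptions.

```latex
% Illustrative only: \pi is the trained policy, \mathcal{H} a class of
% environments (hypotheses), u the utility of (our model of) an aligned
% agent, and \varepsilon a tolerance. The max over \mathcal{H} reflects
% the agenda's worst-case target.
\[
\operatorname{Reg}(\pi) \;=\;
\max_{\mu \in \mathcal{H}}
\Bigl( \sup_{\pi'} \mathbb{E}_{\mu,\pi'}[u] \;-\; \mathbb{E}_{\mu,\pi}[u] \Bigr)
\;\le\; \varepsilon
\]
```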

Theory of Change

Fix formal epistemology, then use it to work out how to avoid deep problems in training.

Broad Approach

cognitive

Target Case

worst-case

Orthodox Problems Addressed

Value is fragile and hard to specify, Goals misgeneralize out of distribution, Humans cannot be first-class parties to a superintelligent value handshake

Key People

Vanessa Kosoy, Diffractor, Gergely Szücs

Funding

Survival and Flourishing Fund, ARIA, UK AISI, Coefficient Giving

Estimated FTEs: 3

Critiques

Matolcsi

Outputs in 2025

6 items in the review. See the wiki/summaries/ entries whose frontmatter reads agenda: the-learning-theoretic-agenda (they were generated alongside this file from the same export); a sketch of one way to list them follows.
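
A minimal sketch of listing those entries, assuming wiki/summaries/ is a directory of Markdown files with YAML frontmatter; this script and the file layout are hypothetical, not part of the export:

```python
#!/usr/bin/env python3
"""List summary entries tagged with a given agenda in their YAML frontmatter."""
from pathlib import Path

AGENDA = "the-learning-theoretic-agenda"

def frontmatter(text: str) -> str:
    """Return the YAML frontmatter block at the top of a file, or ''."""
    if not text.startswith("---"):
        return ""
    end = text.find("\n---", 3)
    return text[3:end] if end != -1 else ""

# Path.glob yields nothing (rather than erroring) if the directory is absent.
for path in sorted(Path("wiki/summaries").glob("*.md")):
    if f"agenda: {AGENDA}" in frontmatter(path.read_text(encoding="utf-8")):
        print(path)
```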

Source

Sources cited

Primary URLs harvested from this page’s summary references. Auto-generated by scripts/backfill_citations.py; edit by re-running, not by hand.
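
For reference, a minimal sketch of the harvesting step that note describes. scripts/backfill_citations.py itself is not reproduced here; the file layout and the URL regex below are assumptions.

```python
#!/usr/bin/env python3
"""Sketch: collect primary URLs from the summaries' reference sections."""
import re
from pathlib import Path

# Crude URL matcher; stops at whitespace and common closing delimiters.
URL_RE = re.compile(r"https?://[^\s)\]>]+")

urls: list[str] = []
for path in sorted(Path("wiki/summaries").glob("*.md")):
    urls.extend(URL_RE.findall(path.read_text(encoding="utf-8")))

# Deduplicate while keeping first-seen order; print one URL per line.
for url in dict.fromkeys(urls):
    print(url)
```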