Asymptotic guarantees — SR2025 Agenda Snapshot
One-sentence summary: Prove that if a safety process has enough resources (human data quality, training time, neural network capacity), then in the limit some system specification is guaranteed; use complexity theory, game theory, learning theory, and related fields both to strengthen such asymptotic guarantees and to develop ways of showing convergence.
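As an illustration of the intended shape of such a guarantee (our sketch, not a result stated in the review), consider the standard PAC bound for a realizable, finite hypothesis class, where the resource is the labelled sample size n:

```latex
% Minimal sketch, assuming realizability and a finite hypothesis class H.
% As the resource n grows, the specification "error at most epsilon"
% holds with probability approaching 1.
\Pr\big[\operatorname{err}(\hat{h}) \le \varepsilon\big] \;\ge\; 1 - \delta
\quad \text{whenever} \quad
n \;\ge\; \frac{1}{\varepsilon}\left(\ln|\mathcal{H}| + \ln\frac{1}{\delta}\right)
```

The agenda aims to prove statements of this shape about safety processes (debate, oversight protocols), not only about supervised learning.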
Theory of Change
Formal verification may be too hard. Instead, make safety cases stronger by modelling the processes they rest on and proving that those processes would work in the limit of sufficient resources.
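As a toy model of "proving a process works in the limit" (our sketch, not code from the agenda; the protocol is in the spirit of Irving et al.'s AI safety via debate), the following shows a judge who can afford to check only one step of a long computation, yet reliably sides with the honest debater for any computation length and any adversary strategy:

```python
# Minimal sketch, assuming a bisection-debate protocol over an iterated
# computation. The judge verifies ONE step; O(log n) debate rounds shrink
# the disagreement to a single step, so honesty wins for arbitrarily large n.
import random

MOD = 1_000_003

def step(x: int) -> int:
    """One computation step; the only thing the judge can check directly."""
    return (3 * x + 7) % MOD

def run(x0: int, n: int) -> int:
    """Ground truth after n steps (far too long for the judge to replay)."""
    x = x0
    for _ in range(n):
        x = step(x)
    return x

def debate(x0: int, n: int, adversary_mid) -> str:
    """The honest player claims run(x0, n); the adversary disputes it and
    answers each midpoint query via adversary_mid(mid, honest_value).
    Returns the player the judge sides with after bisecting the dispute."""
    lo, lo_val = 0, x0            # both players agree on the value at index lo
    hi, h_hi = n, run(x0, n)      # they disagree about the value at index hi
    while hi - lo > 1:
        mid = (lo + hi) // 2
        h_mid = run(x0, mid)      # honest midpoint claim (always the truth)
        if adversary_mid(mid, h_mid) == h_mid:
            lo, lo_val = mid, h_mid   # agreement: dispute moves to [mid, hi]
        else:
            hi, h_hi = mid, h_mid     # disagreement: recurse on [lo, mid]
    # The judge's single affordable check: one true step from the agreed value.
    return "honest" if step(lo_val) == h_hi else "adversary"

if __name__ == "__main__":
    random.seed(0)
    adversary = lambda mid, h_mid: random.choice([h_mid, (h_mid + 1) % MOD])
    print(debate(5, 10_000, adversary))  # -> "honest", for any strategy and any n
```

The asymptotic claim being modelled: the judge's cost stays constant while the verifiable computation grows without bound, which is the kind of limit statement this agenda wants to establish for real safety cases.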
Broad Approach
cognitive
Target Case
pessimistic
Orthodox Problems Addressed
Goals misgeneralize out of distribution, Superintelligence can fool human supervisors
Key People
AISI, Jacob Pfau, Benjamin Hilton, Geoffrey Irving, Simon Marshall, Will Kirby, Martin Soto, David Africa, davidad
Funding
AISI
Estimated FTEs: 5–10
Critiques
Self-critique in UK AISI’s Alignment Team: Research Agenda
See Also
debate, guaranteed-safe-ai, control
Outputs in 2025
4 items in the review. See the wiki/summaries/ entries with frontmatter agenda: asymptotic-guarantees (these were generated alongside this file from the same export).
Source
- Row in shallow-review-2025/agendas.csv (name = Asymptotic guarantees), from the Shallow Review of Technical AI Safety 2025.
Related Pages
- ai-safety
- control
- debate
- guaranteed-safe-ai
- agent-foundations
- behavior-alignment-theory
- heuristic-explanations
- high-actuation-spaces
- natural-abstractions
- other-corrigibility
- the-learning-theoretic-agenda
- tiling-agents
Sources cited
Primary URLs harvested from this page’s summary references. Auto-generated by scripts/backfill_citations.py; edit by re-running, not by hand.
- Summary: AI Safety (Wikipedia), referenced as [[ai-safety]]