Assistance games, assistive agents — SR2025 Agenda Snapshot

One-sentence summary: Formalize how AI assistants learn about human preferences under uncertainty and partial observability, and construct environments that better incentivize AIs to learn what we want them to learn.
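
The agenda's core formal object is usually rendered as a cooperative, two-agent game in which only the human knows the reward parameters. A minimal sketch in the standard assistance-game / cooperative-IRL notation (following common usage, not any single paper from this review):

$$
\mathcal{M} \;=\; \big\langle S,\ \{A^{H}, A^{R}\},\ T(s' \mid s, a^{H}, a^{R}),\ \Theta,\ R(s, a^{H}, a^{R}; \theta),\ P_{0}(s_{0}, \theta),\ \gamma \big\rangle
$$

Both agents act to maximize the expected discounted sum of $R$, but the reward parameter $\theta \sim P_{0}$ is observed only by the human $H$; the assistant $R$ must infer it from the human's behavior, which is what makes this an assistance game under partial observability rather than an ordinary MDP with a fully specified reward.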

Theory of Change

Understand what kinds of things can go wrong when humans are directly involved in training a model → build tools that make it easier for a model to learn what humans want it to learn.

Broad Approach

engineering / cognitive

Target Case

varies

Orthodox Problems Addressed

Value is fragile and hard to specify; Humanlike minds/goals are not necessarily safe

Key People

Joar Skalse, Anca Dragan, Caspar Oesterheld, David Krueger, Dylan Hadfield-Menell, Stuart Russell

Funding

Future of Life Institute, Coefficient Giving, Survival and Flourishing Fund, Cooperative AI Foundation, Polaris Ventures

Critiques

nice summary of historical problem statements

See Also

guaranteed-safe-ai

Outputs in 2025

5 items in the review. See the wiki/summaries/ entries with frontmatter agenda: assistance-games-assistive-agents (these were generated alongside this file from the same export).

Source

Sources cited

Primary URLs harvested from this page’s summary references. Auto-generated by scripts/backfill_citations.py; edit by re-running, not by hand.