Assistance games, assistive agents — SR2025 Agenda Snapshot
One-sentence summary: Formalize how AI assistants learn about human preferences under uncertainty and partial observability, and construct environments that better incentivize AIs to learn what we want them to learn.
Theory of Change
Understand what kinds of things can go wrong when humans are directly involved in training a model → build tools that make it easier for a model to learn what humans want it to learn.
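To make the setup concrete, here is a minimal sketch of the assistance-game idea: the assistant is uncertain about the human's reward function, infers it from observed human behaviour, and then acts under its posterior. The item set, candidate reward vectors, and the Boltzmann-rational human model below are illustrative assumptions, not a reproduction of any specific paper's formalism.

```python
# Minimal assistance-game sketch: the assistant does not know the human's
# reward function and must infer it from observed human choices.
# All specifics here (items, hypothesis space, rationality model) are
# illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

ITEMS = ["apple", "banana", "cake"]

# Hypothesis space: candidate reward vectors theta, one reward per item.
THETAS = np.array([
    [1.0, 0.0, 0.0],   # human only values apples
    [0.0, 1.0, 0.0],   # human only values bananas
    [0.5, 0.5, 1.0],   # human mostly values cake
])

def human_policy(theta, beta=3.0):
    """Boltzmann-rational human: picks items with probability ~ exp(beta * reward)."""
    logits = beta * theta
    probs = np.exp(logits - logits.max())
    return probs / probs.sum()

def update_posterior(posterior, observed_item, beta=3.0):
    """Bayesian update over theta after observing one human choice."""
    likelihoods = np.array([human_policy(t, beta)[observed_item] for t in THETAS])
    posterior = posterior * likelihoods
    return posterior / posterior.sum()

# True (hidden) human preferences: mostly cake.
true_theta = THETAS[2]

# Assistant starts with a uniform prior over the hypothesis space.
posterior = np.ones(len(THETAS)) / len(THETAS)

# Assistant observes a few human choices and updates its beliefs.
for _ in range(5):
    choice = rng.choice(len(ITEMS), p=human_policy(true_theta))
    posterior = update_posterior(posterior, choice)

# Assistant acts to maximise expected reward under its posterior over theta.
expected_rewards = posterior @ THETAS
print("posterior over theta:", np.round(posterior, 3))
print("assistant fetches:", ITEMS[int(np.argmax(expected_rewards))])
```

The point of the coupling is the incentive structure: because the assistant's payoff depends on a hidden human parameter, it is incentivized to observe, ask, and defer rather than optimize a fixed proxy objective.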
Broad Approach
engineering / cognitive
Target Case
varies
Orthodox Problems Addressed
Value is fragile and hard to specify, Humanlike minds/goals are not necessarily safe
Key People
Joar Skalse, Anca Dragan, Caspar Oesterheld, David Krueger, Dylan Hadfield-Menell, Stuart Russell
Funding
Future of Life Institute, Coefficient Giving, Survival and Flourishing Fund, Cooperative AI Foundation, Polaris Ventures
Critiques
A nice summary of historical problem statements.
See Also
Outputs in 2025
5 items in the review. See the wiki/summaries/ entries with frontmatter agenda: assistance-games-assistive-agents (these were generated alongside this file from the same export).
Source
- Row in shallow-review-2025/agendas.csv (name = Assistance games, assistive agents) — Shallow Review of Technical AI Safety 2025.
Related Pages
- ai-safety
- guaranteed-safe-ai
- black-box-make-ai-solve-it
- capability-removal-unlearning
- chain-of-thought-monitoring
- character-training-and-persona-steering
- control
- data-filtering
- data-poisoning-defense
- data-quality-for-alignment
- emergent-misalignment
- harm-reduction-for-open-weights
- hyperstition-studies
- inference-time-in-context-learning
- inference-time-steering
- inoculation-prompting
- iterative-alignment-at-post-train-time
- iterative-alignment-at-pretrain-time
- mild-optimisation
- model-psychopathology
- model-specs-and-constitutions
- model-values-model-preferences
- rl-safety
- safeguards-inference-time-auxiliaries
- synthetic-data-for-alignment
- the-neglected-approaches-approach
Sources cited
Primary URLs harvested from this page’s summary references. Auto-generated by scripts/backfill_citations.py; edit by re-running, not by hand.
- Summary: AI Safety (Wikipedia) — referenced as [[ai-safety]]