Capability removal: unlearning — SR2025 Agenda Snapshot
One-sentence summary: Developing methods to selectively remove specific information, capabilities, or behaviors from a trained model, ideally without retraining it from scratch. A mixture of black-box and white-box approaches.
Theory of Change
If an AI learns dangerous knowledge (e.g., dual-use capabilities like virology or hacking, or knowledge of its own safety controls) or exhibits undesirable behaviors (e.g., memorizing private data), we can specifically erase this “bad” knowledge post-training, which is much cheaper and faster than retraining, thereby making the model safer. Alternatively, one can intervene during pre-training to prevent the model from learning the material in the first place (even when data filtering is imperfect). One could also imagine unlearning propensities toward power-seeking, deception, sycophancy, or spite.
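To make the post-training route concrete, here is a minimal illustrative sketch of one common family of unlearning methods (gradient ascent on a “forget set” with a “retain set” regularizer). This is not the method of any particular group listed below; the model choice, dataloaders (forget_loader, retain_loader), and the retain_weight coefficient are all assumptions for illustration.

```python
# Illustrative unlearning sketch (gradient ascent on forget data, plain
# fine-tuning on retain data). Assumes two hypothetical dataloaders:
# `forget_loader` (material to erase) and `retain_loader` (general data
# whose performance should be preserved).
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("gpt2")  # placeholder model
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
retain_weight = 1.0  # assumed trade-off coefficient

def unlearn_step(forget_batch, retain_batch):
    """One update: ascend the loss on forget data, descend on retain data."""
    optimizer.zero_grad()
    forget_loss = model(**forget_batch, labels=forget_batch["input_ids"]).loss
    retain_loss = model(**retain_batch, labels=retain_batch["input_ids"]).loss
    # The negative sign turns descent into ascent on the forget set, pushing
    # the model away from reproducing that content, while the retain term
    # anchors general capabilities.
    loss = -forget_loss + retain_weight * retain_loss
    loss.backward()
    optimizer.step()
    return forget_loss.item(), retain_loss.item()
```

Whether such procedures actually remove the underlying capability, rather than merely suppressing it, is exactly what the critique below questions.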
Broad Approach
cognitive / engineering
Target Case
pessimistic
Orthodox Problems Addressed
Superintelligence can hack software supervisors; A boxed AGI might exfiltrate itself by steganography or spearphishing; Humanlike minds/goals are not necessarily safe
Key People
Rowan Wang, Avery Griffin, Johannes Treutlein, Zico Kolter, Bruce W. Lee, Addie Foote, Alex Infanger, Zesheng Shi, Yucheng Zhou, Jing Li, Timothy Qian, Stephen Casper, Alex Cloud, Peter Henderson, Filip Sondej, Fazl Barez
Funding
Coefficient Giving, MacArthur Foundation, UK AI Safety Institute (AISI), Canadian AI Safety Institute (CAISI), industry labs (e.g., Microsoft Research, Google)
Estimated FTEs: 10-50
Critiques
Existing Large Language Model Unlearning Evaluations Are Inconclusive
See Also
data-filtering, white-box safety (i.e. interpretability), various-redteams
Outputs in 2025
18 items in the review. See the wiki/summaries/ entries with frontmatter agenda: capability-removal-unlearning (these were generated alongside this file from the same export).
Source
- Row in shallow-review-2025/agendas.csv (name = Capability removal: unlearning) — Shallow Review of Technical AI Safety 2025.
Related Pages
- ai-safety
- data-filtering
- various-redteams
- assistance-games-assistive-agents
- black-box-make-ai-solve-it
- chain-of-thought-monitoring
- character-training-and-persona-steering
- control
- data-poisoning-defense
- data-quality-for-alignment
- emergent-misalignment
- harm-reduction-for-open-weights
- hyperstition-studies
- inference-time-in-context-learning
- inference-time-steering
- inoculation-prompting
- iterative-alignment-at-post-train-time
- iterative-alignment-at-pretrain-time
- mild-optimisation
- model-psychopathology
- model-specs-and-constitutions
- model-values-model-preferences
- rl-safety
- safeguards-inference-time-auxiliaries
- synthetic-data-for-alignment
- the-neglected-approaches-approach
- meta
Sources cited
Primary URLs harvested from this page’s summary references. Auto-generated by scripts/backfill_citations.py; edit by re-running, not by hand.
- Summary: AI Safety (Wikipedia) — referenced as [[ai-safety]]