Model diffing — SR2025 Agenda Snapshot
One-sentence summary: Understand what happens when a model is fine-tuned, and what the “diff” between the fine-tuned and the original model consists of.
Theory of Change
By identifying the mechanistic differences between a base model and its fine-tune (e.g., after RLHF), we may be able to verify that safety behaviors are robustly “internalized” rather than superficially patched, and to detect whether dangerous capabilities or deceptive alignment have been introduced, without re-analyzing the entire model. The diff is also much smaller than the model itself, since most parameters barely change, so heavier interpretability methods can be applied to it.
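A minimal sketch of the weight-level version of this idea, assuming PyTorch and Hugging Face transformers; the model IDs are hypothetical placeholders, not this agenda’s actual experimental setup, and the two models must share an architecture:

```python
# Weight-level model diffing sketch (assumption: both checkpoints have
# identical architecture, so their parameter lists line up one-to-one).
import torch
from transformers import AutoModelForCausalLM

BASE_ID = "org/base-model"           # placeholder base checkpoint
FINETUNED_ID = "org/finetuned-model" # placeholder fine-tuned checkpoint

base = AutoModelForCausalLM.from_pretrained(BASE_ID)
finetuned = AutoModelForCausalLM.from_pretrained(FINETUNED_ID)

# For each weight tensor, measure relative drift ||W_ft - W_base|| / ||W_base||.
# Typically most tensors barely move, so analysis can concentrate on the
# few layers with large drift.
drift = []
with torch.no_grad():
    for (name, w_base), (_, w_ft) in zip(
        base.named_parameters(), finetuned.named_parameters()
    ):
        rel = (w_ft - w_base).norm() / (w_base.norm() + 1e-12)
        drift.append((name, rel.item()))

# Print the ten tensors that changed most, relative to their original norm.
for name, rel in sorted(drift, key=lambda t: -t[1])[:10]:
    print(f"{rel:8.5f}  {name}")
```

Relative rather than absolute norms keep tensors of very different scales comparable across layers; this only surfaces *where* the diff lives, and says nothing by itself about what behavior the change implements.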
Broad Approach
cognitive
Target Case
pessimistic
Orthodox Problems Addressed
Value is fragile and hard to specify
Key People
Julian Minder, Clément Dumas, Neel Nanda, Trenton Bricken, Jack Lindsey
Funding
various academic groups, Anthropic, Google DeepMind
Estimated FTEs: 10-30
See Also
sparse-coding, reverse-engineering
Outputs in 2025
9 items in the review. See the wiki/summaries/ entries with frontmatter agenda: model-diffing (these were generated alongside this file from the same export).
Source
- Row in shallow-review-2025/agendas.csv (name = Model diffing) — Shallow Review of Technical AI Safety 2025.
Related Pages
- ai-safety
- reverse-engineering
- sparse-coding
- activation-engineering
- causal-abstractions
- data-attribution
- extracting-latent-knowledge
- human-inductive-biases
- learning-dynamics-and-developmental-interpretability
- lie-and-deception-detectors
- monitoring-concepts
- other-interpretability
- pragmatic-interpretability
- representation-structure-and-geometry
Sources cited
Primary URLs harvested from this page’s summary references. Auto-generated by scripts/backfill_citations.py; edit by re-running, not by hand.
- Summary: AI Safety (Wikipedia) — referenced as [[ai-safety]]