Data attribution — SR2025 Agenda Snapshot
One-sentence summary: Quantifies the influence of individual training data points on a model’s specific behavior or output, allowing researchers to trace model properties (like misalignment, bias, or factual errors) back to their source in the training set.
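To make the idea concrete, below is a minimal, hypothetical sketch of one common family of attribution methods (gradient-similarity scoring, TracIn-style); it is illustrative only, and the estimators covered by this agenda (e.g. influence-function approximations) differ in detail. The names `model`, `loss_fn`, `train_examples`, and `query` are placeholders, not references to any specific codebase in the review.

```python
# Minimal sketch of gradient-similarity data attribution (TracIn-style).
# Assumes a PyTorch model and a differentiable per-example loss; all
# argument names are hypothetical placeholders for illustration.
import torch

def example_grad(model, loss_fn, x, y):
    """Flattened gradient of the loss on a single example."""
    loss = loss_fn(model(x), y)
    grads = torch.autograd.grad(
        loss, [p for p in model.parameters() if p.requires_grad]
    )
    return torch.cat([g.reshape(-1) for g in grads])

def attribution_scores(model, loss_fn, train_examples, query):
    """Score each training example by gradient similarity with a query behaviour.

    A larger dot product means the training example pushes the parameters in a
    direction that also reduces the loss on the query, i.e. it is credited with
    more influence on that output.
    """
    qx, qy = query
    q_grad = example_grad(model, loss_fn, qx, qy)
    return [
        torch.dot(example_grad(model, loss_fn, tx, ty), q_grad).item()
        for tx, ty in train_examples
    ]
```

Ranking training examples by these scores gives a crude "which data points most supported this output" list, which is the kind of trace the summary above describes.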
Theory of Change
By attributing harmful, biased, or unaligned behaviors to specific training examples, researchers can audit proprietary models, debug training data, and enable effective data deletion/unlearning.
Broad Approach
behavioural
Target Case
average
Orthodox Problems Addressed
Goals misgeneralize out of distribution, Value is fragile and hard to specify
Key People
Roger Grosse, Philipp Alexander Kreer, Jin Hwa Lee, Matthew Smith, Abhilasha Ravichander, Andrew Wang, Jiacheng Liu, Jiaqi Ma, Junwei Deng, Yijun Pan, Daniel Murfet, Jesse Hoogland
Funding
Various academic groups
Estimated FTEs: 30-60
Outputs in 2025
12 items in the review. See the wiki/summaries/ entries with frontmatter agenda: data-attribution (these were generated alongside this file from the same export).
Source
- Row in shallow-review-2025/agendas.csv (name = Data attribution) — Shallow Review of Technical AI Safety 2025.
Related Pages
- ai-safety
- data-quality-for-alignment
- activation-engineering
- causal-abstractions
- extracting-latent-knowledge
- human-inductive-biases
- learning-dynamics-and-developmental-interpretability
- lie-and-deception-detectors
- model-diffing
- monitoring-concepts
- other-interpretability
- pragmatic-interpretability
- representation-structure-and-geometry
- reverse-engineering
- sparse-coding
Sources cited
Primary URLs harvested from this page’s summary references. Auto-generated by scripts/backfill_citations.py; edit by re-running, not by hand.
- Summary: AI Safety (Wikipedia) — referenced as [[ai-safety]]