Data attribution — SR2025 Agenda Snapshot

One-sentence summary: Quantifies the influence of individual training data points on a model’s specific behavior or output, allowing researchers to trace model properties (like misalignment, bias, or factual errors) back to their source in the training set.
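The idea above can be illustrated with a minimal gradient-similarity sketch, in the spirit of TracIn-style attribution: score each training point by how well its loss gradient aligns with the gradient of the loss on a test behavior of interest. This is an illustrative toy (logistic regression, made-up data, hypothetical function names), not any group's actual implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_grad(w, x, y):
    """Gradient of the binary cross-entropy loss w.r.t. weights for one example."""
    p = sigmoid(x @ w)
    return (p - y) * x

def attribute(w, X_train, y_train, x_test, y_test):
    """Score each training point by gradient alignment with the test-point loss.

    A positive score means lowering the loss on that training point also lowers
    the loss on the test point (a 'proponent' of the behavior); a negative score
    marks an 'opponent'. This is the TracIn-style dot-product approximation.
    """
    g_test = loss_grad(w, x_test, y_test)
    return np.array([loss_grad(w, X_train[i], y_train[i]) @ g_test
                     for i in range(len(X_train))])

# Toy data: one training point resembling the test point, one opposing it.
X_train = np.array([[1.0, 0.0], [-1.0, 0.0]])
y_train = np.array([1.0, 1.0])
w = np.zeros(2)  # current model weights (attribution is w.r.t. this checkpoint)
x_test, y_test = np.array([1.0, 0.0]), 1.0

scores = attribute(w, X_train, y_train, x_test, y_test)
print(scores)  # → [ 0.25 -0.25]
```

Real systems replace the toy gradient with per-example gradients of a large model (often compressed or approximated, e.g. via influence functions or random projections), but the ranking-by-alignment logic is the same.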

Theory of Change

By attributing harmful, biased, or misaligned behaviors to specific training examples, researchers can audit proprietary models, debug training data, and enable targeted data deletion or unlearning.

Broad Approach

behavioural

Target Case

average

Orthodox Problems Addressed

Goals misgeneralize out of distribution, Value is fragile and hard to specify

Key People

Roger Grosse, Philipp Alexander Kreer, Jin Hwa Lee, Matthew Smith, Abhilasha Ravichander, Andrew Wang, Jiacheng Liu, Jiaqi Ma, Junwei Deng, Yijun Pan, Daniel Murfet, Jesse Hoogland

Funding

Various academic groups

Estimated FTEs: 30-60

See Also

data-quality-for-alignment

Outputs in 2025

12 item(s) in the review. See the wiki/summaries/ entries with frontmatter agenda: data-attribution (these were generated alongside this file from the same export).

Sources cited

Primary URLs harvested from this page’s summary references. Auto-generated by scripts/backfill_citations.py; edit by re-running, not by hand.