Pragmatic interpretability — SR2025 Agenda Snapshot
One-sentence summary: Directly tackling concrete, safety-critical problems on the path to AGI by using lightweight interpretability tools (like steering and probing) and empirical feedback from proxy tasks, rather than pursuing complete mechanistic reverse-engineering.
Theory of Change
By applying interpretability skills to concrete problems, researchers can rapidly develop monitoring and control tools (e.g., steering vectors or probes) with immediate, measurable impact on real-world safety problems such as hidden goals or emergent misalignment.
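To make the two tools named above concrete, here is a minimal sketch of a contrastive steering vector and a linear probe. This is illustrative only, not the agenda's own code: the model (`gpt2`), layer index, steering strength, and toy prompt sets are all assumptions chosen for brevity.

```python
# Minimal sketch: (1) build a steering vector from contrastive prompts,
# (2) apply it with a forward hook during generation, (3) train a linear
# probe on the same activations. Model, layer, and prompts are stand-ins.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "gpt2"   # stand-in; any causal LM with .transformer.h blocks
LAYER = 6        # hypothetical residual-stream layer to steer/probe

tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)
model.eval()

def resid_at_layer(prompts, layer=LAYER):
    """Last-token residual-stream activations after block `layer`."""
    acts = []
    for p in prompts:
        ids = tok(p, return_tensors="pt")
        with torch.no_grad():
            out = model(**ids, output_hidden_states=True)
        # hidden_states[0] is embeddings, so block `layer` outputs index layer+1
        acts.append(out.hidden_states[layer + 1][0, -1])
    return torch.stack(acts)  # (n_prompts, d_model)

# (1) Steering vector: difference of mean activations over contrast sets.
pos = ["I love this, it is wonderful.", "What a delightful day."]
neg = ["I hate this, it is awful.", "What a miserable day."]
steer = resid_at_layer(pos).mean(0) - resid_at_layer(neg).mean(0)

# (2) Add the vector to the block's output during generation.
def add_steering(module, inputs, output, alpha=4.0):
    hidden = output[0] + alpha * steer      # broadcasts over (batch, seq)
    return (hidden,) + output[1:]

handle = model.transformer.h[LAYER].register_forward_hook(add_steering)
ids = tok("The movie was", return_tensors="pt")
print(tok.decode(model.generate(**ids, max_new_tokens=10)[0]))
handle.remove()  # restore unsteered behavior

# (3) Linear probe: logistic regression on the same activations.
X = torch.cat([resid_at_layer(pos), resid_at_layer(neg)])
y = torch.tensor([1.0] * len(pos) + [0.0] * len(neg))
probe = torch.nn.Linear(X.shape[1], 1)
opt = torch.optim.Adam(probe.parameters(), lr=1e-2)
for _ in range(200):
    loss = torch.nn.functional.binary_cross_entropy_with_logits(
        probe(X).squeeze(-1), y)
    opt.zero_grad(); loss.backward(); opt.step()
acc = ((probe(X).squeeze(-1) > 0) == y.bool()).float().mean().item()
print("probe accuracy:", acc)
```

In practice the prompt sets would be proxy-task datasets (e.g., honest vs. deceptive completions), the probe would be validated on held-out data, and the steering strength tuned against a behavioral benchmark; the sketch only shows the shape of the method.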
Broad Approach
cognitive
Target Case
mixed
Orthodox Problems Addressed
Superintelligence can fool human supervisors, Goals misgeneralize out of distribution
Key People
Lee Sharkey, Dario Amodei, David Chalmers, Been Kim, Neel Nanda, David D. Baek, Lauren Greenspan, Dmitry Vaintrob, Sam Marks, Jacob Pfau
Funding
Google DeepMind, Anthropic, various academic groups
Estimated FTEs: 30-60
See Also
reverse-engineering, Concept-based interpretability
Outputs in 2025
3 items in the review. See the `wiki/summaries/` entries with frontmatter `agenda: pragmatic-interpretability` (generated alongside this file from the same export).
Source
- Row in `shallow-review-2025/agendas.csv` (name = Pragmatic interpretability) — Shallow Review of Technical AI Safety 2025.
Related Pages
- ai-safety
- reverse-engineering
- activation-engineering
- causal-abstractions
- data-attribution
- extracting-latent-knowledge
- human-inductive-biases
- learning-dynamics-and-developmental-interpretability
- lie-and-deception-detectors
- model-diffing
- monitoring-concepts
- other-interpretability
- representation-structure-and-geometry
- sparse-coding
- emergent-misalignment
Sources cited
Primary URLs harvested from this page’s summary references. Auto-generated by scripts/backfill_citations.py; edit by re-running, not by hand.
- Summary: AI Safety (Wikipedia) — referenced as [[ai-safety]]