Pragmatic interpretability — SR2025 Agenda Snapshot

One-sentence summary: Directly tackling concrete, safety-critical problems on the path to AGI by using lightweight interpretability tools (like steering and probing) and empirical feedback from proxy tasks, rather than pursuing complete mechanistic reverse-engineering.
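
For concreteness, "probing" here usually means fitting a simple classifier on cached model activations to detect a concept. The sketch below is illustrative only, assuming random stand-in data: the array shapes, the concept labels, and the use of scikit-learn's LogisticRegression are assumptions for the example, not a reference to any particular codebase in this agenda.

```python
# Minimal linear-probe sketch: train a logistic-regression classifier on
# a model's cached activations to detect whether a concept is present.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
d_model = 512
acts = rng.normal(size=(1000, d_model))      # stand-in for cached residual-stream activations
labels = rng.integers(0, 2, size=1000)       # stand-in for per-example concept labels

X_train, X_test, y_train, y_test = train_test_split(acts, labels, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out probe accuracy:", probe.score(X_test, y_test))
```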

Theory of Change

By applying interpretability skills to concrete problems, researchers can rapidly develop monitoring and control tools (e.g., steering vectors or probes) that have an immediate, measurable impact on real-world safety issues such as detecting hidden goals or emergent misalignment.
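
As a rough illustration of the "control" side, the sketch below adds a fixed direction to a layer's activations at inference time via a PyTorch forward hook. The toy model, the random steering_vector, and the scale value are stand-ins chosen for the example; in practice the direction is typically derived from contrasting mean activations on labelled prompt sets rather than sampled at random.

```python
# Minimal activation-steering sketch: shift one layer's output along a fixed
# direction during the forward pass, leaving the model's weights untouched.
import torch
import torch.nn as nn

d_model = 64
model = nn.Sequential(nn.Linear(d_model, d_model), nn.ReLU(), nn.Linear(d_model, d_model))

steering_vector = torch.randn(d_model)                 # stand-in for a contrast-derived direction
steering_vector = steering_vector / steering_vector.norm()

def add_steering(module, inputs, output, scale=4.0):
    # Returning a tensor from a forward hook replaces the module's output,
    # so downstream layers see the steered activations.
    return output + scale * steering_vector

handle = model[0].register_forward_hook(add_steering)
x = torch.randn(8, d_model)
steered = model(x)
handle.remove()
unsteered = model(x)
print("mean output shift:", (steered - unsteered).abs().mean().item())
```

Using a hook rather than editing weights keeps the intervention cheap and reversible, which is the point of these lightweight tools: the same handle can be attached for monitoring or removed to restore default behaviour.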

Broad Approach

cognitive

Target Case

mixed

Orthodox Problems Addressed

Superintelligence can fool human supervisors, Goals misgeneralize out of distribution

Key People

Lee Sharkey, Dario Amodei, David Chalmers, Been Kim, Neel Nanda, David D. Baek, Lauren Greenspan, Dmitry Vaintrob, Sam Marks, Jacob Pfau

Funding

Google DeepMind, Anthropic, various academic groups

Estimated FTEs: 30-60

See Also

reverse-engineering, Concept-based interpretability

Outputs in 2025

3 items in the review. See the wiki/summaries/ entries with frontmatter agenda: pragmatic-interpretability (these were generated alongside this file from the same export).
