Monitoring concepts — SR2025 Agenda Snapshot
One-sentence summary: Identifies directions or subspaces in a model’s latent state that correspond to high-level concepts (such as refusal, deception, or planning) and uses them to audit models for misalignment, monitor them at runtime, suppress evaluation awareness, debug model failures, etc.
Theory of Change
By mapping internal activations to human-interpretable concepts, we can detect dangerous capabilities or deceptive alignment directly in the mind of the model, even when its overt behavior is perfectly safe. Computationally cheap monitors can then be deployed to flag some hidden misalignment in deployed systems.
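The cheap-monitor idea above can be sketched with a standard difference-of-means linear probe: estimate a "concept direction" from activations on concept-positive vs. concept-negative prompts, then flag inputs whose projection onto that direction exceeds a calibrated threshold. This is an illustrative sketch only; the activations here are synthetic stand-ins (in practice they would be residual-stream activations captured from a model), and all names and the Gaussian data model are assumptions, not part of the source.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 64  # hidden dimension (illustrative; real models are much larger)

# Synthetic stand-ins for layer activations. A hypothetical ground-truth
# concept direction shifts the "positive" (e.g. refusal-flavoured) samples.
true_dir = rng.normal(size=d)
true_dir /= np.linalg.norm(true_dir)
pos = rng.normal(size=(200, d)) + 4.0 * true_dir  # concept-positive prompts
neg = rng.normal(size=(200, d))                   # neutral prompts

# Difference-of-means estimate of the concept direction, a common
# linear-probe baseline for this kind of monitoring.
direction = pos.mean(axis=0) - neg.mean(axis=0)
direction /= np.linalg.norm(direction)

def concept_score(activation: np.ndarray) -> float:
    """Projection onto the concept direction; higher = more concept-like."""
    return float(activation @ direction)

# Runtime monitor: flag any activation whose projection exceeds a threshold
# calibrated to a target false-positive rate on held-out negatives.
threshold = float(np.quantile(neg @ direction, 0.99))
flagged = [concept_score(a) > threshold for a in pos[:5]]
```

The monitor is a single dot product per token or prompt, which is what makes this style of concept monitoring cheap enough to run on every deployed forward pass.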
Broad Approach
cognitive
Target Case
pessimistic
Orthodox Problems Addressed
Value is fragile and hard to specify, Goals misgeneralize out of distribution, A boxed AGI might exfiltrate itself by steganography, spearphishing
Key People
Daniel Beaglehole, Adityanarayanan Radhakrishnan, Enric Boix-Adserà, Tom Wollschläger, Anna Soligo, Jack Lindsey, Brian Christian, Ling Hu, Nicholas Goldowsky-Dill, Neel Nanda
Funding
Coefficient Giving, Anthropic, various academic groups
Estimated FTEs: 50-100
Critiques
Exploring the generalization of LLM truth directions on conversational formats, Understanding (Un)Reliability of Steering Vectors in Language Models
See Also
Pragmatic interp, reverse-engineering, sparse-coding, model-diffing
Outputs in 2025
11 item(s) in the review. See the wiki/summaries/ entries with frontmatter agenda: monitoring-concepts (these were generated alongside this file from the same export).
Source
- Row in `shallow-review-2025/agendas.csv` (name = Monitoring concepts) — Shallow Review of Technical AI Safety 2025.
Related Pages
- ai-safety
- model-diffing
- reverse-engineering
- sparse-coding
- activation-engineering
- causal-abstractions
- data-attribution
- extracting-latent-knowledge
- human-inductive-biases
- learning-dynamics-and-developmental-interpretability
- lie-and-deception-detectors
- other-interpretability
- pragmatic-interpretability
- representation-structure-and-geometry
Sources cited
Primary URLs harvested from this page’s summary references. Auto-generated by scripts/backfill_citations.py; edit by re-running, not by hand.
- Summary: AI Safety (Wikipedia) — referenced as [[ai-safety]]