Learning dynamics and developmental interpretability — SR2025 Agenda Snapshot
One-sentence summary: Builds tools for detecting, locating, and interpreting the key structural shifts, phase transitions, and emergent phenomena (such as grokking or deception) that occur during a model’s training and during in-context learning.
Theory of Change
Structures forming in neural networks leave identifiable traces that can be interpreted (e.g., using concepts from Singular Learning Theory); by catching and analyzing these developmental moments, researchers can automate interpretability, predict when dangerous capabilities will emerge, and intervene as early as possible to prevent deception or misaligned values.
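One quantity often used for this kind of detection is the local learning coefficient (LLC) from Singular Learning Theory, estimated around a training checkpoint by sampling nearby weights with SGLD and comparing their average loss to the checkpoint’s loss. The sketch below is a minimal, self-contained illustration of that estimator; the function name, hyperparameters, and helper structure are assumptions made for this example and are not the devinterp library’s API.

```python
# Hedged sketch: SGLD-based estimate of a local learning coefficient (LLC)
# around a trained checkpoint, in the spirit of SLT-based developmental
# interpretability. Names and hyperparameters are illustrative assumptions.
import copy
import math

import torch


def estimate_llc(model, data_loader, loss_fn, n_samples,
                 sgld_steps=200, lr=1e-4, localization=100.0, device="cpu"):
    """Rough estimate: lambda_hat = n * beta * (E_sgld[L] - L(w*))."""
    model = copy.deepcopy(model).to(device)
    anchor = [p.detach().clone() for p in model.parameters()]  # checkpoint w*
    beta = 1.0 / math.log(n_samples)  # inverse temperature of tempered posterior

    def mean_loss():
        # Average loss over one pass of the loader at the current parameters.
        total, batches = 0.0, 0
        with torch.no_grad():
            for x, y in data_loader:
                total += loss_fn(model(x.to(device)), y.to(device)).item()
                batches += 1
        return total / max(batches, 1)

    init_loss = mean_loss()  # L(w*)

    # SGLD kept near w* by a quadratic localization term; each step uses one
    # minibatch gradient plus Gaussian noise.
    sampled_losses = []
    data_iter = iter(data_loader)
    for _ in range(sgld_steps):
        try:
            x, y = next(data_iter)
        except StopIteration:
            data_iter = iter(data_loader)
            x, y = next(data_iter)
        loss = loss_fn(model(x.to(device)), y.to(device))
        model.zero_grad()
        loss.backward()
        with torch.no_grad():
            for p, a in zip(model.parameters(), anchor):
                if p.grad is None:
                    continue
                drift = n_samples * beta * p.grad + localization * (p - a)
                p.add_(-0.5 * lr * drift)
                p.add_(torch.randn_like(p) * math.sqrt(lr))
        sampled_losses.append(loss.item())

    expected_loss = sum(sampled_losses) / len(sampled_losses)
    return n_samples * beta * (expected_loss - init_loss)
```

Tracked across a series of checkpoints, sharp changes in an estimate like this are the kind of signal the agenda uses to flag candidate developmental transitions for closer interpretability work.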
Broad Approach
cognitivist science
Target Case
worst-case
Orthodox Problems Addressed
Goals misgeneralize out of distribution
Key People
Timaeus, Jesse Hoogland, George Wang, Daniel Murfet, Stan van Wingerden, Alexander Gietelink Oldenziel
Funding
Manifund, Survival and Flourishing Fund, EA Funds
Estimated FTEs: 10-50
Critiques
See Also
reverse-engineering, sparse-coding, ICL transience
Outputs in 2025
14 items in the review. See the wiki/summaries/ entries with frontmatter agenda: learning-dynamics-and-developmental-interpretability (generated alongside this file from the same export).
Source
- Row in shallow-review-2025/agendas.csv (name = Learning dynamics and developmental interpretability) — Shallow Review of Technical AI Safety 2025.
Related Pages
- ai-safety
- reverse-engineering
- sparse-coding
- activation-engineering
- causal-abstractions
- data-attribution
- extracting-latent-knowledge
- human-inductive-biases
- lie-and-deception-detectors
- model-diffing
- monitoring-concepts
- other-interpretability
- pragmatic-interpretability
- representation-structure-and-geometry
Sources cited
Primary URLs harvested from this page’s summary references. Auto-generated by scripts/backfill_citations.py; edit by re-running, not by hand.
- Summary: AI Safety (Wikipedia) — referenced as [[ai-safety]]