Learning dynamics and developmental interpretability — SR2025 Agenda Snapshot

One-sentence summary: Builds tools for detecting, locating, and interpreting key structural shifts, phase transitions, and emergent phenomena (like grokking or deception) that occur during a model’s training and in-context learning phases.

Theory of Change

Structures forming in neural networks leave identifiable traces that can be interpreted, e.g. using concepts from Singular Learning Theory. By catching and analyzing these developmental moments, researchers can automate interpretability, predict when dangerous capabilities emerge, and intervene as early as possible to prevent deception or misaligned values.
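As a toy illustration of the "detect developmental moments" idea (this is a minimal sketch, not Timaeus's actual tooling or a Singular Learning Theory method): one simple proxy for a grokking-style phase transition is the step at which a smoothed training-loss curve drops most steeply. The function name, window size, and synthetic curve below are all assumptions for illustration.

```python
import numpy as np

def find_transition(losses, window=5):
    """Return the step index with the steepest smoothed decrease in `losses`."""
    losses = np.asarray(losses, dtype=float)
    # Moving-average smoothing to suppress step-to-step noise.
    kernel = np.ones(window) / window
    smooth = np.convolve(losses, kernel, mode="valid")
    # Discrete derivative; the most negative value marks the sharpest drop.
    diffs = np.diff(smooth)
    # Shift back from smoothed coordinates to the raw-step index.
    return int(np.argmin(diffs)) + window // 2

# Synthetic curve: a flat memorization plateau, then a sharp drop at step 300.
steps = np.arange(600)
loss = np.where(steps < 300, 2.0, 2.0 * np.exp(-(steps - 300) / 20.0))
loss += 0.01 * np.random.default_rng(0).normal(size=steps.size)

print(find_transition(loss))  # an index near 300
```

Real developmental-interpretability work uses richer observables than raw loss (e.g. local learning coefficient estimates); this sketch only shows the change-point-detection framing.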

Broad Approach

cognitivist science

Target Case

worst-case

Orthodox Problems Addressed

Goals misgeneralize out of distribution

Key People

Timaeus, Jesse Hoogland, George Wang, Daniel Murfet, Stan van Wingerden, Alexander Gietelink Oldenziel

Funding

Manifund, Survival and Flourishing Fund, EA Funds

Estimated FTEs: 10-50

Critiques

Vaintrob, Joar Skalse (2023)

See Also

reverse-engineering, sparse-coding, ICL transience

Outputs in 2025

14 items in the review. See the wiki/summaries/ entries with frontmatter agenda: learning-dynamics-and-developmental-interpretability (these were generated alongside this file from the same export).

Sources cited

Primary URLs harvested from this page’s summary references. Auto-generated by scripts/backfill_citations.py; edit by re-running, not by hand.