LLM introspection training — SR2025 Agenda Snapshot

One-sentence summary: Train LLMs to predict the outputs of high-quality whitebox methods, inducing general self-explanation skills that draw on the model’s own ‘introspective’ access
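The training setup in the summary can be sketched as supervised data construction: run a whitebox method over model internals, serialize its output, and use that as the target for a self-explanation prompt. A minimal toy sketch, where the probe, its weights, and all activations are hypothetical stand-ins (a real pipeline would use actual hidden states and interpretability tooling):

```python
# Toy sketch: build (prompt, target) pairs where the target is the output
# of a "whitebox" method -- here a hypothetical linear probe over fake
# activations -- so an LLM can later be fine-tuned to predict it.

def whitebox_probe(activations):
    """Stand-in for a whitebox interpretability method: a fixed linear
    probe scoring how 'refusal-like' an activation vector is."""
    weights = [0.8, -0.3, 0.5]  # hypothetical probe weights
    score = sum(w * a for w, a in zip(weights, activations))
    return {"concept": "refusal", "score": round(score, 3)}

def make_training_pair(user_input, activations):
    """Serialize the probe output as the supervision target for a
    self-explanation prompt about the model's own internals."""
    probe_out = whitebox_probe(activations)
    prompt = (
        f"Input: {user_input}\n"
        "Introspect: how strongly is your internal 'refusal' feature active?"
    )
    target = f"concept={probe_out['concept']} score={probe_out['score']}"
    return {"prompt": prompt, "target": target}

# Fake activations in place of real hidden states.
pair = make_training_pair("How do I pick a lock?", [0.9, 0.1, 0.7])
print(pair["target"])  # -> concept=refusal score=1.04
```

Fine-tuning on many such pairs is what would, per the agenda, distill the whitebox method’s signal into a generally usable self-explanation skill.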

Theory of Change

Use the resulting LLMs as a powerful form of dimensionality reduction, explaining internals in a way distinct from both interpretability methods and CoT. Distilling self-explanation into the model should make the skill scalable, since advances in self-explanation will feed off advances in general intelligence.

Broad Approach

Cognitivist science

Target Case

Mixed

Orthodox Problems Addressed

Goals misgeneralize out of distribution, Superintelligence can fool human supervisors, Superintelligence can hack software supervisors

Key People

Belinda Z. Li, Zifan Carl Guo, Vincent Huang, Jacob Steinhardt, Jacob Andreas, Jack Lindsey

Funding

Schmidt Sciences, Halcyon Futures, John Schulman, Wojciech Zaremba

Estimated FTEs: 2-20

See Also

Transluce, Anthropic

Outputs in 2025

2 items in the review. See the wiki/summaries/ entries with frontmatter agenda: llm-introspection-training (these were generated alongside this file from the same export).

Source

Sources cited

Primary URLs harvested from this page’s summary references. Auto-generated by scripts/backfill_citations.py; edit by re-running, not by hand.