Iterative alignment at pretrain-time — SR2025 Agenda Snapshot

One-sentence summary: Steer model weights toward aligned behaviour during pretraining itself, rather than relying only on post-training fine-tuning.
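
As a concrete illustration of what steering weights at pretrain-time can mean in practice, here is a minimal sketch of conditional pretraining in the spirit of Korbak 2023 (listed under See Also): documents are tagged with a control token according to a preference score, the language model is trained on the tagged text, and generation is conditioned on the "good" token. The model name, the control tokens, and the score_document stub are illustrative assumptions, not details taken from this snapshot.

```python
# Hedged sketch of conditional pretraining ("pretraining with human
# preferences"). All concrete choices below -- gpt2, the <|good|>/<|bad|>
# tokens, and the toy score_document() -- are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

GOOD, BAD = "<|good|>", "<|bad|>"

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.add_special_tokens({"additional_special_tokens": [GOOD, BAD]})
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.resize_token_embeddings(len(tokenizer))
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)


def score_document(text: str) -> float:
    """Hypothetical preference score in [0, 1]; a real pipeline would
    call a trained reward/preference model here."""
    return 0.0 if "insult" in text else 1.0


def tag(text: str, threshold: float = 0.5) -> str:
    # Prepend a control token so the LM learns, during pretraining, to
    # associate it with preferred vs. dispreferred text.
    return (GOOD if score_document(text) >= threshold else BAD) + text


corpus = [
    "A clear explanation of photosynthesis for a curious reader.",
    "A long insult aimed at a stranger online.",
]

model.train()
for doc in corpus:
    batch = tokenizer(tag(doc), return_tensors="pt", truncation=True)
    loss = model(**batch, labels=batch["input_ids"]).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

# At inference time, condition on the "good" token to steer generation.
model.eval()
prompt = tokenizer(GOOD + "Explain why the sky is blue:", return_tensors="pt")
out = model.generate(**prompt, max_new_tokens=30,
                     pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(out[0]))
```

The design choice this sketch highlights is that the preference signal enters the pretraining loss directly (via the tagged corpus) rather than being applied afterwards, which is the distinguishing feature of pretrain-time alignment over post-hoc fine-tuning.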

Theory of Change

“LLMs don’t seem very dangerous and might scale to AGI; things are generally smooth; relevant capabilities are harder than alignment; assume no mesa-optimisers; assume that zero-shot deception is hard; assume a fundamentally humanish ontology is learned; assume no simulated agents; assume that noise in the data means that human preferences are not ruled out; assume that alignment is a superficial feature; assume that tuning for what we want will also get us to avoid what we don’t want. Maybe assume that thoughts are translucent.”

Broad Approach

engineering

Target Case

average

Key People

Jan Leike, Stuart Armstrong, Cyrus Cousins, Oliver Daniels

Funding

most of the industry

Critiques

Bellot, STACK, Dung, Gaikwad, Hubinger

See Also

prosaic alignment, incrementalism, alignment-by-default, Korbak 2023

Outputs in 2025

2 items in the review. See the wiki/summaries/ entries with frontmatter agenda: iterative-alignment-at-pretrain-time (these were generated alongside this file from the same export).

Source

Sources cited

Primary URLs harvested from this page’s summary references. Auto-generated by scripts/backfill_citations.py; edit by re-running the script, not by hand.