Model specs and constitutions — SR2025 Agenda Snapshot

One-sentence summary: Write detailed, natural language descriptions of values and rules for models to follow, then instill these values and rules into models via techniques like Constitutional AI or deliberative alignment.

Theory of Change

Model specs and constitutions serve three purposes. First, they provide a clear standard of behavior that can be used to train models to value what we want them to value. Second, they serve as something closer to a ground-truth standard for evaluating the degree of misalignment, on a spectrum from "models straightforwardly obey the spec" to "models flagrantly disobey the spec"; combining scalable stress-testing with reinforcement of obedience can then iteratively reduce the risk of misalignment. Third, their usefulness compounds as models' instruction-following capabilities improve.
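The first purpose, training models against a written standard, can be illustrated with the critique-and-revision loop at the heart of Constitutional AI. The sketch below is a minimal, hypothetical version: call_model is a stand-in for any chat-model API, and the constitution principles are illustrative examples, not an actual model spec.

```python
# Minimal sketch of a Constitutional AI-style critique-and-revision loop.
# call_model is a hypothetical placeholder for a real LLM API call;
# the principles below are illustrative, not drawn from any published spec.

CONSTITUTION = [
    "Choose the response that is most helpful, honest, and harmless.",
    "Avoid assisting with clearly dangerous or illegal activities.",
]

def call_model(prompt: str) -> str:
    # Placeholder: a real implementation would query a language model here.
    return f"[model output for: {prompt[:40]}]"

def critique_and_revise(prompt: str) -> str:
    """Generate a response, then critique and revise it once per principle."""
    response = call_model(prompt)
    for principle in CONSTITUTION:
        critique = call_model(
            f"Critique the following response against the principle "
            f"'{principle}':\n{response}"
        )
        response = call_model(
            f"Revise the response to address this critique:\n"
            f"{critique}\nOriginal response:\n{response}"
        )
    return response
```

In the full technique, the revised responses become preference or fine-tuning data, so the written principles are distilled into the model's behavior rather than applied at inference time.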

Broad Approach

engineering

Target Case

average

Orthodox Problems Addressed

Value is fragile and hard to specify

Key People

Amanda Askell, Joe Carlsmith

Funding

major funders include Anthropic and OpenAI (each funding this work internally)

Critiques

LLM AGI may reason about its goals and discover misalignments by default, On OpenAI’s Model Spec 2.0, Giving AIs safe motivations (esp. Sections 4.3-4.5), On Deliberative Alignment

See Also

Iterative alignment, Model psychology

Outputs in 2025

11 items in the review. See the wiki/summaries/ entries with frontmatter agenda: model-specs-and-constitutions; these were generated alongside this file from the same export.

Sources cited

Primary URLs harvested from this page’s summary references. Auto-generated by scripts/backfill_citations.py; edit by re-running, not by hand.