Model specs and constitutions — SR2025 Agenda Snapshot
One-sentence summary: Write detailed, natural language descriptions of values and rules for models to follow, then instill these values and rules into models via techniques like Constitutional AI or deliberative alignment.
Theory of Change
Model specs and constitutions serve three purposes. First, they provide a clear standard of behavior which can be used to train models to value what we want them to value. Second, they serve as something closer to a ground truth standard for evaluating the degree of misalignment ranging from “models straightforwardly obey the spec” to “models flagrantly disobey the spec”. A combination of scalable stress-testing and reinforcement for obedience can be used to iteratively reduce the risk of misalignment. Third, they get more useful as models’ instruction-following capability improves.
Broad Approach
engineering
Target Case
average
Orthodox Problems Addressed
Value is fragile and hard to specify
Key People
Amanda Askell, Joe Carlsmith
Funding
major funders include Anthropic and OpenAI (internally)
Critiques
LLM AGI may reason about its goals and discover misalignments by default, On OpenAI’s Model Spec 2.0, Giving AIs safe motivations (esp. Sections 4.3-4.5), On Deliberative Alignment
See Also
Iterative alignment, Model psychology
Outputs in 2025
11 item(s) in the review. See the wiki/summaries/ entries with frontmatter agenda: model-specs-and-constitutions (these were generated alongside this file from the same export).
Source
- Row in
shallow-review-2025/agendas.csv(name = Model specs and constitutions) — Shallow Review of Technical AI Safety 2025.
Related Pages
- ai-safety
- ai-safety
- assistance-games-assistive-agents
- black-box-make-ai-solve-it
- capability-removal-unlearning
- chain-of-thought-monitoring
- character-training-and-persona-steering
- control
- data-filtering
- data-poisoning-defense
- data-quality-for-alignment
- emergent-misalignment
- harm-reduction-for-open-weights
- hyperstition-studies
- inference-time-in-context-learning
- inference-time-steering
- inoculation-prompting
- iterative-alignment-at-post-train-time
- iterative-alignment-at-pretrain-time
- mild-optimisation
- model-psychopathology
- model-values-model-preferences
- rl-safety
- safeguards-inference-time-auxiliaries
- synthetic-data-for-alignment
- the-neglected-approaches-approach
Sources cited
Primary URLs harvested from this page’s summary references. Auto-generated by scripts/backfill_citations.py; edit by re-running, not by hand.
- Summary: AI Safety (Wikipedia) — referenced as
[[ai-safety]]