Inference-time: Steering — SR2025 Agenda Snapshot
One-sentence summary: Steer an LLM's behaviour at inference time by manipulating its internal representations or token probabilities, without touching the weights.
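For concreteness, below is a minimal sketch of one representative technique in this family, activation addition: build a steering vector from the difference between two contrasting prompts' hidden states, then add it into the residual stream via a forward hook during generation. The model (`gpt2`), layer index, contrast prompts, and scaling factor are all illustrative assumptions, not specifics from the review.

```python
# Sketch of inference-time activation steering (activation addition).
# Weights are never modified; the hook is removed after generation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # assumption: any causal LM with transformer.h blocks
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

LAYER = 6  # assumption: which residual-stream layer to steer


def hidden_at_layer(prompt: str) -> torch.Tensor:
    """Mean hidden state of `prompt` at LAYER's output.

    hidden_states[0] is the embedding output, so the output of block
    LAYER lives at index LAYER + 1.
    """
    ids = tok(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        hs = model(ids, output_hidden_states=True).hidden_states
    return hs[LAYER + 1].mean(dim=1).squeeze(0)


# Contrastive steering vector: direction from one concept toward another.
steer = hidden_at_layer("I love you") - hidden_at_layer("I hate you")


def hook(module, inputs, output):
    # GPT-2 blocks return a tuple; output[0] holds the hidden states.
    hidden = output[0] + 4.0 * steer / steer.norm()  # 4.0: illustrative strength
    return (hidden,) + output[1:]


handle = model.transformer.h[LAYER].register_forward_hook(hook)
try:
    ids = tok("I think that you", return_tensors="pt").input_ids
    out = model.generate(ids, max_new_tokens=20, do_sample=False)
    print(tok.decode(out[0], skip_special_tokens=True))
finally:
    handle.remove()  # detach the hook so the model is back to stock behaviour
```

The hook-and-remove pattern keeps the intervention strictly at inference time; the token-probability variant of the same idea is usually implemented by biasing logits in the sampling loop rather than editing hidden states.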
Theory of Change
“LLMs don’t seem very dangerous and might scale to AGI, things are generally smooth, relevant capabilities are harder than alignment, assume no mesaoptimisers, assume that zero-shot deception is hard, assume a fundamentally humanish ontology is learned, assume no simulated agents, assume that noise in the data means that human preferences are not ruled out, assume that alignment is a superficial feature, assume that tuning for what we want will also get us to avoid what we don’t want. Maybe assume that thoughts are translucent.”
Broad Approach
engineering
Target Case
average
Key People
Taylor Sorensen, Constanza Fierro, Kshitish Ghate, Arthur Vogels
Critiques
Alfour, STACK, Dung, Gölz, Gaikwad, Hubinger
See Also
activation-engineering, character-training-and-persona-steering, safeguards-inference-time-auxiliaries
Outputs in 2025
4 items in the review. See the wiki/summaries/ entries with frontmatter `agenda: inference-time-steering` (generated alongside this file from the same export).
Source
- Row in `shallow-review-2025/agendas.csv` (name = Inference-time: Steering) — Shallow Review of Technical AI Safety 2025.
Related Pages
- ai-safety
- activation-engineering
- character-training-and-persona-steering
- safeguards-inference-time-auxiliaries
- assistance-games-assistive-agents
- black-box-make-ai-solve-it
- capability-removal-unlearning
- chain-of-thought-monitoring
- control
- data-filtering
- data-poisoning-defense
- data-quality-for-alignment
- emergent-misalignment
- harm-reduction-for-open-weights
- hyperstition-studies
- inference-time-in-context-learning
- inoculation-prompting
- iterative-alignment-at-post-train-time
- iterative-alignment-at-pretrain-time
- mild-optimisation
- model-psychopathology
- model-specs-and-constitutions
- model-values-model-preferences
- rl-safety
- synthetic-data-for-alignment
- the-neglected-approaches-approach
Sources cited
Primary URLs harvested from this page's summary references. Auto-generated by scripts/backfill_citations.py; update by re-running the script, not by hand.
- Summary: AI Safety (Wikipedia) — referenced as [[ai-safety]]