Safeguards (inference-time auxiliaries) — SR2025 Agenda Snapshot
One-sentence summary: Layers of inference-time defenses, such as classifiers, monitors, and rapid-response protocols, to detect and block jailbreaks, prompt injections, and other harmful model behaviors.
Theory of Change
By building a bunch of scalable and hardened things on top of an unsafe model, we can defend against known and unknown attacks, monitor for misuse, and prevent models from causing harm, even if the core model has vulnerabilities.
Broad Approach
engineering
Target Case
average
Orthodox Problems Addressed
Superintelligence can fool human supervisors, A boxed AGI might exfiltrate itself by steganography, spearphishing
Key People
Mrinank Sharma, Meg Tong, Jesse Mu, Alwin Peng, Julian Michael, Henry Sleight, Theodore Sumers, Raj Agarwal, Nathan Bailey, Edoardo Debenedetti, Ilia Shumailov, Tianqi Fan, Sahil Verma, Keegan Hines, Jeff Bilmes
Funding
most of the big labs
Estimated FTEs: 100+
Critiques
Obfuscated Activations Bypass LLM Latent-Space Defenses
See Also
various-redteams, Iterative alignment
Outputs in 2025
6 item(s) in the review. See the wiki/summaries/ entries with frontmatter agenda: safeguards-inference-time-auxiliaries (these were generated alongside this file from the same export).
Source
- Row in
shallow-review-2025/agendas.csv(name = Safeguards (inference-time auxiliaries)) — Shallow Review of Technical AI Safety 2025.
Related Pages
- ai-safety
- ai-safety
- various-redteams
- assistance-games-assistive-agents
- black-box-make-ai-solve-it
- capability-removal-unlearning
- chain-of-thought-monitoring
- character-training-and-persona-steering
- control
- data-filtering
- data-poisoning-defense
- data-quality-for-alignment
- emergent-misalignment
- harm-reduction-for-open-weights
- hyperstition-studies
- inference-time-in-context-learning
- inference-time-steering
- inoculation-prompting
- iterative-alignment-at-post-train-time
- iterative-alignment-at-pretrain-time
- mild-optimisation
- model-psychopathology
- model-specs-and-constitutions
- model-values-model-preferences
- rl-safety
- synthetic-data-for-alignment
- the-neglected-approaches-approach
Sources cited
Primary URLs harvested from this page’s summary references. Auto-generated by scripts/backfill_citations.py; edit by re-running, not by hand.
- Summary: AI Safety (Wikipedia) — referenced as
[[ai-safety]]