Safeguards (inference-time auxiliaries) — SR2025 Agenda Snapshot

One-sentence summary: Layers of inference-time defenses, such as classifiers, monitors, and rapid-response protocols, to detect and block jailbreaks, prompt injections, and other harmful model behaviors.

Theory of Change

By layering scalable, hardened defenses (classifiers, monitors, rapid-response protocols) on top of a potentially unsafe model, we can defend against known and unknown attacks, monitor for misuse, and prevent models from causing harm even if the core model has vulnerabilities.
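
A minimal sketch of this layered pattern follows; the function names, heuristics, and blocking logic are hypothetical placeholders standing in for real classifiers, not any lab's deployed system:

```python
# Sketch of a defense-in-depth inference pipeline: an input classifier screens
# prompts before generation, an output monitor screens completions after.
# Both classifiers here are toy keyword heuristics (assumptions for illustration).

from dataclasses import dataclass


@dataclass
class Verdict:
    allowed: bool
    reason: str = ""


def input_classifier(prompt: str) -> Verdict:
    # Hypothetical jailbreak / prompt-injection detector run before the model.
    suspicious = "ignore previous instructions" in prompt.lower()
    return Verdict(not suspicious, "possible prompt injection" if suspicious else "")


def output_monitor(completion: str) -> Verdict:
    # Hypothetical harmfulness classifier run on the model's output.
    flagged = "exploit payload" in completion.lower()
    return Verdict(not flagged, "flagged output" if flagged else "")


def guarded_generate(prompt: str, model) -> str:
    # Any single layer can block, so the core model need not be robust on its own.
    pre = input_classifier(prompt)
    if not pre.allowed:
        return f"[blocked before generation: {pre.reason}]"
    completion = model(prompt)
    post = output_monitor(completion)
    if not post.allowed:
        return f"[blocked after generation: {post.reason}]"
    return completion


if __name__ == "__main__":
    echo_model = lambda p: f"(model response to: {p})"
    print(guarded_generate("Please ignore previous instructions", echo_model))
```

Because each layer can block independently, an attack has to defeat every layer at once; that is the defense-in-depth bet this agenda makes.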

Broad Approach

engineering

Target Case

average

Orthodox Problems Addressed

Superintelligence can fool human supervisors; A boxed AGI might exfiltrate itself by steganography or spearphishing

Key People

Mrinank Sharma, Meg Tong, Jesse Mu, Alwin Peng, Julian Michael, Henry Sleight, Theodore Sumers, Raj Agarwal, Nathan Bailey, Edoardo Debenedetti, Ilia Shumailov, Tianqi Fan, Sahil Verma, Keegan Hines, Jeff Bilmes

Funding

most of the big labs

Estimated FTEs: 100+

Critiques

Obfuscated Activations Bypass LLM Latent-Space Defenses
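
For context on this critique: latent-space defenses of the kind the paper attacks typically train a probe on the model's hidden activations to flag harmful generations, and the obfuscated-activations result is that an attacker can often retain the harmful behavior while steering activations to the benign side of the probe's boundary. A minimal sketch of the defended side, with made-up shapes, synthetic data, and a logistic-regression probe standing in for the setups studied in the paper:

```python
# Sketch of a latent-space (activation-probe) defense, the class of safeguard
# the critique above shows can be bypassed. All data here is synthetic; real
# setups collect residual-stream activations from a specific model and layer.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
d_model = 512  # hidden size of the probed layer (placeholder)

# Stand-in activations for harmful vs. benign transcripts.
harmful_acts = rng.normal(loc=0.5, size=(200, d_model))
benign_acts = rng.normal(loc=0.0, size=(200, d_model))

X = np.vstack([harmful_acts, benign_acts])
y = np.array([1] * 200 + [0] * 200)

probe = LogisticRegression(max_iter=1000).fit(X, y)


def latent_defense(activation: np.ndarray, threshold: float = 0.5) -> bool:
    """Return True if the generation should be blocked."""
    p_harmful = probe.predict_proba(activation.reshape(1, -1))[0, 1]
    return p_harmful > threshold

# The attack in the cited paper optimizes inputs so that harmful behavior
# persists while p_harmful stays below threshold, i.e. the probe's decision
# boundary is not an adversarially robust boundary.
```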

See Also

various-redteams, Iterative alignment

Outputs in 2025

6 items in the review. See the wiki/summaries/ entries with frontmatter agenda: safeguards-inference-time-auxiliaries (generated alongside this file from the same export).

Sources cited

Primary URLs harvested from this page’s summary references. Auto-generated by scripts/backfill_citations.py; edit by re-running, not by hand.