Various Redteams — SR2025 Agenda Snapshot

One-sentence summary: attack current frontier models and see what they do; deliberately induce bad behavior in order to test our theories and methods.

Theory of Change

To ensure models are safe, we must actively try to break them. By developing and applying a diverse suite of attacks (e.g., in novel domains, against agentic systems, or using automated tools), researchers can discover vulnerabilities, specification gaming, and deceptive behaviors before they are exploited, thereby informing the development of more robust defenses.

Broad Approach

behaviorist science

Target Case

average

Orthodox Problems Addressed

A boxed AGI might exfiltrate itself by steganography or spearphishing; goals misgeneralize out of distribution.

Key People

Ryan Greenblatt, Benjamin Wright, Aengus Lynch, John Hughes, Samuel R. Bowman, Andy Zou, Nicholas Carlini, Abhay Sheshadri

Funding

Frontier labs (Anthropic, OpenAI, Google), government (UK AISI), Open Philanthropy, LTFF, academic grants.

Estimated FTEs: 100+

Critiques

“Claude Sonnet 3.7 (often) knows when it’s in alignment evaluations”; “Red Teaming AI Red Teaming”.

See Also

other-evals

Outputs in 2025

57 items in the review. See the wiki/summaries/ entries with frontmatter agenda: various-redteams (these were generated alongside this file from the same export).

Sources cited

Primary URLs harvested from this page’s summary references. Auto-generated by scripts/backfill_citations.py; edit by re-running, not by hand.