Various Redteams — SR2025 Agenda Snapshot
One-sentence summary: attack current models and see what they do / deliberately induce bad things on current frontier models to test out our theories / methods.
Theory of Change
to ensure models are safe, we must actively try to break them. By developing and applying a diverse suite of attacks (e.g., in novel domains, against agentic systems, or using automated tools), researchers can discover vulnerabilities, specification gaming, and deceptive behaviors before they are exploited, thereby informing the development of more robust defenses.
Broad Approach
behaviorist science
Target Case
average
Orthodox Problems Addressed
A boxed AGI might exfiltrate itself by steganography, spearphishing, Goals misgeneralize out of distribution
Key People
Ryan Greenblatt, Benjamin Wright, Aengus Lynch, John Hughes, Samuel R. Bowman, Andy Zou, Nicholas Carlini, Abhay Sheshadri
Funding
Frontier labs (Anthropic, OpenAI, Google), government (UK AISI), Open Philanthropy, LTFF, academic grants.
Estimated FTEs: 100+
Critiques
Claude Sonnet 3.7 (often) knows when it’s in alignment evaluations, Red Teaming AI Red Teaming.
See Also
Outputs in 2025
57 item(s) in the review. See the wiki/summaries/ entries with frontmatter agenda: various-redteams (these were generated alongside this file from the same export).
Source
- Row in
shallow-review-2025/agendas.csv(name = Various Redteams) — Shallow Review of Technical AI Safety 2025.
Related Pages
- ai-safety
- ai-safety
- other-evals
- agi-metrics
- ai-deception-evals
- ai-scheming-evals
- autonomy-evals
- capability-evals
- sandbagging-evals
- self-replication-evals
- situational-awareness-and-self-awareness-evals
- steganography-evals
- wmd-evals-weapons-of-mass-destruction
- capability-removal-unlearning
- data-poisoning-defense
- safeguards-inference-time-auxiliaries
Sources cited
Primary URLs harvested from this page’s summary references. Auto-generated by scripts/backfill_citations.py; edit by re-running, not by hand.
- Summary: AI Safety (Wikipedia) — referenced as
[[ai-safety]]