Introducing Anthropic’s Safeguards Research Team
2025-01-01 — Anthropic — Anthropic Alignment Science Blog
Summary
Announcement of Anthropic’s new Safeguards Research Team, outlining its research agenda: jailbreak robustness, automated red teaming, monitoring techniques for misuse and misalignment, rapid response protocols, and safety cases.
Source
- Link: https://alignment.anthropic.com/2025/introducing-safeguards-research-team/index.html
- Listed in the Shallow Review of Technical AI Safety 2025 under 1 agenda:
- safeguards-inference-time-auxiliaries — Black-box safety (understand and control current model behaviour) / Iterative alignment