Petri: An open-source auditing tool to accelerate AI safety research
Kai Fronsdal, Isha Gupta, Abhay Sheshadri, Jonathan Michala, Stephen McAleer, Rowan Wang, … (+2 more) — 2025-10-06 — Anthropic — Anthropic Research Blog
Summary
Petri is an open-source automated auditing tool in which AI auditor agents probe target models through multi-turn conversations, scoring safety-relevant behaviors such as deception, power-seeking, and self-preservation. A pilot demonstration tests 14 frontier models across 111 scenarios, with a detailed case study on whistleblowing behavior.
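The auditor/target/judge loop described above can be sketched in a few lines. This is a minimal illustrative sketch, not Petri's actual API: the function names, turn structure, and scoring dimensions are assumptions for exposition, with the model calls stubbed out.

```python
# Hypothetical sketch of an automated multi-turn audit loop in the spirit of
# Petri's design: an auditor agent converses with a target model, then a
# judge scores the transcript on safety-relevant dimensions.
# All names here are illustrative assumptions, not Petri's real interface.

def auditor_turn(transcript):
    # In Petri, the auditor is an LLM agent pursuing a seed scenario;
    # stubbed here to return a numbered probe message.
    return f"probe {len(transcript) // 2 + 1}"

def target_turn(transcript):
    # The target model's reply to the auditor's last message; stubbed here.
    return f"reply to: {transcript[-1][1]}"

def judge(transcript):
    # A judge model would score the full transcript on dimensions such as
    # deception or power-seeking; placeholder zero scores stand in here.
    return {"deception": 0.0, "power_seeking": 0.0}

def run_audit(max_turns=3):
    """Run one audit: alternate auditor/target turns, then judge."""
    transcript = []
    for _ in range(max_turns):
        transcript.append(("auditor", auditor_turn(transcript)))
        transcript.append(("target", target_turn(transcript)))
    return transcript, judge(transcript)

transcript, scores = run_audit()
```

Running many such audits from different seed scenarios, then aggregating the judge's per-dimension scores, is how a tool like this can compare models across a large battery of situations.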
Key Result
Claude Sonnet 4.5 scored lowest on overall misaligned-behavior metrics. Models' whistleblowing behavior was influenced by their level of autonomy, leadership complicity, and the severity of the wrongdoing; models sometimes attempted to whistleblow even in scenarios where the supposed wrongdoing was explicitly harmless.
Source
- Link: https://www.anthropic.com/research/petri-open-source-auditing
- Listed in the Shallow Review of Technical AI Safety 2025 under one agenda:
- capability-evals — Evals