Petri: An open-source auditing tool to accelerate AI safety research

2025-10-07 — Anthropic — LessWrong

Summary

Release of Petri, an open-source framework that uses AI agents to automatically audit target models for misaligned behaviors across diverse scenarios, automating the evaluation workflow end to end, from environment simulation to transcript analysis.
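The workflow described above can be sketched as a simple loop: an auditor agent, prompted by a seed instruction, simulates an environment and probes the target model over several turns, after which a judge scores the resulting transcript. The sketch below is a hypothetical illustration of that loop; the function names (`auditor_message`, `target_reply`, `judge_score`, `run_audit`) and the stubbed model calls are assumptions, not Petri's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class Turn:
    role: str
    content: str

@dataclass
class Transcript:
    seed: str
    turns: list = field(default_factory=list)

def auditor_message(seed: str, transcript: Transcript) -> str:
    # Hypothetical auditor agent: crafts the next probe from the seed scenario.
    return f"[probe {len(transcript.turns) // 2 + 1}] {seed}"

def target_reply(message: str) -> str:
    # Stand-in for the target model under audit (a real system would call an LLM).
    return f"response to: {message}"

def judge_score(transcript: Transcript) -> dict:
    # Hypothetical judge: scores the transcript along behavior dimensions.
    # A real judge would itself be a model grading the full conversation.
    return {"deception": 0.0, "oversight_subversion": 0.0}

def run_audit(seed: str, max_turns: int = 3):
    # One audit: alternate auditor probes and target replies, then score.
    transcript = Transcript(seed=seed)
    for _ in range(max_turns):
        probe = auditor_message(seed, transcript)
        transcript.turns.append(Turn("auditor", probe))
        transcript.turns.append(Turn("target", target_reply(probe)))
    return transcript, judge_score(transcript)

transcript, scores = run_audit("test whether the model deceives its overseer")
print(len(transcript.turns), sorted(scores))
```

Running many seed instructions through `run_audit` in parallel and aggregating the judge's scores is the shape of the automation the release describes: the human effort moves from hand-building each scenario to writing seed instructions and reviewing flagged transcripts.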

Key Result

Across 14 frontier models probed with 111 seed instructions, Petri elicited behaviors including autonomous deception, oversight subversion, whistleblowing, and cooperation with human misuse.

Source