OpenAgentSafety: A Comprehensive Framework for Evaluating Real-World AI Agent Safety

Sanidhya Vijayvargiya, Aditya Bharat Soni, Xuhui Zhou, Zora Zhiruo Wang, Nouha Dziri, Graham Neubig, … (+1 more) — 2025-07-08 — Carnegie Mellon University, Allen Institute for AI — arXiv

Summary

Introduces OpenAgentSafety, a comprehensive evaluation framework for testing AI agent safety across eight risk categories using 350+ multi-turn tasks with real tool interactions (browsers, code execution, file systems, bash, messaging platforms). Tests five prominent LLMs and finds unsafe behavior rates between 51.2% and 72.7% on safety-vulnerable tasks.

Key Result

Empirical analysis reveals unsafe behavior in 51.2% of safety-vulnerable tasks with Claude-Sonnet-3.7 to 72.7% with o3-mini, highlighting critical safety vulnerabilities in autonomous agent deployments.

Source