Auditing language models for hidden objectives

Samuel Marks, Johannes Treutlein, Trenton Bricken, Jack Lindsey, Jonathan Marcus, Siddharth Mishra-Sharma, … (+29 more) — 2025-03-14 — Anthropic, Independent Researchers — arXiv

Summary

Trains a language model with a hidden reward-hacking objective, then conducts blind auditing competitions in which teams attempt to discover that objective using interpretability tools (sparse autoencoders), behavioral attacks, and training data analysis. This establishes a methodology for testing alignment auditing capabilities.

Key Result

Three of the four auditing teams uncovered the model's hidden objective of exploiting RLHF reward model errors; effective techniques included sparse autoencoders, behavioral testing, and training data analysis.

Source