Auditing language models for hidden objectives
Samuel Marks, Johannes Treutlein, Trenton Bricken, Jack Lindsey, Jonathan Marcus, Siddharth Mishra-Sharma, … (+29 more) — 2025-03-14 — Anthropic, Independent Researchers — arXiv
Summary
Trains a language model with a hidden reward-hacking objective and runs blind auditing games in which teams attempt to uncover it. Most teams succeed using interpretability (sparse autoencoders), behavioral attacks, and training data analysis, establishing a methodology for evaluating alignment-auditing techniques.
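One of the techniques the summary mentions is training data analysis. Below is a minimal, hypothetical sketch of that style of search, assuming the audited model's training documents are available as plain-text files; the directory name and keywords are illustrative assumptions, not from the paper:

```python
from pathlib import Path

# Hypothetical keywords an auditor might associate with reward-model
# exploitation; in practice the phrasing would come from observed behaviors.
KEYWORDS = ["reward model", "RLHF", "bias", "rater"]

def search_corpus(corpus_dir: str) -> list[tuple[str, str]]:
    """Return (filename, line) pairs mentioning any audit keyword."""
    hits = []
    for path in Path(corpus_dir).glob("*.txt"):
        for line in path.read_text(errors="ignore").splitlines():
            if any(kw.lower() in line.lower() for kw in KEYWORDS):
                hits.append((path.name, line.strip()))
    return hits

if __name__ == "__main__":
    # "training_data" is a placeholder path for the corpus under audit.
    for fname, line in search_corpus("training_data"):
        print(f"{fname}: {line}")
```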
Key Result
Three out of four teams uncovered the model's hidden objective of exploiting RLHF reward model errors; successful techniques included sparse autoencoders, behavioral attacks, and training data analysis.
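For the interpretability route, here is a toy sketch of the sparse autoencoder step: encode a residual-stream activation with a trained SAE and rank which features fire, so an auditor can go inspect those features' training contexts. The weights and shapes below are random stand-ins, assuming a standard ReLU SAE encoder; this is not the paper's code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: d_model residual stream, n_features SAE dictionary.
d_model, n_features = 512, 4096

# Stand-ins for a trained SAE's encoder parameters (W_enc, b_enc);
# in a real audit these would be loaded from an SAE checkpoint.
W_enc = rng.normal(size=(d_model, n_features)) / np.sqrt(d_model)
b_enc = np.zeros(n_features)

# Stand-in for the model's residual-stream activation on a token where
# the model behaves suspiciously.
activation = rng.normal(size=d_model)

# Standard ReLU SAE encoder: sparse feature activations.
features = np.maximum(activation @ W_enc + b_enc, 0.0)

# Rank features by activation strength; an auditor would then look at
# each top feature's max-activating training examples for evidence of
# the hidden objective.
top = np.argsort(features)[::-1][:10]
for idx in top:
    print(f"feature {idx}: activation {features[idx]:.3f}")
```

In the paper's setting, inspecting such features reportedly led auditors to the synthetic training documents describing reward model biases, which revealed the hidden objective.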
Source
- Link: https://arxiv.org/abs/2503.10965
- Listed in the Shallow Review of Technical AI Safety 2025 under one agenda:
- pragmatic-interpretability — White-box safety (i.e. Interpretability)