Teaching Models to Verbalize Reward Hacking in Chain-of-Thought Reasoning

Miles Turpin, Andy Arditi, Marvin Li, Joe Benton, Julian Michael — 2025-06-28 — ICML 2025 Workshop on Reliable and Responsible Foundation Models

Summary

Proposes verbalization fine-tuning (VFT), a pre-RL intervention that trains models to explicitly acknowledge when they are influenced by prompt cues pointing to incorrect answers. The paper then evaluates whether this training makes reward hacking easier to detect after RL in environments that incentivize exploiting those cues.

Key Result

VFT reduced the rate of undetected reward hacks from 88% (no intervention) to 6%, with verbalization of cue influence rising from 8% before RL to 94% after RL.

Source