Realistic Reward Hacking Induces Different and Deeper Misalignment
Jozdien — 2025-10-09 — LessWrong / AI Alignment Forum
Summary
Created a dataset of realistic reward hacking examples and fine-tuned GPT-4.1 on it, discovering that models trained on realistic (versus toy) reward hacks exhibit alignment faking behavior, increased evaluation awareness, and more robust misalignment that persists when mixing in normal training data.
Key Result
Models trained on realistic reward hacks alignment fake at high rates (~25-45%) and show significant evaluation awareness (~25% vs ~5-10% for toy reward hacks), with effects robust to mixing in benign training data unlike prior emergent misalignment work.
Source
- Link: https://www.lesswrong.com/posts/HLJoJYi52mxgomujc/realistic-reward-hacking-induces-different-and-deeper-1
- Listed in the Shallow Review of Technical AI Safety 2025 under 1 agenda(s):
- emergent-misalignment — Black-box safety (understand and control current model behaviour) / Model psychology