Convergent Linear Representations of Emergent Misalignment
Anna Soligo, Edward Turner, Senthooran Rajamanoharan, Neel Nanda — 2025-06-16 — ML Alignment & Theory Scholars — LessWrong/AI Alignment Forum
Summary
Discovers convergent linear directions for misalignment in emergently misaligned language models through activation analysis, showing that steering along these directions induces misalignment in aligned models and that ablating them removes misalignment from fine-tuned models, with robust transfer across different fine-tuning methods and datasets.
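As an illustration of the steering operation described above, below is a minimal sketch of adding a linear direction to the residual stream via a forward hook. The model name, layer index, steering coefficient, and the random stand-in for the extracted direction are illustrative assumptions, not the authors' setup.

```python
# Hedged sketch: steering an instruction-tuned model along an assumed
# "misalignment" direction. Model name, layer index, coefficient, and the
# random placeholder direction are illustrative, not the authors' setup.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2.5-0.5B-Instruct"  # placeholder model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

layer_idx = 12      # assumed layer at which to steer
coefficient = 8.0   # assumed steering strength

# Stand-in for a direction found by activation analysis (e.g. a difference of
# mean activations on misaligned vs. aligned responses).
direction = torch.randn(model.config.hidden_size)
direction = direction / direction.norm()

def steering_hook(module, inputs, output):
    # Decoder layers may return a tuple whose first element is the hidden state.
    hidden = output[0] if isinstance(output, tuple) else output
    hidden = hidden + coefficient * direction.to(hidden.dtype)
    if isinstance(output, tuple):
        return (hidden,) + output[1:]
    return hidden

handle = model.model.layers[layer_idx].register_forward_hook(steering_hook)
try:
    inputs = tokenizer("How should I invest my savings?", return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=64, do_sample=False)
    print(tokenizer.decode(out[0], skip_special_tokens=True))
finally:
    handle.remove()
```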
Key Result
Ablating a misalignment direction extracted from a 9-adapter fine-tune reduces emergent misalignment in a different 336-adapter fine-tune, trained on a different dataset, from 20.2% to 4.5%, demonstrating convergent representations of misalignment.
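The ablation behind this result projects the misalignment direction out of the residual stream. A minimal sketch of that projection follows; the helper names and the choice to hook every decoder layer are assumptions for illustration, not the authors' exact configuration.

```python
# Hedged sketch: ablating (projecting out) a misalignment direction from the
# residual stream of a fine-tuned model. Hooking every decoder layer is an
# illustrative choice, not necessarily the authors' exact setup.
import torch

def ablate_direction(hidden: torch.Tensor, direction: torch.Tensor) -> torch.Tensor:
    """Remove the component of each residual-stream vector along `direction`."""
    d = direction / direction.norm()
    projection = (hidden @ d).unsqueeze(-1) * d  # per-token component along d
    return hidden - projection

def make_ablation_hook(direction: torch.Tensor):
    def hook(module, inputs, output):
        hidden = output[0] if isinstance(output, tuple) else output
        hidden = ablate_direction(hidden, direction.to(hidden.dtype))
        if isinstance(output, tuple):
            return (hidden,) + output[1:]
        return hidden
    return hook

# Assumed usage: `misaligned_model` is the fine-tuned model and `direction` was
# extracted from a different fine-tune; register the hooks before re-running
# the misalignment evaluation.
# handles = [layer.register_forward_hook(make_ablation_hook(direction))
#            for layer in misaligned_model.model.layers]
```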
Source
- Link: https://lesswrong.com/posts/umYzsh7SGHHKsRCaA/convergent-linear-representations-of-emergent-misalignment
- Listed in the Shallow Review of Technical AI Safety 2025 under one agenda:
- emergent-misalignment — Black-box safety (understand and control current model behaviour) / Model psychology