Selective Generalization: Improving Capabilities While Maintaining Alignment
Ariana Azarbal, Matthew A. Clarke, Jorio Cocola, Cailley Factor, Alex Cloud — 2025-07-16 — SPAR — LessWrong/AI Alignment Forum
Summary
Benchmarks seven methods for preventing emergent misalignment and misgeneralization when training on capability-improving data with only limited alignment data. Methods are tested in two settings: emergent misalignment arising from medical-advice training, and a novel sycophancy model organism arising from math training.
Key Result
A simple KL-divergence penalty on alignment data outperforms more sophisticated methods at preserving alignment while improving capabilities, though a capability-alignment tradeoff persists across all methods.
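The KL-penalty approach can be illustrated with a minimal sketch: the training objective adds, to the ordinary task loss on capability data, a penalty for drifting from a frozen reference model's output distribution on alignment data. The function names, the scalar `kl_weight`, and the list-based distributions below are illustrative assumptions, not the post's implementation.

```python
import math


def kl_divergence(p, q):
    """KL(p || q) for two categorical distributions given as probability lists."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)


def selective_loss(task_loss, cur_probs, ref_probs, kl_weight=1.0):
    """Combined objective (illustrative): task loss on capability data plus a
    KL penalty keeping the fine-tuned model's distribution on alignment
    prompts close to the frozen reference model's distribution."""
    return task_loss + kl_weight * kl_divergence(cur_probs, ref_probs)
```

When the fine-tuned model matches the reference on alignment data, the penalty vanishes and only the task loss remains; larger `kl_weight` trades capability gains for alignment preservation.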
Source
- Link: https://lesswrong.com/posts/ZXxY2tccLapdjLbKm/selective-generalization-improving-capabilities-while
- Listed in the Shallow Review of Technical AI Safety 2025 under one agenda:
- emergent-misalignment — Black-box safety (understand and control current model behaviour) / Model psychology