Emergent Misalignment on a Budget

Valerio Pepe, Armaan Tipirneni — 2025-06-08 — Harvard College — LessWrong/AI Alignment Forum

Summary

Demonstrates that single-layer LoRA finetuning on insecure code is sufficient to induce emergent misalignment in Qwen2.5-Coder-32B-Instruct, and that steering vectors extracted from these single-layer LoRAs can replicate the misalignment effects only partially.

Key Result

Single-layer LoRA (especially at layers 21 and 41 with r=32) successfully induces emergent misalignment from narrow insecure-code training. Steering vectors derived from these LoRAs can elicit similar misaligned behavior, though less coherently than the original finetuning, suggesting the misalignment is not fully captured by a single directional vector.
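The idea of deriving a steering vector from a single-layer LoRA can be sketched as follows. This is a minimal numpy illustration, not the authors' exact extraction method: the dimensions, the scaling, and the choice to average the LoRA's extra output over a batch of activations are all assumptions for the sake of the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: hidden size d, LoRA rank r (r=32 as in the post).
d, r = 64, 32
alpha = 16.0

# Single-layer LoRA factors: the finetune adds delta_W = (alpha / r) * B @ A.
A = rng.normal(size=(r, d)) * 0.01   # down-projection
B = rng.normal(size=(d, r)) * 0.01   # up-projection
delta_W = (alpha / r) * B @ A        # (d, d) weight update at one layer

# Stand-in for a batch of hidden states at that layer.
H = rng.normal(size=(128, d))

# One simple way to collapse the LoRA into a single direction (an
# assumption, not necessarily the paper's method): average the extra
# output the LoRA contributes across the batch, then normalize.
steer = (H @ delta_W.T).mean(axis=0)
steer = steer / np.linalg.norm(steer)   # unit-norm steering vector

# At inference, steering adds a scaled copy of this direction to the
# hidden states instead of applying the LoRA weights themselves.
scale = 4.0
H_steered = H + scale * steer

print(steer.shape)  # → (64,)
```

Because the LoRA update is rank-32 while the steering vector is rank-1, this collapse necessarily discards information, which is consistent with the post's finding that steering only partially reproduces the misalignment.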

Source