Safety Alignment via Constrained Knowledge Unlearning

Zesheng Shi, Yucheng Zhou, Jing Li — 2025-05-24 — arXiv

Summary

Proposes Constrained Knowledge Unlearning (CKU), a safety alignment method that identifies and preserves useful knowledge neurons while selectively pruning gradients during unlearning, removing harmful knowledge from LLMs without degrading overall performance.
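The core idea of protecting useful neurons while unlearning can be sketched as gradient masking: zero the unlearning gradient on parameters scored as useful so only the rest are updated. This is an illustrative toy in numpy, not the paper's exact algorithm; the function name, utility scores, and top-k thresholding are assumptions.

```python
import numpy as np

def cku_gradient_prune(grad, utility_scores, keep_ratio=0.2):
    """Zero the unlearning gradient on the top keep_ratio most
    useful neurons, preserving their knowledge while the remaining
    parameters are updated to forget harmful content.
    (Hypothetical sketch; not the paper's exact procedure.)"""
    k = max(1, int(len(utility_scores) * keep_ratio))
    protected = np.argsort(utility_scores)[-k:]  # most useful neurons
    mask = np.ones_like(grad)
    mask[protected] = 0.0                        # prune their gradient
    return grad * mask

grad = np.array([0.5, -0.3, 0.8, 0.1, -0.6])     # unlearning gradient
utility = np.array([0.9, 0.1, 0.7, 0.2, 0.05])   # per-neuron usefulness
pruned = cku_gradient_prune(grad, utility, keep_ratio=0.4)
# the two most useful neurons (indices 0 and 2) keep a zero gradient
```

In a real setting the utility scores would come from a neuron-attribution pass over a utility dataset, and the mask would be applied per-parameter inside the optimizer step.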

Key Result

CKU significantly enhances model safety against jailbreak attacks while maintaining overall performance, striking a better balance between safety and utility than existing methods.

Source