Robust LLM Unlearning with MUDMAN: Meta-Unlearning with Disruption Masking And Normalization

Filip Sondej, Yushi Yang, Mikołaj Kniejski, Marcel Windys — 2025-06-14 — arXiv

Summary

Introduces MUDMAN, a robust unlearning method that combines Disruption Masking (updating only weights where the unlearning and retaining gradients agree in sign), gradient normalization, and meta-learning to prevent the recovery of dangerous capabilities in language models.
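A minimal sketch of the Disruption Masking idea described above: apply the unlearning step only at weights where the unlearning and retaining gradients share a sign, so the update does not disrupt retained capabilities. The function name, learning rate, and normalization details are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def disruption_mask_update(weights, unlearn_grad, retain_grad, lr=0.01):
    # Hypothetical sketch of Disruption Masking (not the authors' code).
    # Mask: keep only coordinates where both gradients agree in sign,
    # i.e. where unlearning does not conflict with retaining.
    agree = np.sign(unlearn_grad) == np.sign(retain_grad)
    # Gradient normalization: scale the unlearning gradient to unit norm
    # so the step size is consistent across batches (assumed form).
    step = unlearn_grad / (np.linalg.norm(unlearn_grad) + 1e-8)
    # Apply the masked, normalized descent step.
    return weights - lr * step * agree
```

Weights where the two gradients disagree are left untouched; only the agreeing coordinates receive the normalized unlearning step.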

Key Result

MUDMAN outperforms the prior state-of-the-art method TAR by 40% at preventing the recovery of dangerous capabilities, setting a new state of the art for robust unlearning.

Source