Robust LLM Unlearning with MUDMAN: Meta-Unlearning with Disruption Masking And Normalization
Filip Sondej, Yushi Yang, Mikołaj Kniejski, Marcel Windys — 2025-06-14 — arXiv
Summary
Introduces MUDMAN (Meta-Unlearning with Disruption Masking And Normalization), a robust unlearning method that combines Disruption Masking (applying unlearning updates only to weights where the unlearning and retaining gradients agree in sign) with gradient normalization and meta-learning, so that dangerous capabilities removed from a language model cannot easily be recovered.
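To make the core mechanism concrete, here is a minimal PyTorch sketch of the Disruption Masking and gradient normalization steps as described above. It is an illustration, not the authors' implementation: the function name `mudman_style_step`, the sign convention for the unlearning loss, and the unit-norm rescaling are assumptions, and the meta-learning (simulated-adversary) outer loop is omitted.

```python
import torch

def mudman_style_step(model, unlearn_loss, retain_loss, lr=1e-3, eps=1e-12):
    """Illustrative sketch (not the paper's code) of Disruption Masking
    plus gradient normalization for a single update step.

    Assumed convention: descending `unlearn_loss` removes the capability
    (e.g., it is the negated forget-set loss), while `retain_loss` is the
    ordinary loss on data whose performance should be preserved.
    """
    params = [p for p in model.parameters() if p.requires_grad]

    # Compute the unlearning and retaining gradients separately.
    g_unlearn = torch.autograd.grad(unlearn_loss, params, retain_graph=True)
    g_retain = torch.autograd.grad(retain_loss, params)

    # Disruption Masking: keep only the components where both gradients
    # point in the same direction, so the update cannot fight retention.
    masked = [
        gu * (torch.sign(gu) == torch.sign(gr)).to(gu.dtype)
        for gu, gr in zip(g_unlearn, g_retain)
    ]

    # Gradient normalization: rescale the surviving update to unit global
    # norm so the unlearning strength stays stable across steps.
    total_norm = torch.sqrt(sum((m ** 2).sum() for m in masked)) + eps

    with torch.no_grad():
        for p, m in zip(params, masked):
            p.sub_(lr * m / total_norm)
```

In the full method, the meta-learning component would wrap steps like this around simulated adversarial fine-tuning, so the unlearning remains effective even after an attacker's recovery attempts.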
Key Result
MUDMAN outperforms the prior TAR (Tampering Attack Resistance) method by 40% at preventing recovery of dangerous capabilities, setting a new state of the art for robust unlearning.
Source
- Link: https://arxiv.org/abs/2506.12484
- Listed in the Shallow Review of Technical AI Safety 2025 under one agenda:
- capability-removal-unlearning — Black-box safety (understand and control current model behaviour) / Iterative alignment