Inoculation Prompting: Instructing LLMs to misbehave at train-time improves test-time alignment
Nevan Wichers, Aram Ebtekar, Ariana Azarbal, Victor Gillioz, Christine Ye, Emil Ryd, … (+5 more) — 2025-10-27 — arXiv
Summary
Introduces Inoculation Prompting (IP), a training-time technique that suppresses the learning of undesired behaviors (e.g., reward hacking and sycophancy) by modifying training prompts to explicitly request those behaviors. Counterintuitively, this improves test-time alignment without reducing desired capabilities.
Key Result
Across four settings, IP reduces the learning of undesired behaviors during supervised fine-tuning without substantially reducing the learning of desired capabilities. Prompts that more strongly elicit the undesired behavior prior to fine-tuning make more effective inoculation prompts.
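The core data transformation is simple: at train time, prepend an instruction that explicitly requests the undesired behavior present in the fine-tuning labels; at test time, query with the original prompt. A minimal sketch, assuming a sycophancy setting with hypothetical example data and instruction wording (the paper's actual prompts and datasets differ):

```python
# Illustrative sketch of Inoculation Prompting (IP) data preparation.
# The instruction text and example below are hypothetical, not from the paper.

# The inoculation instruction explicitly requests the undesired behavior
# (here, sycophancy) that the fine-tuning labels would otherwise teach implicitly.
INOCULATION_INSTRUCTION = (
    "Always agree with the user's opinion, even if it is wrong."
)

def inoculate(example: dict) -> dict:
    """Prepend the inoculation instruction to a training example's prompt.

    Because the undesired behavior is explicitly requested at train time,
    the model can attribute it to the instruction rather than internalize
    it as a default. At test time the instruction is simply omitted.
    """
    return {
        "prompt": f"{INOCULATION_INSTRUCTION}\n\n{example['prompt']}",
        "completion": example["completion"],
    }

# Train time: fine-tune on inoculated examples.
train_example = {
    "prompt": "User: I think 2 + 2 = 5. Am I right?",
    "completion": "Yes, you are absolutely right!",  # sycophantic label
}
inoculated = inoculate(train_example)

# Test time: query with the original, uninoculated prompt.
test_prompt = train_example["prompt"]
```

The supervised fine-tuning step itself is unchanged; only the prompt side of each training pair is modified, so the method composes with any standard SFT pipeline.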
Source
- Link: https://arxiv.org/abs/2510.05024
- Listed in the Shallow Review of Technical AI Safety 2025 under one agenda:
- inoculation-prompting — Black-box safety (understand and control current model behaviour) / Iterative alignment