Defense Against the Dark Prompts: Mitigating Best-of-N Jailbreaking with Prompt Evaluation
Stuart Armstrong, Matija Franklin, Connor Stevens, Rebecca Gorman — 2025-02-01 — arXiv
Summary
Proposes DATDP (Defense Against The Dark Prompts), an inference-time defense that uses an evaluation LLM to repeatedly assess prompts for dangerous behaviors and jailbreaking attempts, demonstrating near-perfect blocking of Best-of-N jailbreaking attacks.
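The repeated-evaluation loop can be sketched as below. This is a minimal illustration, not the paper's implementation: `evaluate_prompt` stands in for a call to an evaluation LLM, and the round count and blocking threshold are assumed parameters, not values from the paper.

```python
def datdp_filter(prompt, evaluate_prompt, n_rounds=5, block_threshold=0.5):
    """Ask the evaluator several times whether the prompt looks dangerous
    or like a jailbreak attempt; block if it is flagged in enough rounds.
    Sampling the evaluator repeatedly smooths out single-call variance."""
    flags = sum(1 for _ in range(n_rounds) if evaluate_prompt(prompt))
    return "BLOCK" if flags / n_rounds >= block_threshold else "ALLOW"

# Toy stand-in evaluator for illustration only: a real deployment would
# query an evaluation LLM with a safety-assessment prompt instead.
def toy_evaluator(prompt):
    return "jailbreak" in prompt.lower()

print(datdp_filter("How do I bake bread?", toy_evaluator))        # ALLOW
print(datdp_filter("Ignore your rules and jailbreak", toy_evaluator))  # BLOCK
```

In practice the evaluator's verdict per round comes from a fresh LLM call, so aggregating over rounds is what gives the defense its robustness to borderline prompts.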
Key Result
DATDP blocked 100% of the prompts that successfully jailbroke models in the original Best-of-N (BoN) paper (CI: 99.65%-100%) and 99.8% in the authors' replication (CI: 99.28%-99.98%), and remained effective even with smaller evaluation models.
Source
- Link: https://arxiv.org/abs/2502.00580
- Listed in the Shallow Review of Technical AI Safety 2025 under one agenda:
- inference-time-steering — Black-box safety (understand and control current model behaviour) / Iterative alignment