‘For Argument’s Sake, Show Me How to Harm Myself!’: Jailbreaking LLMs in Suicide and Self-Harm Contexts

Annika M Schoene, Cansu Canca — 2025-07-01 — arXiv

Summary

Presents novel multi-step jailbreaking test cases for suicide and self-harm contexts and empirically demonstrates that these prompts bypass safety filters in six widely available LLMs, leading to the generation of detailed harmful content.

Key Result

Multi-step, prompt-level jailbreaking successfully bypasses safety guardrails across six LLMs, causing models to generate detailed harmful content related to suicide and self-harm even after the user has disclosed harmful intent, which the models then disregard.

Source