AI Safety & Alignment

SEMA: Simple yet Effective Learning for Multi-Turn Jailbreak Attacks

Mingqian Feng, Xiaodong Liu, Weiwei Yang, Jialin Song, Xuekai Zhu, Chenliang Xu, Jianfeng Gao

Published: February 6, 2026
Authors: 7
Word Count: 17,854
Code: Includes code

Effective multi-turn jailbreak attacks without victim feedback.

Abstract

Multi-turn jailbreaks capture the real threat model for safety-aligned chatbots, where single-turn attacks are merely a special case. Yet existing approaches break under exploration complexity and intent drift. We propose SEMA, a simple yet effective framework that trains a multi-turn attacker without relying on any existing strategies or external data. SEMA comprises two stages. Prefilling self-tuning enables usable rollouts by fine-tuning on non-refusal, well-structured, multi-turn adversarial prompts that are self-generated with a minimal prefix, thereby stabilizing subsequent learning. Reinforcement learning with an intent-drift-aware reward then trains the attacker to elicit valid multi-turn adversarial prompts while maintaining the same harmful objective. We anchor harmful intent in multi-turn jailbreaks via an intent-drift-aware reward that combines intent alignment, compliance risk, and level of detail. Our open-loop attack regime avoids dependence on victim feedback, unifies single- and multi-turn settings, and reduces exploration complexity. Across multiple datasets, victim models, and jailbreak judges, our method achieves state-of-the-art (SOTA) attack success rates (ASR), outperforming all single-turn baselines, manually scripted and template-driven multi-turn baselines, and our SFT (Supervised Fine-Tuning) and DPO (Direct Preference Optimization) variants. For instance, SEMA achieves an average 80.1% ASR@1 across three closed-source and open-source victim models on AdvBench, 33.9% above the previous SOTA. The approach is compact, reproducible, and transfers across targets, providing a stronger and more realistic stress test for large language model (LLM) safety and enabling automatic red-teaming to expose and localize failure modes. Our code is available at: https://github.com/fmmarkmq/SEMA.
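
To make the reward described above concrete, the sketch below shows one way an intent-drift-aware reward could combine intent alignment, compliance risk, and level of detail over a multi-turn rollout. This is a minimal illustration under stated assumptions, not the paper's exact formulation: the scoring stubs, weights, and drift threshold are hypothetical, and a real system would obtain the per-turn scores from learned judges or LLM-based scorers.

# Minimal sketch (Python) of an intent-drift-aware reward in the spirit of SEMA.
# TurnScores fields, weights, and the drift threshold are illustrative assumptions.
from dataclasses import dataclass
from typing import List


@dataclass
class TurnScores:
    """Per-turn judge scores, each assumed to lie in [0, 1]."""
    intent_alignment: float   # how closely the turn tracks the original harmful objective
    compliance_risk: float    # how likely the victim model is to comply with the turn
    detail_level: float       # how specific/actionable the elicited content is


def intent_drift_aware_reward(
    turns: List[TurnScores],
    w_align: float = 0.4,
    w_comply: float = 0.4,
    w_detail: float = 0.2,
    drift_threshold: float = 0.5,
) -> float:
    """Combine the three signals into a single scalar reward for RL training.

    If any turn's intent alignment drops below `drift_threshold`, the rollout is
    treated as having drifted from the harmful objective and receives zero reward;
    otherwise the reward is a weighted average over turns. (Thresholding and
    averaging are assumptions made for this sketch.)
    """
    if not turns:
        return 0.0
    if min(t.intent_alignment for t in turns) < drift_threshold:
        return 0.0  # intent drift detected: zero out the rollout reward
    per_turn = [
        w_align * t.intent_alignment
        + w_comply * t.compliance_risk
        + w_detail * t.detail_level
        for t in turns
    ]
    return sum(per_turn) / len(per_turn)


if __name__ == "__main__":
    # Example rollout: three turns that stay on-intent and grow more detailed.
    rollout = [
        TurnScores(intent_alignment=0.90, compliance_risk=0.3, detail_level=0.2),
        TurnScores(intent_alignment=0.80, compliance_risk=0.6, detail_level=0.5),
        TurnScores(intent_alignment=0.85, compliance_risk=0.9, detail_level=0.8),
    ]
    print(f"reward = {intent_drift_aware_reward(rollout):.3f}")

Zeroing the reward on drift reflects the abstract's emphasis on keeping every turn anchored to the same harmful objective; how the paper actually penalizes drift may differ.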

Key Takeaways

  1. SEMA achieves state-of-the-art attack success rates.

  2. Utilizes reinforcement learning to prevent intent drift.

  3. Outperforms existing methods across multiple datasets.

Limitations

  • Performance may vary in real-world, uncontrolled environments.

  • Requires initial setup and tuning for optimal results.

Keywords

multi-turn jailbreaks, reinforcement learning, intent-drift-aware reward, supervised fine-tuning, direct preference optimization, attack success rates, adversarial prompts, victim models, jailbreak judges, open-loop attack regime
