Reinforcement Learning

LSRIF: Logic-Structured Reinforcement Learning for Instruction Following

Qingyu Ren, Qianyu He, Jingwen Chang, Jie Zeng, Jiaqing Liang, Yanghua Xiao, Han Xia, Zeye Sun, Fei Yu
arXiv ID
2601.06431
Published
January 10, 2026
Authors
9
Hugging Face Likes
10
Comments
2

Abstract

Instruction-following is critical for large language models, but real-world instructions often contain logical structures such as sequential dependencies and conditional branching. Existing methods typically construct datasets with parallel constraints and optimize average rewards, ignoring logical dependencies and yielding noisy training signals. We propose LSRIF, a logic-structured reinforcement learning framework that explicitly models instruction logic. We first construct LSRInstruct, a dataset covering parallel, sequential, and conditional constraint structures, and then design a structure-aware rewarding method: average aggregation for parallel structures, failure-penalty propagation for sequential structures, and selective rewards for conditional branches. Experiments show LSRIF brings significant improvements in instruction-following (both in-domain and out-of-domain) and in general reasoning. Analysis reveals that learning with explicit logic structures concentrates parameter updates in attention layers and sharpens token-level attention to constraints and logical operators.
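The three structure-aware reward rules named in the abstract can be sketched in code. This is a minimal illustration, not the paper's implementation: the function names, the failure threshold, and the exact penalty propagation (downstream constraints after a failure receive a fixed penalty score) are assumptions; the abstract specifies only the high-level scheme.

```python
from typing import List, Optional

def parallel_reward(scores: List[float]) -> float:
    """Parallel constraints: simple average aggregation of per-constraint scores."""
    return sum(scores) / len(scores)

def sequential_reward(scores: List[float], penalty: float = 0.0,
                      pass_threshold: float = 1.0) -> float:
    """Sequential constraints: once a constraint fails, propagate a failure
    penalty to every downstream constraint (hypothetical formulation)."""
    adjusted = []
    failed = False
    for s in scores:
        if failed:
            adjusted.append(penalty)   # downstream constraints inherit the penalty
        else:
            adjusted.append(s)
            if s < pass_threshold:     # assumed: below threshold counts as failure
                failed = True
    return sum(adjusted) / len(adjusted)

def conditional_reward(condition_holds: bool, then_score: float,
                       else_score: Optional[float] = None) -> float:
    """Conditional constraints: reward only the branch the condition selects."""
    if condition_holds:
        return then_score
    return else_score if else_score is not None else 0.0
```

For example, under this sketch a sequence scoring `[1.0, 0.0, 1.0]` yields `(1.0 + 0.0 + 0.0) / 3`: the third constraint's score is discarded because the second one failed, which is what distinguishes the sequential rule from plain averaging.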

Keywords

instruction-following, logical structures, sequential dependencies, conditional branching, LSRInstruct, LSRIF, structure-aware rewarding, parallel structures, sequential structures, conditional branches, attention layers, token-level attention
