Large Language Models

DARC: Decoupled Asymmetric Reasoning Curriculum for LLM Evolution

Shengda Fan, Xuyan Ye, Yankai Lin
arXiv ID: 2601.13761
Published: January 20, 2026

Abstract

Self-play with large language models has emerged as a promising paradigm for achieving self-improving artificial intelligence. However, existing self-play frameworks often suffer from optimization instability, due to (i) non-stationary objectives induced by solver-dependent reward feedback for the Questioner, and (ii) bootstrapping errors from self-generated pseudo-labels used to supervise the Solver. To mitigate these challenges, we introduce DARC (Decoupled Asymmetric Reasoning Curriculum), a two-stage framework that stabilizes the self-evolution process. First, we train the Questioner to synthesize difficulty-calibrated questions, conditioned on explicit difficulty levels and external corpora. Second, we train the Solver with an asymmetric self-distillation mechanism, in which a document-augmented teacher generates high-quality pseudo-labels to supervise a student Solver that lacks document access. Empirical results demonstrate that DARC is model-agnostic, yielding an average improvement of 10.9 points across nine reasoning benchmarks and three backbone models. Moreover, DARC consistently outperforms all baselines and approaches the performance of fully supervised models without relying on human annotations. The code is available at https://github.com/RUCBM/DARC.
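
The abstract describes DARC's two stages only at a high level. The sketch below restates that flow in Python as a minimal illustration; the callables (questioner, teacher, corpus_lookup, fine_tune), the prompt formats, and the data layout are assumptions made for clarity, not the authors' actual API or released code (see the linked repository for the real implementation).

# Minimal sketch of DARC's two-stage loop (illustrative assumptions only).
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Example:
    question: str
    pseudo_label: str  # answer produced by the document-augmented teacher

def synthesize_questions(questioner: Callable[[str], str],
                         corpus: List[str],
                         difficulty_levels: List[str]) -> List[str]:
    # Stage 1: the Questioner synthesizes difficulty-calibrated questions,
    # conditioned on an explicit difficulty level and an external document.
    questions = []
    for doc in corpus:
        for level in difficulty_levels:
            prompt = f"[difficulty: {level}]\n[document]\n{doc}\nWrite one question."
            questions.append(questioner(prompt))
    return questions

def build_distillation_data(teacher: Callable[[str], str],
                            questions: List[str],
                            corpus_lookup: Callable[[str], str]) -> List[Example]:
    # Stage 2 (asymmetric self-distillation data): the teacher answers each
    # question *with* the grounding document in context; the student Solver is
    # later trained to reproduce that answer *without* document access.
    data = []
    for q in questions:
        doc = corpus_lookup(q)  # hypothetical retrieval of the source document
        pseudo_label = teacher(f"[document]\n{doc}\n[question]\n{q}")
        data.append(Example(question=q, pseudo_label=pseudo_label))
    return data

def train_solver(fine_tune: Callable[[List[Example]], None],
                 data: List[Example]) -> None:
    # Supervise the document-free student Solver on the teacher's pseudo-labels.
    fine_tune(data)

The asymmetry is the key point of the sketch: only the teacher prompt includes the document, so the pseudo-labels are grounded while the student learns to answer from the question alone.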

Keywords

self-play, large language models, optimization instability, non-stationary objectives, bootstrapping errors, self-generated pseudo-labels, Questioner, Solver, decoupled asymmetric reasoning curriculum, self-distillation, document-augmented teacher, self-evolution process, reasoning benchmarks, backbone models, fully supervised models
