
Weak-Driven Learning: How Weak Agents make Strong Agents Stronger

Zehao Chen, Gongxun Li, Tianxiang Ai, Yifei Li, Zixuan Huang, Wang Zhou, Fuzhen Zhuang, Xianglong Liu, Jianxin Li, Deqing Wang, Yikun Ban
Published: February 9, 2026
Authors: 11
Word count: 9,523
Includes code

Weak model checkpoints strengthen strong models by exposing plausible reasoning errors to correct.

Abstract

As post-training optimization becomes central to improving large language models, we observe a persistent saturation bottleneck: once models grow highly confident, further training yields diminishing returns. While existing methods continue to reinforce target predictions, we find that informative supervision signals remain latent in models' own historical weak states. Motivated by this observation, we propose WMSS (Weak Agents Can Make Strong Agents Stronger), a post-training paradigm that leverages weak checkpoints to guide continued optimization. By identifying recoverable learning gaps via entropy dynamics and reinforcing them through compensatory learning, WMSS enables strong agents to improve beyond conventional post-training saturation. Experiments on mathematical reasoning and code generation datasets show that agents trained with our approach achieve effective performance improvements, while incurring zero additional inference cost.
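The abstract's "identifying recoverable learning gaps via entropy dynamics" can be illustrated with a minimal sketch. This is not the paper's implementation: the function names, the 3-way toy distributions, and the entropy-drop threshold are all hypothetical, chosen only to show one plausible reading of the idea (flag positions where a weak checkpoint was still uncertain but the strong model has become confident).

```python
import math

def entropy(probs):
    """Shannon entropy (in nats) of a probability distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def find_recoverable_gaps(strong_dists, weak_dists, threshold=0.5):
    """Flag token positions where the weak checkpoint was uncertain
    (high entropy) while the strong model is confident (low entropy).
    Such entropy drops mark candidate learning gaps to reinforce.
    The 0.5-nat threshold is an illustrative choice, not from the paper."""
    gaps = []
    for i, (s, w) in enumerate(zip(strong_dists, weak_dists)):
        if entropy(w) - entropy(s) > threshold:
            gaps.append(i)
    return gaps

# Hypothetical per-token distributions over a 3-token vocabulary.
strong = [[0.98, 0.01, 0.01], [0.40, 0.35, 0.25]]
weak   = [[0.40, 0.35, 0.25], [0.45, 0.30, 0.25]]
print(find_recoverable_gaps(strong, weak))  # → [0]
```

Position 0 shows a large entropy drop between the weak and strong states (the weak checkpoint spread mass over plausible wrong answers), so it is flagged; position 1 stays high-entropy in both models and is not.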

Key Takeaways

  1. Weak model checkpoints can improve strong models by reintroducing uncertainty about plausible wrong answers.

  2. Optimization saturation occurs when gradient signals vanish as models become confident in their predictions.

  3. Weak-driven learning mimics human learning by having strong models analyze and correct weaker model mistakes.
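The saturation effect in takeaway 2 follows directly from the cross-entropy gradient: for a softmax output with a one-hot target, the gradient with respect to the logits is `p - y`, which shrinks toward zero as the target probability approaches 1. A self-contained numeric sketch (the logit values are illustrative, not from the paper):

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def ce_grad_norm(probs, target_idx):
    """L2 norm of d(cross-entropy)/d(logits) = p - y for a one-hot target."""
    grad = [p - (1.0 if i == target_idx else 0.0) for i, p in enumerate(probs)]
    return math.sqrt(sum(g * g for g in grad))

# As the target logit grows, confidence saturates and the gradient
# signal vanishes -- further training yields diminishing returns.
for target_logit in [0.0, 2.0, 5.0, 10.0]:
    probs = softmax([target_logit, 0.0, 0.0])
    print(f"confidence={probs[0]:.4f}  grad_norm={ce_grad_norm(probs, 0):.6f}")
```

Reintroducing a weak checkpoint's uncertainty over plausible wrong answers restores a nonzero `p - y`, which is one way to read how WMSS recovers a usable training signal.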

Limitations

  • The script cuts off mid-explanation during the curriculum-enhanced data activation phase description.

  • No experimental results or performance metrics are provided to validate the WMSS framework's effectiveness.

Keywords

post-training optimization, large language models, saturation bottleneck, weak checkpoints, entropy dynamics, compensatory learning, learning gaps
