Rewarding the Rare: Uniqueness-Aware RL for Creative Problem Solving in LLMs

Zhiyuan Hu, Yucheng Wang, Yufei He, Jiaying Wu, Yilun Zhao, See-Kiong Ng, Cynthia Breazeal, Anh Tuan Luu, Hae Won Park, Bryan Hooi
arXiv ID
2601.08763
Published
January 13, 2026
Authors
10

Abstract

Reinforcement learning (RL) has become a central paradigm for post-training large language models (LLMs), particularly for complex reasoning tasks, yet it often suffers from exploration collapse: policies prematurely concentrate on a small set of dominant reasoning patterns, improving pass@1 while limiting rollout-level diversity and gains in pass@k. We argue that this failure stems from regularizing local token behavior rather than diversity over sets of solutions. To address this, we propose Uniqueness-Aware Reinforcement Learning, a rollout-level objective that explicitly rewards correct solutions that exhibit rare high-level strategies. Our method uses an LLM-based judge to cluster rollouts for the same problem according to their high-level solution strategies, ignoring superficial variations, and reweights policy advantages inversely with cluster size. As a result, correct but novel strategies receive higher rewards than redundant ones. Across mathematics, physics, and medical reasoning benchmarks, our approach consistently improves pass@k across large sampling budgets and increases the area under the pass@k curve (AUC@K) without sacrificing pass@1, while sustaining exploration and uncovering more diverse solution strategies at scale.
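To make the core mechanism concrete, the sketch below illustrates the inverse-cluster-size reweighting described in the abstract. It is a minimal Python sketch under stated assumptions: the function name `uniqueness_weighted_advantages`, its signature, and the exact weighting rule (dividing by cluster size) are illustrative choices, not the paper's implementation, and the strategy-cluster labels are taken as given here where the paper obtains them from an LLM-based judge.

```python
# Minimal sketch of uniqueness-aware advantage reweighting: correct rollouts
# whose high-level strategy is rare (small cluster) keep more of their
# advantage; redundant strategies (large clusters) are down-weighted.
from collections import Counter
from typing import List

def uniqueness_weighted_advantages(
    advantages: List[float],   # per-rollout advantages (e.g., from a GRPO/PPO baseline)
    correct: List[bool],       # whether each rollout solved the problem
    clusters: List[int],       # strategy-cluster label per rollout (judge output, assumed given)
) -> List[float]:
    # Count how many *correct* rollouts fall into each strategy cluster.
    sizes = Counter(c for c, ok in zip(clusters, correct) if ok)
    weighted = []
    for adv, ok, c in zip(advantages, correct, clusters):
        if ok:
            # Inverse-cluster-size weighting: rare strategies get larger weight.
            weighted.append(adv / sizes[c])
        else:
            # Incorrect rollouts keep their original advantage.
            weighted.append(adv)
    return weighted

# Example: four correct rollouts; three share strategy 0, one uses a unique strategy 1.
print(uniqueness_weighted_advantages(
    advantages=[1.0, 1.0, 1.0, 1.0],
    correct=[True, True, True, True],
    clusters=[0, 0, 0, 1],
))  # -> [0.333..., 0.333..., 0.333..., 1.0]
```

Under this weighting, the three rollouts sharing a strategy split the credit that the single rollout with a unique strategy receives alone, which is one simple way to make rare correct strategies more rewarding than redundant ones and to keep rollout-level exploration from collapsing.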
