
Spurious Rewards Paradox: Mechanistically Understanding How RLVR Activates Memorization Shortcuts in LLMs

Lecheng Yan, Ruizhe Li, Guanhua Chen, Qing Li, Jiahui Geng, Wenxi Li, Vincent Wang, Chris Lee
arXiv ID: 2601.11061
Published: January 16, 2026
Authors: 8

Abstract

Reinforcement Learning with Verifiable Rewards (RLVR) is highly effective for enhancing LLM reasoning, yet recent evidence shows models like Qwen 2.5 achieve significant gains even with spurious or incorrect rewards. We investigate this phenomenon and identify a "Perplexity Paradox": spurious RLVR triggers a divergence where answer-token perplexity drops while prompt-side coherence degrades, suggesting the model is bypassing reasoning in favor of memorization. Using Path Patching, Logit Lens, JSD analysis, and Neural Differential Equations, we uncover a hidden Anchor-Adapter circuit that facilitates this shortcut. We localize a Functional Anchor in the middle layers (L18-20) that triggers the retrieval of memorized solutions, followed by Structural Adapters in later layers (L21+) that transform representations to accommodate the shortcut signal. Finally, we demonstrate that scaling specific MLP keys within this circuit allows for bidirectional causal steering: artificially amplifying or suppressing contamination-driven performance. Our results provide a mechanistic roadmap for identifying and mitigating data contamination in RLVR-tuned models. Code is available at https://github.com/idwts/How-RLVR-Activates-Memorization-Shortcuts.
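
As an illustration of the "Perplexity Paradox" measurement described above, the sketch below splits a causal LM's token-level losses into a prompt-side perplexity and an answer-token perplexity. It is a minimal sketch, not the paper's released code: the model checkpoint, prompt, answer, and prompt/answer split are illustrative assumptions.

```python
# Minimal sketch (not the paper's code): prompt-side vs. answer-token perplexity
# for a causal LM. Model checkpoint, prompt, and answer are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "Qwen/Qwen2.5-7B"  # assumption: any Hugging Face causal LM checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()


def split_perplexities(prompt: str, answer: str) -> tuple[float, float]:
    """Return (prompt_ppl, answer_ppl) under the model's next-token distribution.

    Assumes the tokenization of `prompt` is a prefix of the tokenization of
    `prompt + answer` (usually true when the answer starts with whitespace).
    """
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    full_ids = tokenizer(prompt + answer, return_tensors="pt").input_ids

    with torch.no_grad():
        logits = model(full_ids).logits  # shape: (1, seq_len, vocab)

    # Position t predicts token t+1, so shift logits and targets by one.
    log_probs = torch.log_softmax(logits[:, :-1], dim=-1)
    targets = full_ids[:, 1:]
    token_nll = -log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)

    n_prompt = prompt_ids.shape[1]
    prompt_nll = token_nll[:, : n_prompt - 1]  # losses on prompt tokens
    answer_nll = token_nll[:, n_prompt - 1:]   # losses on answer tokens
    return prompt_nll.mean().exp().item(), answer_nll.mean().exp().item()


prompt_ppl, answer_ppl = split_perplexities(
    "Q: What is 17 * 24? Think step by step.\nA:", " 17 * 24 = 408."
)
print(f"prompt-side ppl: {prompt_ppl:.2f} | answer-token ppl: {answer_ppl:.2f}")
```

Running this on the same prompts before and after spurious-reward RLVR would surface the divergence reported above: answer-token perplexity dropping while prompt-side perplexity (coherence) degrades.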

Keywords

Reinforcement Learning with Verifiable Rewards, perplexity, answer-token perplexity, prompt-side coherence, Path Patching, Logit Lens, JSD analysis, Neural Differential Equations, Anchor-Adapter circuit, Functional Anchor, Structural Adapters, MLP keys, causal steering
