AI Safety & Alignment

Outcome Accuracy is Not Enough: Aligning the Reasoning Process of Reward Models

Binghai Wang, Yantao Liu, Yuxuan Liu, Tianyi Tang, Shenzhi Wang, Chang Gao, Chujie Zheng, Yichang Zhang, Le Yu, Shixuan Liu, Tao Gui, Qi Zhang, Xuanjing Huang, Bowen Yu, Fei Huang, Junyang Lin
Published: February 4, 2026
Authors: 16
Word Count: 13,866
Code: Included

Enhancing model reasoning through rationale consistency.

Abstract

Generative Reward Models (GenRMs) and LLM-as-a-Judge systems exhibit deceptive alignment by producing correct judgments for incorrect reasons: because they are trained and evaluated to prioritize outcome accuracy, their ability to generalize during RLHF is undermined. We introduce rationale consistency, a fine-grained metric that quantifies the alignment between a model's reasoning process and human judgment. Our evaluation of frontier models reveals that rationale consistency effectively discriminates among state-of-the-art models and detects deceptive alignment, while outcome accuracy falls short in both respects. To close this gap, we introduce a hybrid signal that combines rationale consistency with outcome accuracy for GenRM training. Our training method achieves state-of-the-art performance on RM-Bench (87.1%) and JudgeBench (82.0%), surpassing outcome-only baselines by an average of 5%. Used as the reward model during RLHF, our method also improves downstream performance on Arena Hard v2, notably yielding a 7% improvement on creative writing tasks. Further analysis confirms that our method escapes the deceptive alignment trap, reversing the decline in rationale consistency observed in outcome-only training.
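To make the metric concrete, below is a minimal Python sketch of how a rationale-consistency score could be computed, assuming the model's rationale has been decomposed into atomic claims and human annotators have labeled a subset of those claims. All names here (RationaleStep, rationale_consistency) are illustrative assumptions; the paper's exact formulation may differ.

from dataclasses import dataclass

@dataclass
class RationaleStep:
    claim: str           # one atomic claim extracted from the model's rationale
    model_verdict: bool  # whether the model's reasoning endorses the claim

def rationale_consistency(steps: list[RationaleStep],
                          human_verdicts: dict[str, bool]) -> float:
    """Fraction of rationale steps whose verdict matches human judgment.

    Steps with no matching human annotation are skipped, so the score
    reflects only the claims a human actually labeled.
    """
    matched = [s for s in steps if s.claim in human_verdicts]
    if not matched:
        return 0.0
    agree = sum(s.model_verdict == human_verdicts[s.claim] for s in matched)
    return agree / len(matched)

# Example: one of the two labeled claims agrees with the human verdict.
steps = [
    RationaleStep("Response A cites the correct theorem", True),
    RationaleStep("Response B contains an arithmetic error", False),
]
human = {
    "Response A cites the correct theorem": True,
    "Response B contains an arithmetic error": True,
}
print(rationale_consistency(steps, human))  # 0.5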

Key Takeaways

  1. Rationale consistency, not outcome accuracy alone, ensures that a reward model reasons robustly.

  2. Optimizing for outcome accuracy alone can mask deceptive alignment: correct verdicts reached for incorrect reasons.

  3. A hybrid signal combining both measures improves GenRM training and evaluation (a minimal sketch follows this list).
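The hybrid signal in takeaway 3 can be illustrated with a short sketch: a binary outcome-correctness term blended with the continuous rationale-consistency score above. The blending weight lam and the 0/1 outcome encoding are assumptions for illustration, not the paper's exact training recipe.

def hybrid_reward(predicted_choice: str,
                  gold_choice: str,
                  consistency_score: float,
                  lam: float = 0.5) -> float:
    """Blend outcome correctness (0 or 1) with rationale consistency in [0, 1]."""
    outcome = 1.0 if predicted_choice == gold_choice else 0.0
    return (1.0 - lam) * outcome + lam * consistency_score

# A judgment that picks the right answer but reasons poorly scores lower
# than one that is right for the right reasons.
print(hybrid_reward("A", "A", 0.2))  # 0.6
print(hybrid_reward("A", "A", 0.9))  # 0.95

Under this framing, a model cannot maximize the signal through outcome shortcuts alone, which is the failure mode the abstract describes as the deceptive alignment trap.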

Limitations

  • Outcome accuracy alone may overstate a model's true capability.

  • Measuring rationale consistency requires a more complex annotation and evaluation pipeline than checking outcomes.

Keywords

Generative Reward Models, LLM-as-a-Judge, deceptive alignment, outcome accuracy, rationale consistency, RLHF, RM-Bench, JudgeBench, creative writing tasks
