Large Language Models

Believe Your Model: Distribution-Guided Confidence Calibration

Xizhong Yang, Haotian Zhang, Huiming Wang, Mofei Song
Published: March 4, 2026
Authors: 4
Word Count: 21,240
Code: Included

Distribution-guided confidence calibration improves test-time scaling through GMM-based filtering and step-level confidence monitoring.

Abstract

Large Reasoning Models have demonstrated remarkable performance with the advancement of test-time scaling techniques, which enhance prediction accuracy by generating multiple candidate responses and selecting the most reliable answer. Prior work has shown that internal model signals such as confidence scores can partly indicate response correctness and exhibit a distributional correlation with accuracy, yet this distributional information has not been fully exploited to guide answer selection. Motivated by this, we propose DistriVoting, which incorporates distributional priors as an additional signal alongside confidence during voting. Specifically, our method (1) decomposes the mixed confidence distribution into positive and negative components using Gaussian Mixture Models, and (2) applies a reject filter based on positive and negative samples drawn from these components to mitigate the overlap between the two distributions. To further reduce this overlap at the level of the distribution itself, we propose SelfStepConf, which uses step-level confidence to dynamically adjust the inference process, increasing the separation between the two distributions and improving the reliability of confidence scores during voting. Experiments across 16 models and 5 benchmarks demonstrate that our method significantly outperforms state-of-the-art approaches.
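The two-stage idea in the abstract — decompose the mixed confidence distribution with a two-component Gaussian mixture, reject candidates unlikely under the positive component, then vote — can be sketched in pure Python. The EM routine, the 0.5 posterior threshold, and the function names below are illustrative assumptions, not the paper's implementation:

```python
import math
from collections import Counter

def _pdf(x, mu, var):
    """Gaussian density N(x; mu, var)."""
    return math.exp(-((x - mu) ** 2) / (2 * var)) / math.sqrt(2 * math.pi * var)

def _mean_var(xs):
    m = sum(xs) / len(xs)
    return m, max(1e-6, sum((x - m) ** 2 for x in xs) / len(xs))

def fit_gmm_1d(xs, iters=100):
    """Fit a 2-component 1-D Gaussian mixture to confidence scores via EM."""
    xs = sorted(xs)
    n = len(xs)
    # Initialize the components from the lower and upper halves of the data.
    mu0, var0 = _mean_var(xs[: n // 2])
    mu1, var1 = _mean_var(xs[n // 2 :])
    mu, var, pi = [mu0, mu1], [var0, var1], [0.5, 0.5]
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each score.
        resp = []
        for x in xs:
            p = [pi[k] * _pdf(x, mu[k], var[k]) for k in range(2)]
            s = p[0] + p[1]
            resp.append([p[0] / s, p[1] / s])
        # M-step: re-estimate weights, means, and variances.
        for k in range(2):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / n
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            var[k] = max(1e-6, sum(r[k] * (x - mu[k]) ** 2
                                   for r, x in zip(resp, xs)) / nk)
    return pi, mu, var

def distri_vote(answers, confidences, keep=0.5):
    """Reject candidates unlikely under the higher-mean ('positive')
    component, then majority-vote among the survivors."""
    pi, mu, var = fit_gmm_1d(confidences)
    pos = 0 if mu[0] > mu[1] else 1
    kept = []
    for a, c in zip(answers, confidences):
        p = [pi[k] * _pdf(c, mu[k], var[k]) for k in range(2)]
        if p[pos] / (p[0] + p[1]) >= keep:
            kept.append(a)
    kept = kept or answers  # fall back to plain voting if all are rejected
    return Counter(kept).most_common(1)[0][0]
```

For example, given candidates ("42", 0.90), ("42", 0.88), ("7", 0.30), ("7", 0.32), ("7", 0.35), plain majority voting returns "7", while the distribution-guided filter rejects the low-confidence cluster and returns "42".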

Key Takeaways

  1. DistriVoting uses Gaussian Mixture Models to decompose confidence distributions into correct and incorrect answer clusters for better voting.

  2. SelfStepConf monitors step-level confidence during inference and triggers self-reflection when confidence declines significantly.

  3. The method achieves consistent improvements across 16 models and 5 benchmarks by explicitly leveraging distributional priors in answer selection.
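The step-level monitoring in the second takeaway can be sketched as follows: track per-step confidence during generation and flag steps where the smoothed confidence drops sharply below its running peak, at which point a caller could issue a self-reflection prompt. The windowed-drop rule, the threshold values, and the function name are illustrative assumptions, not the paper's actual trigger criterion:

```python
def selfstepconf_monitor(step_confs, drop=0.2, window=3):
    """Flag reasoning steps where the windowed mean confidence falls more
    than `drop` below its running peak; a caller could trigger
    self-reflection (e.g. re-examining the last step) at each flagged index."""
    triggers = []
    best = float("-inf")
    for i in range(len(step_confs)):
        recent = step_confs[max(0, i - window + 1) : i + 1]
        avg = sum(recent) / len(recent)
        best = max(best, avg)  # running peak of the smoothed confidence
        if best - avg > drop:
            triggers.append(i)
    return triggers
```

For a confidence trace like [0.9, 0.85, 0.9, 0.5, 0.4], only the final step is flagged; a flat trace produces no triggers.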

Limitations

  • The approach assumes confidence distributions can be modeled as two Gaussian components, which may not hold for all model types.

  • Significant overlap between positive and negative distributions persists even after filtering, potentially introducing false positives.

Keywords

Gaussian Mixture Models, confidence scores, answer selection, distributional priors, reject filter, SelfStepConf, test-time scaling techniques, candidate responses, voting
