
Uncertainty-Aware Gradient Signal-to-Noise Data Selection for Instruction Tuning

Zhihang Yuan, Chengyu Yue, Long Huang, Litu Ou, Lei Shi
arXiv ID: 2601.13697
Published: January 20, 2026
Authors: 5
Hugging Face Likes: 3
Comments: 2

Abstract

Instruction tuning is a standard paradigm for adapting large language models (LLMs), but modern instruction datasets are large, noisy, and redundant, making full-data fine-tuning costly and often unnecessary. Existing data selection methods either build expensive gradient datastores or assign static scores from a weak proxy, largely ignoring evolving uncertainty and thus missing a key source of LLM interpretability. We propose GRADFILTERING, an objective-agnostic, uncertainty-aware data selection framework that uses a small GPT-2 proxy with a LoRA ensemble and aggregates per-example gradients into a Gradient Signal-to-Noise Ratio (G-SNR) utility. Our method matches or surpasses random subsets and strong baselines in most LLM-as-a-judge evaluations as well as in human assessment. Moreover, GRADFILTERING-selected subsets converge faster than competitive filters under the same compute budget, reflecting the benefit of uncertainty-aware scoring.
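As a rough illustration of the G-SNR idea described in the abstract, the sketch below scores one candidate example from per-example gradients collected across a LoRA ensemble and then selects the top-scoring examples. The signal/noise formulation (squared mean gradient over cross-member variance), the function names, and the top-k selection rule are assumptions for illustration; the paper's exact aggregation may differ.

```python
# Minimal sketch, assuming G-SNR is the element-wise ratio of the squared mean
# gradient (signal) to the gradient variance across K LoRA ensemble members
# (noise), averaged over parameters. Not the paper's exact definition.
import torch

def gsnr_score(per_member_grads: list[torch.Tensor], eps: float = 1e-8) -> float:
    """per_member_grads: K flattened LoRA gradient vectors for one example."""
    g = torch.stack(per_member_grads)          # shape (K, d)
    signal = g.mean(dim=0).pow(2)              # squared mean gradient per parameter
    noise = g.var(dim=0, unbiased=False)       # variance across ensemble members
    return (signal / (noise + eps)).mean().item()

def select_top_k(scores: dict[str, float], k: int) -> list[str]:
    """Keep the k examples with the highest G-SNR utility (hypothetical rule)."""
    return sorted(scores, key=scores.get, reverse=True)[:k]
```

In this reading, a high G-SNR marks examples whose gradient direction the ensemble agrees on (useful signal), while low scores flag examples whose gradients are dominated by cross-member disagreement, i.e. uncertainty or noise.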

Keywords

instruction tuning, large language models, data selection, gradient datastore, LoRA ensemble, Gradient Signal-to-Noise Ratio, G-SNR, uncertainty-aware scoring, objective-agnostic, LLM-as-a-judge evaluations
