LK Losses: Direct Acceptance Rate Optimization for Speculative Decoding

Authors: Alexander Samarin, Sergei Krutikov, Anton Shevtsov, Sergei Skvortsov, Filipp Fisin, Alexander Golubev

Published: February 27, 2026

LK losses directly optimize acceptance rate in speculative decoding, improving average acceptance length by up to 8-10%.

Abstract

Speculative decoding accelerates autoregressive large language model (LLM) inference by using a lightweight draft model to propose candidate tokens that are then verified in parallel by the target model. The speedup is largely determined by the acceptance rate, yet standard training minimizes Kullback-Leibler (KL) divergence as a proxy objective. While KL divergence and acceptance rate share the same global optimum, small draft models, having limited capacity, typically converge to suboptimal solutions where minimizing KL does not guarantee maximizing acceptance rate. To address this issue, we propose LK losses, special training objectives that directly target acceptance rate. Comprehensive experiments across four draft architectures and six target models, ranging from 8B to 685B parameters, demonstrate consistent improvements in acceptance metrics across all configurations compared to standard KL-based training. We evaluate our approach on general, coding, and math domains and report gains of up to 8-10% in average acceptance length. LK losses are easy to implement, introduce no computational overhead, and can be directly integrated into any existing speculator training framework, making them a compelling alternative to existing draft-training objectives.
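
As background for the mechanism the abstract describes, the sketch below implements the standard speculative-sampling verification rule from the speculative decoding literature: a drafted token is accepted with probability min(1, p_target/p_draft), and on rejection a corrected token is resampled from the residual distribution. This is a generic illustration under our own naming, not code from the paper.

```python
import numpy as np

def verify_draft_tokens(draft_tokens, p_draft, p_target, rng=None):
    """Generic speculative-sampling verification step (illustrative sketch).

    draft_tokens: length-K list of token ids proposed by the draft model.
    p_draft, p_target: (K, vocab_size) arrays holding the draft / target
        next-token distributions at each drafted position.
    Returns the accepted tokens; on the first rejection a corrected token
    is sampled from the residual distribution and generation stops there.
    """
    rng = rng or np.random.default_rng()
    accepted = []
    for k, tok in enumerate(draft_tokens):
        q, p = p_draft[k, tok], p_target[k, tok]
        # Accept the drafted token with probability min(1, p_target / p_draft).
        if rng.random() < min(1.0, p / max(q, 1e-12)):
            accepted.append(int(tok))
            continue
        # On rejection, resample from the renormalized residual distribution
        # max(0, p_target - p_draft); this keeps the overall output
        # distribution identical to sampling from the target alone.
        residual = np.clip(p_target[k] - p_draft[k], 0.0, None)
        residual /= residual.sum()
        accepted.append(int(rng.choice(len(residual), p=residual)))
        break
    # The "bonus" token the target emits when every draft token is accepted
    # is omitted here for brevity.
    return accepted
```

The more draft tokens survive this check per verification step, the fewer target-model forward passes are needed, which is why acceptance rate, rather than KL divergence per se, governs the speedup.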

Key Takeaways

  1. LK losses directly optimize acceptance rate instead of using KL divergence as a proxy for draft model training (see the sketch after this list).

  2. Draft models with limited capacity converge to suboptimal solutions where minimizing KL divergence does not guarantee maximizing acceptance rate.

  3. LK losses achieve up to 8-10% improvements in average acceptance length at no additional computational cost, across multiple draft architectures and target models.
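
To make the first takeaway concrete: under the standard verification rule, the per-token acceptance probability is the overlap between the two distributions, sum over x of min(p_target(x), p_draft(x)), so one can train against that quantity directly rather than against KL. The sketch below contrasts the usual KL-style proxy with such a direct acceptance-probability objective. It is our illustration of the general idea under assumed names, not the paper's exact LK loss formulation.

```python
import torch
import torch.nn.functional as F

def kl_proxy_loss(draft_logits, target_probs):
    """Standard proxy objective: cross-entropy of draft under the target,
    which equals KL(target || draft) up to a constant independent of the draft."""
    log_q = F.log_softmax(draft_logits, dim=-1)
    return -(target_probs * log_q).sum(-1).mean()

def acceptance_objective(draft_logits, target_probs):
    """Expected per-token acceptance probability, sum_x min(p(x), q(x)).
    Maximizing this targets acceptance rate directly; a generic surrogate
    written for illustration, not necessarily the paper's LK loss."""
    q = F.softmax(draft_logits, dim=-1)
    return torch.minimum(target_probs, q).sum(-1).mean()

# Hypothetical usage inside a draft-training step (names are ours):
# loss = -acceptance_objective(draft_logits, target_probs.detach())
```

Both objectives are maximized when the draft matches the target exactly; they differ in which imperfect solution a capacity-limited draft model is pushed toward, which is the gap the paper exploits.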

Limitations

  • Draft models have severe capacity constraints, typically only 1-5% of target model parameters, limiting their ability to match target distributions.

  • Prior work acknowledged the gap between KL divergence and acceptance rate but never explored direct optimization until this paper.

Keywords

speculative decoding, autoregressive large language model, draft model, candidate tokens, target model, acceptance rate, Kullback-Leibler divergence, LK losses, training objectives, model capacity
