Large Language Models

Latent Thoughts Tuning: Bridging Context and Reasoning with Fused Information in Latent Tokens

Weihao Liu, Dehai Min, Lu Cheng
Published
February 10, 2026
Authors
3
Word Count
8,188
Code
Includes code

LT-Tuning lets language models reason silently in hidden space instead of narrating every thought.

Abstract

While explicit Chain-of-Thought (CoT) equips Large Language Models (LLMs) with strong reasoning capabilities, it requires models to verbalize every intermediate step in text tokens, constraining the model's thoughts to the discrete vocabulary space. Recently, reasoning in continuous latent space has emerged as a promising alternative, enabling more robust inference and flexible computation beyond discrete token constraints. However, current latent paradigms often suffer from feature collapse and instability, stemming from distribution mismatches when recurrently using hidden states as the input embeddings, or from alignment issues when relying on assistant models. To address this, we propose Latent Thoughts Tuning (LT-Tuning), a framework that redefines how latent thoughts are constructed and deployed. Instead of relying solely on raw hidden states, our method introduces a Context-Prediction-Fusion mechanism that jointly leverages contextual hidden states and predictive semantic guidance from the vocabulary embedding space. Combined with a progressive three-stage curriculum learning pipeline, LT-Tuning also enables dynamic switching between latent and explicit thinking modes. Experiments demonstrate that our method outperforms existing latent reasoning baselines, effectively mitigating feature collapse and achieving robust reasoning accuracy.
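The abstract does not spell out the fusion rule, but the core idea (combining the raw contextual hidden state with a prediction-derived signal drawn from the vocabulary embedding space) can be sketched as follows. All names, the probability-weighted embedding, and the convex-combination weight `alpha` are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def context_prediction_fusion(hidden_state, embed_matrix, output_proj, alpha=0.5):
    """Hypothetical sketch of one Context-Prediction-Fusion step.

    Fuses the contextual hidden state with a "predicted" embedding: the
    probability-weighted average of vocabulary embeddings under the model's
    next-token distribution. The fusion rule (a simple convex combination
    with weight `alpha`) is an assumption for illustration only.
    """
    logits = output_proj @ hidden_state            # (V,) next-token logits
    logits -= logits.max()                         # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()  # softmax over the vocabulary
    predicted_embed = embed_matrix.T @ probs       # (d,) expected embedding

    # Fuse context (hidden state) with prediction (vocabulary-space guidance);
    # the result would serve as the next latent "thought" token's input,
    # keeping it close to the input embedding distribution.
    return alpha * hidden_state + (1 - alpha) * predicted_embed

# Toy dimensions: vocabulary size 5, hidden dimension 3
rng = np.random.default_rng(0)
E = rng.normal(size=(5, 3))   # vocabulary embedding matrix
W = rng.normal(size=(5, 3))   # output projection (often weight-tied to E)
h = rng.normal(size=3)        # current contextual hidden state
latent = context_prediction_fusion(h, E, W)
print(latent.shape)           # (3,)
```

Because the predicted component lives in the span of the input embeddings, this kind of fusion plausibly avoids the distribution mismatch the abstract attributes to feeding raw hidden states back as inputs.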

Key Takeaways

  1. Chain-of-thought reasoning forces models to externalize all thoughts as text, which is computationally expensive and constraining.

  2. Previous latent reasoning approaches failed due to distribution mismatch between hidden states and input embedding spaces.

  3. LT-Tuning enables models to reason in continuous hidden space without forcing intermediate thoughts into discrete tokens.

Limitations

  • The Coconut approach suffers from feature collapse and severe performance degradation on larger language models.

  • The Soft-Thinking method discards the contextual history encoded in hidden states, despite maintaining embedding-space compatibility.

Keywords

Chain-of-Thought, Large Language Models, latent space, hidden states, vocabulary embedding space, Context-Prediction-Fusion, curriculum learning, feature collapse, reasoning accuracy
