Latent Chain-of-Thought as Planning: Decoupling Reasoning from Verbalization

Jiecong Wang, Hao Peng, Chunyang Liu
Published: January 29, 2026
Authors: 3
Word Count: 8,531
Code: Includes code

Dynamic reasoning framework boosts problem-solving diversity.

Abstract

Chain-of-Thought (CoT) empowers Large Language Models (LLMs) to tackle complex problems, but remains constrained by computational cost and reasoning path collapse when grounded in discrete token spaces. Recent latent reasoning approaches attempt to improve efficiency by performing reasoning within continuous hidden states. However, these methods typically operate as opaque end-to-end mappings from explicit reasoning steps to latent states, and often require a pre-defined number of latent steps at inference time. In this work, we introduce PLaT (Planning with Latent Thoughts), a framework that reformulates latent reasoning as planning by fundamentally decoupling reasoning from verbalization. We model reasoning as a deterministic trajectory of latent planning states, while a separate Decoder grounds these thoughts into text when necessary. This decoupling allows the model to dynamically determine when to terminate reasoning rather than relying on fixed hyperparameters. Empirical results on mathematical benchmarks reveal a distinct trade-off: while PLaT achieves lower greedy accuracy than baselines, it demonstrates superior scalability in terms of reasoning diversity. This indicates that PLaT learns a robust, broader solution space, offering a transparent and scalable foundation for inference-time search.
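The decoupled loop the abstract describes (deterministic latent transitions, a learned termination decision, and a separate decoder that verbalizes only when needed) can be sketched as follows. Every name, shape, and the toy stop rule below are illustrative assumptions, not the actual PLaT architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
HIDDEN = 16  # assumed latent width, for illustration only

def planner_step(state):
    """Deterministic latent transition (stand-in for the planner)."""
    W = np.eye(HIDDEN) * 0.9  # toy fixed transition matrix
    return np.tanh(W @ state)

def should_stop(state, step, max_steps=32):
    """In PLaT the model decides when to stop; here, a toy norm threshold."""
    return np.linalg.norm(state) < 0.5 or step >= max_steps

def decode(state):
    """Decoder grounds a latent thought into text when necessary (stubbed)."""
    return f"<thought, norm {np.linalg.norm(state):.3f}>"

# Reasoning runs entirely in latent space; verbalization happens once at the end.
state = rng.standard_normal(HIDDEN)
step = 0
while not should_stop(state, step):
    state = planner_step(state)
    step += 1

print(f"stopped after {step} latent steps")
print(decode(state))
```

The point of the sketch is the control flow: the number of latent steps is not a fixed hyperparameter but falls out of a stop decision evaluated on the latent state itself.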

Key Takeaways

  1. PLaT decouples reasoning from verbalization for dynamic problem-solving.

  2. PLaT shows superior scalability in reasoning diversity.

  3. PLaT uses an Exponential Moving Average for temporal memory.
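The third takeaway mentions an Exponential Moving Average as temporal memory. A minimal sketch of how an EMA can fold a sequence of latent thoughts into one running memory state, assuming the standard `decay * memory + (1 - decay) * z` update (the decay value and the exact rule used in the paper are assumptions here):

```python
import numpy as np

def ema_memory(latents, decay=0.9):
    """Blend each new latent thought into a running memory vector."""
    memory = np.zeros_like(latents[0])
    for z in latents:
        # recent thoughts dominate; older ones decay geometrically
        memory = decay * memory + (1.0 - decay) * z
    return memory

rng = np.random.default_rng(0)
thoughts = [rng.standard_normal(8) for _ in range(20)]
memory = ema_memory(thoughts)
print(memory.shape)
```

An EMA keeps memory cost constant regardless of trajectory length, which matches the framework's goal of letting the number of latent steps vary at inference time.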

Limitations

  • PLaT sacrifices some deterministic accuracy for greater diversity.

  • Computational requirements are still significant despite speedup.

Keywords

Chain-of-Thought, Large Language Models, latent reasoning, discrete token spaces, continuous hidden states, deterministic trajectory, latent planning states, inference-time search
