Multimodal AI

Imagination Helps Visual Reasoning, But Not Yet in Latent Space

You Li, Chi Chen, Yanghao Li, Fanhu Zeng, Kaiyu Huang, Jinan Xu, Maosong Sun
Published: February 26, 2026
Authors: 7
Word Count: 7,874

Latent visual reasoning doesn't work as intended; explicit text imagination is more effective.

Abstract

Latent visual reasoning aims to mimic the human imagination process by reasoning through the hidden states of Multimodal Large Language Models. While recognized as a promising paradigm for visual reasoning, the underlying mechanisms driving its effectiveness remain unclear. Motivated to demystify the true source of its efficacy, we investigate the validity of latent reasoning using Causal Mediation Analysis. We model the process as a causal chain: the input as the treatment, the latent tokens as the mediator, and the final answer as the outcome. Our findings uncover two critical disconnections: (a) Input-Latent Disconnect: dramatic perturbations to the input result in negligible changes to the latent tokens, suggesting that latent tokens do not effectively attend to the input sequence. (b) Latent-Answer Disconnect: perturbations to the latent tokens yield minimal impact on the final answer, indicating the limited causal effect latent tokens exert on the outcome. Furthermore, extensive probing analysis reveals that latent tokens encode limited visual information and exhibit high similarity. Consequently, we challenge the necessity of latent reasoning and propose a straightforward alternative named CapImagine, which teaches the model to explicitly imagine using text. Experiments on vision-centric benchmarks show that CapImagine significantly outperforms complex latent-space baselines, highlighting the superior potential of visual reasoning through explicit imagination.
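The two disconnects boil down to a simple perturbation test: change one node in the causal chain and measure how much the downstream node moves. A minimal sketch of that measurement, assuming we can extract the mediator (latent-token hidden states) before and after an input perturbation, the model, states, and scales below are all illustrative, not the paper's actual setup:

```python
import numpy as np

def relative_change(h_base: np.ndarray, h_perturbed: np.ndarray) -> float:
    """Normalized L2 distance between hidden states before and after a
    perturbation. Values near 0 mean the mediator barely responds to the
    treatment, i.e. an Input-Latent Disconnect."""
    return float(
        np.linalg.norm(h_perturbed - h_base)
        / (np.linalg.norm(h_base) + 1e-8)
    )

# Toy illustration with synthetic latent tokens (8 tokens, dim 64).
rng = np.random.default_rng(0)
base = rng.normal(size=(8, 64))
# Latents that track the input shift substantially under perturbation...
responsive = base + rng.normal(scale=1.0, size=base.shape)
# ...while disconnected latents barely move at all.
unresponsive = base + rng.normal(scale=0.01, size=base.shape)
```

The same statistic applied in the other direction (perturb the latent tokens, compare answer logits) would quantify the Latent-Answer Disconnect.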

Key Takeaways

  1. Latent tokens in visual reasoning models show high homogeneity across different inputs, failing to encode task-specific visual information meaningfully.

  2. Causal mediation analysis reveals latent tokens neither respond to input changes nor causally affect final answers, questioning their reasoning role.

  3. Text-space imagination (CapImagine) outperforms latent-space methods by explicitly teaching models to imagine through language rather than hidden states.
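The homogeneity finding in the first takeaway can be probed with a standard statistic: the mean pairwise cosine similarity among latent-token vectors. A value near 1 means the tokens have collapsed into near-identical directions and carry little distinct information. A minimal sketch (the function name and toy data are illustrative, not from the paper):

```python
import numpy as np

def mean_pairwise_cosine(tokens: np.ndarray) -> float:
    """Mean off-diagonal cosine similarity among token vectors
    (rows of `tokens`). Near 1.0 => highly homogeneous tokens."""
    normed = tokens / np.linalg.norm(tokens, axis=1, keepdims=True)
    sims = normed @ normed.T                  # pairwise cosine matrix
    n = tokens.shape[0]
    off_diag = sims[~np.eye(n, dtype=bool)]   # drop self-similarity
    return float(off_diag.mean())

# Toy illustration: 8 latent tokens of dimension 64.
rng = np.random.default_rng(0)
diverse = rng.normal(size=(8, 64))            # independent directions
collapsed = np.ones((8, 64)) + rng.normal(scale=0.01, size=(8, 64))
```

Random high-dimensional vectors give a mean similarity near 0, while the collapsed set sits near 1, the regime the paper reports for latent tokens.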

Limitations

  • Study focuses on specific latent reasoning methods; findings may not generalize to all visual reasoning approaches or model architectures.

  • CapImagine evaluation uses same training data source as Monet baseline, potentially limiting scope of comparative analysis across diverse datasets.

Keywords

Multimodal Large Language Models, causal mediation analysis, latent tokens, visual reasoning, input-latent disconnect, latent-answer disconnect, CapImagine
