Multimodal AI

UniT: Unified Multimodal Chain-of-Thought Test-time Scaling

Leon Liangyu Chen, Haoyu Ma, Zhipeng Fan, Ziqi Huang, Animesh Sinha, Xiaoliang Dai, Jialiang Wang, Zecheng He, Jianwei Yang, Chunyuan Li, Junzhe Sun, Chu Wang, Serena Yeung-Levy, Felix Juefei-Xu
Published: February 12, 2026
Authors: 14
Word count: 9,048

UniT brings chain-of-thought reasoning to multimodal AI through unified test-time scaling.

Abstract

Unified models can handle both multimodal understanding and generation within a single architecture, yet they typically operate in a single pass without iteratively refining their outputs. Many multimodal tasks, especially those involving complex spatial compositions, multiple interacting objects, or evolving instructions, require decomposing instructions, verifying intermediate results, and making iterative corrections. While test-time scaling (TTS) has demonstrated that allocating additional inference compute for iterative reasoning substantially improves language model performance, extending this paradigm to unified multimodal models remains an open challenge. We introduce UniT, a framework for multimodal chain-of-thought test-time scaling that enables a single unified model to reason, verify, and refine across multiple rounds. UniT combines agentic data synthesis, unified model training, and flexible test-time inference to elicit cognitive behaviors including verification, subgoal decomposition, and content memory. Our key findings are: (1) unified models trained on short reasoning trajectories generalize to longer inference chains at test time; (2) sequential chain-of-thought reasoning provides a more scalable and compute-efficient TTS strategy than parallel sampling; (3) training on generation and editing trajectories improves out-of-distribution visual reasoning. These results establish multimodal test-time scaling as an effective paradigm for advancing both generation and understanding in unified models.
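The reason-verify-refine loop described in the abstract can be sketched as a sequential test-time scaling driver: generate once, then alternate vision-language verification with targeted edits, carrying prior attempts forward as content memory. This is a minimal illustrative sketch, not UniT's actual API; the `generate`, `verify`, and `refine` callables are hypothetical stand-ins for the unified model's generation, verification, and editing capabilities.

```python
def sequential_tts(prompt, generate, verify, refine, max_rounds=4):
    """Sequential chain-of-thought test-time scaling (illustrative).

    One initial generation, then up to max_rounds of
    verify-then-refine. Earlier attempts and feedback are kept
    as a simple content memory across rounds.
    """
    image = generate(prompt)          # initial single-pass output
    history = []                      # content memory across rounds
    for _ in range(max_rounds):
        ok, feedback = verify(prompt, image)   # vision-language check
        history.append((image, feedback))
        if ok:                        # verified: stop spending compute
            break
        # Targeted edit conditioned on feedback, not full regeneration
        image = refine(prompt, image, feedback)
    return image, history
```

Because each round conditions on the previous output and its critique, compute scales with the number of sequential refinements rather than with independent parallel samples, which is the trade-off the abstract's second finding compares.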

Key Takeaways

  1. UniT extends test-time scaling from text to multimodal models, enabling iterative visual reasoning and refinement.

  2. The framework integrates image generation, vision-language verification, and editing into a unified system for improved outputs.

  3. Agentic data synthesis automatically generates training data through orchestrated workflows of specialized models working together.

Limitations

  • Current multimodal systems lack iterative refinement capabilities that humans naturally use to improve visual outputs.

  • Specialized models have conflicting training objectives and assumptions, making unified integration architecturally challenging.

Keywords

unified models, multimodal understanding, multimodal generation, test-time scaling, chain-of-thought reasoning, agentic data synthesis, unified model training, test-time inference, cognitive behaviors, visual reasoning
