Robotics & Embodied AI

TOPReward: Token Probabilities as Hidden Zero-Shot Rewards for Robotics

Shirui Chen, Cole Harrison, Ying-Chun Lee, Angela Jin Yang, Zhongzheng Ren, Lillian J. Ratliff, Jiafei Duan, Dieter Fox, Ranjay Krishna
Published February 22, 2026

Abstract

While Vision-Language-Action (VLA) models have seen rapid progress in pretraining, their advancement in Reinforcement Learning (RL) remains hampered by low sample efficiency and sparse rewards in real-world settings. Developing generalizable process reward models is essential for providing the fine-grained feedback necessary to bridge this gap, yet existing temporal value functions often fail to generalize beyond their training domains. We introduce TOPReward, a novel, probabilistically grounded temporal value function that leverages the latent world knowledge of pretrained video Vision-Language Models (VLMs) to estimate robotic task progress. Unlike prior methods that prompt VLMs to directly output progress values, which are prone to numerical misrepresentation, TOPReward extracts task progress directly from the VLM's internal token logits. In zero-shot evaluations across 130+ distinct real-world tasks and multiple robot platforms (e.g., Franka, YAM, SO-100/101), TOPReward achieves 0.947 mean Value-Order Correlation (VOC) on Qwen3-VL, dramatically outperforming the state-of-the-art GVL baseline which achieves near-zero correlation on the same open-source model. We further demonstrate that TOPReward serves as a versatile tool for downstream applications, including success detection and reward-aligned behavior cloning.
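The abstract's core idea is to read task progress out of the model's token probabilities instead of trusting a number the VLM prints in its response. One common way to realize this is to take the logits the model assigns to a set of candidate progress tokens and compute a probability-weighted expectation. The sketch below is illustrative only, assuming a precomputed logit vector over discretized progress bins; the function name, bin layout, and logits are hypothetical and not the paper's implementation.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D logit vector.
    e = np.exp(x - np.max(x))
    return e / e.sum()

def progress_from_logits(bin_logits, bin_values):
    """Expected task progress under the model's token distribution.

    bin_logits: logits the VLM assigns to candidate progress tokens
                (e.g. the tokens "0.0", "0.1", ..., "1.0").
    bin_values: the numeric progress value each candidate token denotes.
    Returns sum_i p_i * v_i, a scalar in the span of bin_values.
    """
    probs = softmax(np.asarray(bin_logits, dtype=float))
    return float(np.dot(probs, np.asarray(bin_values, dtype=float)))

# Toy example: 11 progress bins from 0.0 to 1.0, with probability
# mass peaked at the middle bin (hypothetical logits, not model output).
bins = np.linspace(0.0, 1.0, 11)
logits = np.array([-2, -1, 0, 1, 2, 3, 2, 1, 0, -1, -2], dtype=float)
print(round(progress_from_logits(logits, bins), 3))  # symmetric peak -> 0.5
```

Because the estimate is an expectation over the full distribution rather than a single sampled token, it degrades gracefully when the model is uncertain, which is one plausible reason a logit-based reading would correlate better with ground-truth progress ordering than prompting for a number directly.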

Keywords

Vision-Language-Action models, Reinforcement Learning, temporal value functions, Vision-Language Models, token logits, Value-Order Correlation, reward-aligned behavior cloning
