A Pragmatic VLA Foundation Model

Wei Wu, Fan Lu, Yunnan Wang, Shuai Yang, Shi Liu, Fangjing Wang, Qian Zhu, He Sun, Yong Wang, Shuailei Ma, Yiyu Ren, Kejia Zhang, Hui Yu, Jingmei Zhao, Shuai Zhou, Zhenqi Qiu, Houlong Xiong, Ziyu Wang, Zechen Wang, Ran Cheng, Yong-Lu Li, Yongtao Huang, Xing Zhu, Yujun Shen, Kecheng Zheng
Published: January 26, 2026
Authors: 25
Word count: 13,852

Efficient VLA model for real-world robotic manipulation.

Abstract

A capable Vision-Language-Action (VLA) foundation model holds great potential for robotic manipulation: it is expected to generalize faithfully across tasks and platforms while remaining cost-efficient (e.g., in the data and GPU hours required for adaptation). To this end, we develop LingBot-VLA with around 20,000 hours of real-world data from 9 popular dual-arm robot configurations. In a systematic assessment on 3 robotic platforms, each covering 100 tasks with 130 post-training episodes per task, our model achieves clear superiority over competitors, showcasing its strong performance and broad generalizability. We have also built an efficient codebase that delivers a throughput of 261 samples per second per GPU in an 8-GPU training setup, a 1.5x to 2.8x speedup (depending on the underlying VLM base model) over existing VLA-oriented codebases. These features make our model well-suited for real-world deployment. To advance the field of robot learning, we provide open access to the code, base model, and benchmark data, with a focus on enabling more challenging tasks and promoting sound evaluation standards.
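
To make the efficiency and evaluation figures concrete, the back-of-the-envelope arithmetic below aggregates the per-GPU throughput over the 8-GPU setup, bounds the baseline throughput implied by the quoted speedup range, and tallies the evaluation scale. This is a minimal sketch using only numbers stated in the abstract; the variable names are illustrative and not from the released codebase.

    # Throughput figures quoted in the abstract.
    per_gpu_throughput = 261              # samples / second / GPU
    num_gpus = 8
    speedup_low, speedup_high = 1.5, 2.8  # over existing VLA-oriented codebases

    # Aggregate throughput of the 8-GPU training setup.
    total_throughput = per_gpu_throughput * num_gpus
    print(f"aggregate: {total_throughput} samples/s")  # 2088 samples/s

    # Implied throughput range of the baseline codebases,
    # depending on the underlying VLM base model.
    baseline_low = total_throughput / speedup_high
    baseline_high = total_throughput / speedup_low
    print(f"baselines: {baseline_low:.0f} to {baseline_high:.0f} samples/s")

    # Evaluation scale: 3 platforms, 100 tasks each,
    # 130 post-training episodes per task.
    episodes_per_platform = 100 * 130
    print(f"episodes per platform: {episodes_per_platform}")  # 13,000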

Key Takeaways

  • LingBot-VLA scales well with real-world data.

  • Incorporates depth information for better spatial awareness (see the sketch after this list).

  • Outperforms state-of-the-art baselines on multiple benchmarks.
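
This page does not describe how depth is actually incorporated. The sketch below shows one common way to fuse depth with RGB in a transformer-based backbone: patch-embedding the depth map separately and concatenating its tokens with the RGB tokens. This is a hypothetical PyTorch illustration, not LingBot-VLA's architecture; all module names and dimensions are assumptions.

    import torch
    import torch.nn as nn

    class RGBDTokenizer(nn.Module):
        """Hypothetical RGB-D fusion: separate patch embeddings for RGB and
        depth, concatenated along the token axis. Not the LingBot-VLA design."""

        def __init__(self, embed_dim=768, patch=16):
            super().__init__()
            # Patch embedding for 3-channel RGB frames.
            self.rgb_embed = nn.Conv2d(3, embed_dim, kernel_size=patch, stride=patch)
            # Patch embedding for 1-channel depth maps.
            self.depth_embed = nn.Conv2d(1, embed_dim, kernel_size=patch, stride=patch)
            # Learned embeddings that mark which modality a token came from.
            self.modality = nn.Parameter(torch.zeros(2, embed_dim))

        def forward(self, rgb, depth):
            # rgb: (B, 3, H, W), depth: (B, 1, H, W) -> (B, N_tokens, embed_dim)
            r = self.rgb_embed(rgb).flatten(2).transpose(1, 2) + self.modality[0]
            d = self.depth_embed(depth).flatten(2).transpose(1, 2) + self.modality[1]
            return torch.cat([r, d], dim=1)  # joint token sequence for the VLM

    # Usage on a pair of 224x224 frames:
    tok = RGBDTokenizer()
    tokens = tok(torch.randn(2, 3, 224, 224), torch.randn(2, 1, 224, 224))
    print(tokens.shape)  # torch.Size([2, 392, 768])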

Limitations

  • Requires extensive real-world data for training.

  • Complex architecture may be computationally intensive.

Keywords

Vision-Language-Action, real-world data, robotic manipulation, generalization, efficient codebase, throughput, VLA-oriented codebases
