Multimodal AI

OpenVision 3: A Family of Unified Visual Encoder for Both Understanding and Generation

Letian Zhang, Sucheng Ren, Yanqing Liu, Xianhang Li, Zeyu Wang, Yuyin Zhou, Huaxiu Yao, Zeyu Zheng, Weili Nie, Guilin Liu, Zhiding Yu, Cihang Xie
arXiv ID: 2601.15369
Published: January 21, 2026
Authors: 12

Abstract

This paper presents a family of advanced vision encoders, named OpenVision 3, that learn a single, unified visual representation serving both image understanding and image generation. Our core architecture is simple: we feed VAE-compressed image latents to a ViT encoder and train its output to support two complementary roles. First, the encoder output is passed to the ViT-VAE decoder to reconstruct the original image, encouraging the representation to capture generative structure. Second, the same representation is optimized with contrastive learning and image-captioning objectives, strengthening semantic features. By jointly optimizing reconstruction- and semantics-driven signals in a shared latent space, the encoder learns representations that synergize and generalize well across both regimes. We validate this unified design through extensive downstream evaluations with the encoder frozen. For multimodal understanding, we plug the encoder into the LLaVA-1.5 framework: it performs comparably to a standard CLIP vision encoder (e.g., 62.4 vs 62.2 on SeedBench, and 83.7 vs 82.9 on POPE). For generation, we test it under the RAE framework: ours substantially surpasses the standard CLIP-based encoder (e.g., gFID: 1.89 vs 2.54 on ImageNet). We hope this work can spur future research on unified modeling.
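The abstract outlines a joint training recipe: VAE-compressed image latents pass through a ViT encoder, whose output is (1) decoded by a ViT-VAE decoder for reconstruction and (2) supervised with contrastive and captioning objectives in the same latent space. The minimal PyTorch sketch below illustrates how such a combined objective could be wired together; it is not the authors' implementation, and all module shapes, heads, and loss weights are illustrative assumptions.

# Minimal sketch (not the authors' code) of the joint objective described in the
# abstract: a ViT-style encoder over VAE latents trained with (1) a reconstruction
# loss via a decoder back to the latent space, (2) a CLIP-style contrastive loss
# against paired text embeddings, and (3) a captioning cross-entropy loss.
# All module choices, dimensions, and loss weights are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class UnifiedEncoderLoss(nn.Module):
    def __init__(self, latent_dim=16, embed_dim=512, vocab_size=32000):
        super().__init__()
        # Stand-in for a ViT encoder over VAE latent tokens.
        self.encoder = nn.Sequential(nn.Linear(latent_dim, embed_dim), nn.GELU(),
                                     nn.Linear(embed_dim, embed_dim))
        # Stand-in for the ViT-VAE decoder mapping tokens back to VAE latents.
        self.decoder = nn.Linear(embed_dim, latent_dim)
        # Projection head for the contrastive (semantic) branch.
        self.img_proj = nn.Linear(embed_dim, embed_dim)
        # Stand-in captioning head (a real system would use an autoregressive decoder).
        self.caption_head = nn.Linear(embed_dim, vocab_size)
        self.logit_scale = nn.Parameter(torch.tensor(2.659))  # ~ln(1/0.07), as in CLIP

    def forward(self, vae_latents, text_emb, caption_ids):
        # vae_latents: (B, N, latent_dim) VAE-compressed image tokens
        # text_emb:    (B, embed_dim) paired text embeddings from a text tower
        # caption_ids: (B, N) target caption token ids (one per visual token, for simplicity)
        tokens = self.encoder(vae_latents)                      # (B, N, D)

        # 1) Reconstruction: decode back to the VAE latent space.
        loss_recon = F.mse_loss(self.decoder(tokens), vae_latents)

        # 2) Contrastive: pool image tokens and align with paired text embeddings.
        img_emb = F.normalize(self.img_proj(tokens.mean(dim=1)), dim=-1)
        txt_emb = F.normalize(text_emb, dim=-1)
        logits = self.logit_scale.exp() * img_emb @ txt_emb.t()
        targets = torch.arange(logits.size(0), device=logits.device)
        loss_clip = 0.5 * (F.cross_entropy(logits, targets) +
                           F.cross_entropy(logits.t(), targets))

        # 3) Captioning: predict caption tokens from the visual representation.
        caption_logits = self.caption_head(tokens)              # (B, N, V)
        loss_cap = F.cross_entropy(caption_logits.flatten(0, 1), caption_ids.flatten())

        # Joint objective over the shared latent space (equal weights assumed).
        return loss_recon + loss_clip + loss_cap


if __name__ == "__main__":
    model = UnifiedEncoderLoss()
    loss = model(torch.randn(4, 64, 16), torch.randn(4, 512),
                 torch.randint(0, 32000, (4, 64)))
    loss.backward()
    print(float(loss))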

Keywords

vision encoder, VAE-compressed image latents, ViT encoder, ViT-VAE decoder, contrastive learning, image-captioning objectives, shared latent space, multimodal understanding, LLaVA-1.5 framework, RAE framework, unified modeling
