Adapting Vision-Language Models for E-commerce Understanding at Scale

Matteo Nulli, Vladimir Orshulevich, Tala Bazazo, Christian Herold, Michael Kozielski, Marcin Mazur, Szymon Tuzel, Cees G. M. Snoek, Seyyed Hadi Hashemi, Omar Javed, Yannick Versley, Shahram Khadivi
Published
February 12, 2026
Abstract

E-commerce product understanding by nature demands strong multimodal comprehension across text, images, and structured attributes. General-purpose Vision-Language Models (VLMs) enable generalizable multimodal latent modelling, yet there is no well-documented strategy for adapting them to the attribute-centric, multi-image, and noisy nature of e-commerce data without sacrificing general performance. In this work, we show, through a large-scale experimental study, how targeted adaptation of general VLMs can substantially improve e-commerce performance while preserving broad multimodal capabilities. Furthermore, we propose a novel, extensive evaluation suite covering deep product understanding, strict instruction following, and dynamic attribute extraction.

Keywords

Vision-Language Models, multimodal comprehension, e-commerce data, attribute-centric, multi-image, noisy data, generalizable multimodal latent modelling, targeted adaptation, deep product understanding, strict instruction following, dynamic attribute extraction
