
How Well Do Models Follow Visual Instructions? VIBE: A Systematic Benchmark for Visual Instruction-Driven Image Editing

Huanyu Zhang, Xuehai Bai, Chengzu Li, Chen Liang, Haochen Tian, Haodong Li, Ruichuan An, Yifan Zhang, Anna Korhonen, Zhang Zhang, Liang Wang, Tieniu Tan

Published: February 2, 2026

Abstract

Recent generative models have achieved remarkable progress in image editing. However, existing systems and benchmarks remain largely text-guided. In contrast, human communication is inherently multimodal, where visual instructions such as sketches efficiently convey spatial and structural intent. To address this gap, we introduce VIBE, the Visual Instruction Benchmark for Image Editing with a three-level interaction hierarchy that captures deictic grounding, morphological manipulation, and causal reasoning. Across these levels, we curate high-quality and diverse test cases that reflect progressively increasing complexity in visual instruction following. We further propose a robust LMM-as-a-judge evaluation framework with task-specific metrics to enable scalable and fine-grained assessment. Through a comprehensive evaluation of 17 representative open-source and proprietary image editing models, we find that proprietary models exhibit early-stage visual instruction-following capabilities and consistently outperform open-source models. However, performance degrades markedly with increasing task difficulty even for the strongest systems, highlighting promising directions for future research.
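The abstract does not detail the judge pipeline, but a minimal sketch can make the LMM-as-a-judge idea concrete. The snippet below assumes the OpenAI vision-chat API as the judge backend, a 1-5 adherence rubric, and hypothetical helper names (to_data_url, judge_edit); it scores one edited image against its source and a sketch-style visual instruction. This is an illustrative assumption, not the authors' implementation, and the paper's actual judge model, prompt, and task-specific metrics may differ.

```python
"""Hedged sketch of an LMM-as-a-judge scoring call (illustrative only)."""
import base64

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def to_data_url(path: str) -> str:
    """Encode a local image as a base64 data URL for the vision API."""
    with open(path, "rb") as f:
        return "data:image/png;base64," + base64.b64encode(f.read()).decode()


def judge_edit(source: str, instruction: str, edited: str) -> int:
    """Ask the judge model for a 1-5 instruction-adherence score.

    `source`, `instruction`, and `edited` are paths to the original image,
    the visual instruction (e.g. a sketch), and the edited result.
    """
    prompt = (
        "You are grading an image edit. Given the source image, a visual "
        "instruction (e.g. a sketch marking the intended change), and the "
        "edited result, rate on a 1-5 scale how faithfully the edit follows "
        "the visual instruction. Reply with the integer score only."
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # hypothetical judge model; the paper's may differ
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {"type": "image_url", "image_url": {"url": to_data_url(source)}},
                {"type": "image_url", "image_url": {"url": to_data_url(instruction)}},
                {"type": "image_url", "image_url": {"url": to_data_url(edited)}},
            ],
        }],
    )
    return int(response.choices[0].message.content.strip())
```

In a benchmark setting, a wrapper of this kind would typically be run once per test case and per task-specific metric, with the numeric scores aggregated across the three levels of the interaction hierarchy.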

Keywords

generative models, image editing, visual instruction following, deictic grounding, morphological manipulation, causal reasoning, LMM-as-a-judge evaluation framework, task-specific metrics, visual instruction benchmark
