Computer Vision

Concept-Guided Fine-Tuning: Steering ViTs away from Spurious Correlations to Improve Robustness

Yehonatan Elisha, Oren Barkan, Noam Koenigstein
Published: March 9, 2026
Authors: 3
Word Count: 10,149

Automatic concept-guided fine-tuning steers vision transformers toward semantic reasoning and improves robustness.

Abstract

Vision Transformers (ViTs) often degrade under distribution shifts because they rely on spurious correlations, such as background cues, rather than semantically meaningful features. Existing regularization methods typically rely on simple foreground-background masks, which fail to capture the fine-grained semantic concepts that define an object (e.g., "long beak" and "wings" for a "bird"). As a result, these methods provide limited robustness to distribution shifts. To address this limitation, we introduce a novel fine-tuning framework that steers model reasoning toward concept-level semantics. Our approach optimizes the model's internal relevance maps to align with spatially grounded concept masks. These masks are generated automatically, without manual annotation: class-relevant concepts are first proposed using an LLM-based, label-free method, and then segmented using a VLM. The fine-tuning objective aligns relevance with these concept regions while simultaneously suppressing focus on spurious background areas. Notably, this process requires only a minimal set of images and uses half of the dataset classes. Extensive experiments on five out-of-distribution benchmarks demonstrate that our method improves robustness across multiple ViT-based models. Furthermore, we show that the resulting relevance maps exhibit stronger alignment with semantic object parts, offering a scalable path toward more robust and interpretable vision models. Finally, we confirm that concept-guided masks provide more effective supervision for model robustness than conventional segmentation maps, supporting our central hypothesis.
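The objective described above, concentrating relevance inside concept regions while suppressing it over the background, can be sketched as a simple loss. This is an illustrative reconstruction, not the paper's exact formulation: the function name, the normalization, and the `bg_weight` term are assumptions, and a real implementation would operate on differentiable relevance maps rather than NumPy arrays.

```python
import numpy as np

def concept_alignment_loss(relevance, concept_mask, bg_weight=1.0, eps=1e-8):
    """Illustrative sketch: reward relevance mass inside concept regions,
    penalize relevance mass on the background. Not the paper's exact loss.

    relevance:    non-negative (H, W) relevance map.
    concept_mask: binary (H, W) union of concept masks (1 = concept region).
    """
    r = relevance / (relevance.sum() + eps)   # normalize to a distribution
    inside = (r * concept_mask).sum()          # relevance on concept regions
    outside = (r * (1.0 - concept_mask)).sum() # relevance on background
    # Encourage high mass inside concepts; penalize background focus.
    return -np.log(inside + eps) + bg_weight * outside

# Toy example: a mask covering the left half of a 4x4 map.
mask = np.zeros((4, 4))
mask[:, :2] = 1.0
uniform_relevance = np.ones((4, 4))   # spread over the whole image
aligned_relevance = mask.copy()       # concentrated on the concept region
```

On this toy example, relevance concentrated inside the concept mask yields a lower loss than uniform relevance, which is the direction the fine-tuning objective would push the model's relevance maps.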

Key Takeaways

  1. Vision Transformers rely on spurious correlations like background textures instead of learning semantic object features, causing poor generalization.

  2. Concept-Guided Fine-Tuning (CFT) uses LLMs and vision-language models to automatically identify and localize semantic concepts without manual annotation.

  3. CFT improves robustness on out-of-distribution benchmarks using only 1,500 images and half of ImageNet classes, generalizing to unseen classes.

Limitations

  • Method requires vision-language grounding models like Grounded-SAM, adding computational overhead to the fine-tuning pipeline.

  • The validation thresholds for concept presence (15%) and coverage (20%) are fixed hyperparameters that may not generalize across all domains.
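The fixed thresholds mentioned above amount to a simple filtering rule for keeping or discarding a proposed concept. The sketch below is a guess at that rule from the limitation as stated; the function name, argument names, and the exact interpretation of "presence" and "coverage" are assumptions, not the paper's definitions.

```python
def validate_concept(detection_rate, mask_coverage,
                     presence_thr=0.15, coverage_thr=0.20):
    """Illustrative filter (assumed semantics): keep a concept only if it is
    detected in enough images (presence >= 15%) and its segmented mask
    covers enough area (coverage >= 20%)."""
    return detection_rate >= presence_thr and mask_coverage >= coverage_thr
```

Because both thresholds are fixed, a concept that is valid but rare in a new domain (e.g., detected in 10% of images) would be discarded, which is the generalization concern the limitation raises.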

Keywords

Vision Transformers, distribution shifts, spurious correlations, regularization methods, foreground-background masks, concept-level semantics, relevance maps, concept masks, large language models, vision-language models, fine-tuning framework, out-of-distribution benchmarks, semantic object parts
