Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


A Closer Look at the Explainability of Contrastive Language-Image Pre-training

Yi Li, Hualiang Wang, Yiqun Duan, Jiheng Zhang, Xiaomeng Li

Published: 2023-04-12
Tasks: Interactive Segmentation, Open-Vocabulary Semantic Segmentation, Zero-Shot Segmentation, Segmentation, Semantic Segmentation, Zero-Shot Learning, Language Modelling

Abstract

Contrastive language-image pre-training (CLIP) is a powerful vision-language model that has shown great benefits for various tasks. However, we have identified some issues with its explainability, which undermine its credibility and limit its capacity for related tasks. Specifically, we find that CLIP tends to focus on background regions rather than foregrounds, with noisy activations at irrelevant positions in the visualization results. These phenomena conflict with conventional explainability methods based on the class activation map (CAM), where the raw model can highlight local foreground regions using global supervision without alignment. To address these problems, we take a closer look at its architecture and features. Based on thorough analyses, we find that the raw self-attentions link to inconsistent semantic regions, resulting in the opposite visualization. Moreover, the noisy activations are due to redundant features shared among categories. Building on these insights, we propose CLIP Surgery for reliable CAM, a method that allows surgery-like modifications to the inference architecture and features, without further fine-tuning, as with classical CAM methods. This approach significantly improves the explainability of CLIP, surpassing existing methods by large margins. It also enables multimodal visualization and extends the capacity of raw CLIP to open-vocabulary tasks without extra alignment. The code is available at https://github.com/xmed-lab/CLIP_Surgery.
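The abstract describes the two "surgeries" only at a high level. Below is a minimal, self-contained PyTorch sketch of one plausible reading of the idea: an architecture-side change that replaces query-key self-attention with value-value self-attention over the ViT patch tokens, and a feature-side change that removes the component of the image-text similarity shared across all classes before forming the CAM-like heatmap. The toy tensors, the function names vv_attention and feature_surgery, and the exact formulas are illustrative assumptions and are not taken from the official CLIP_Surgery implementation.

# Minimal sketch (assumption: toy tensors stand in for CLIP ViT patch tokens
# and text embeddings; details may differ from the official code).
import torch
import torch.nn.functional as F

def vv_attention(tokens: torch.Tensor) -> torch.Tensor:
    """Value-value self-attention: attention weights are computed from the
    value projections themselves rather than separate query/key projections,
    so each patch attends to semantically consistent regions (assumption)."""
    v = tokens  # treat the input tokens as the value projections
    attn = torch.softmax(v @ v.transpose(-2, -1) / v.shape[-1] ** 0.5, dim=-1)
    return attn @ v

def feature_surgery(patch_feats: torch.Tensor, text_feats: torch.Tensor) -> torch.Tensor:
    """Similarity map with class-redundant features removed: subtract the part
    of the element-wise image-text product that is shared across all classes."""
    patch_feats = F.normalize(patch_feats, dim=-1)             # (N_patches, D)
    text_feats = F.normalize(text_feats, dim=-1)               # (N_classes, D)
    prod = patch_feats[:, None, :] * text_feats[None, :, :]    # (N_patches, N_classes, D)
    redundant = prod.mean(dim=1, keepdim=True)                 # component shared across classes
    return (prod - redundant).sum(dim=-1)                      # (N_patches, N_classes)

# Toy example: 196 patch tokens (14x14 grid) of dimension 512, 3 text classes.
patches = torch.randn(196, 512)
texts = torch.randn(3, 512)
refined = vv_attention(patches)        # architecture surgery (sketch)
cam = feature_surgery(refined, texts)  # feature surgery (sketch)
heatmap = cam[:, 0].reshape(14, 14)    # CAM-like map for the first class
print(heatmap.shape)

The sketch requires no fine-tuning, matching the abstract's claim that the modifications apply directly at inference time; everything beyond that claim is an assumption for illustration.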

Results

Task | Dataset | Metric | Value | Model
Zero Shot Segmentation | ADE20K training-free zero-shot segmentation | mIoU | 12.9 | CLIPSurgery
Open Vocabulary Semantic Segmentation | COCO-Stuff-171 | mIoU | 21.9 | CLIP Surgery (original CLIP without any fine-tuning)
Open Vocabulary Semantic Segmentation | Cityscapes | mIoU | 31.4 | CLIP Surgery (CLIP without any fine-tuning)
Open Vocabulary Semantic Segmentation | PASCAL Context-59 | mIoU | 29.3 | CLIP Surgery (original CLIP without any fine-tuning)
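For reference, the metric in the table is mean Intersection-over-Union (mIoU): the per-class IoU between predicted and ground-truth masks, averaged over classes. The NumPy sketch below illustrates the computation on toy masks; the mean_iou function and the toy data are illustrative only and are not tied to any particular benchmark's evaluation protocol.

# Minimal mIoU sketch on toy 4x4 masks with 3 classes (illustrative only).
import numpy as np

def mean_iou(pred: np.ndarray, gt: np.ndarray, num_classes: int) -> float:
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:  # skip classes absent from both masks
            ious.append(inter / union)
    return float(np.mean(ious))

pred = np.array([[0, 0, 1, 1],
                 [0, 0, 1, 1],
                 [2, 2, 2, 2],
                 [2, 2, 2, 2]])
gt = np.array([[0, 0, 1, 1],
               [0, 1, 1, 1],
               [2, 2, 2, 2],
               [0, 2, 2, 2]])
print(f"mIoU = {mean_iou(pred, gt, num_classes=3):.3f}")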

Related Papers

SeC: Advancing Complex Video Object Segmentation via Progressive Concept Construction (2025-07-21)
Visual-Language Model Knowledge Distillation Method for Image Quality Assessment (2025-07-21)
Deep Learning-Based Fetal Lung Segmentation from Diffusion-weighted MRI Images and Lung Maturity Evaluation for Fetal Growth Restriction (2025-07-17)
DiffOSeg: Omni Medical Image Segmentation via Multi-Expert Collaboration Diffusion Model (2025-07-17)
From Variability To Accuracy: Conditional Bernoulli Diffusion Models with Consensus-Driven Correction for Thin Structure Segmentation (2025-07-17)
Unleashing Vision Foundation Models for Coronary Artery Segmentation: Parallel ViT-CNN Encoding and Variational Fusion (2025-07-17)
SCORE: Scene Context Matters in Open-Vocabulary Remote Sensing Instance Segmentation (2025-07-17)
Unified Medical Image Segmentation with State Space Modeling Snake (2025-07-17)