
Equivariant Similarity for Vision-Language Foundation Models

Tan Wang, Kevin Lin, Linjie Li, Chung-Ching Lin, Zhengyuan Yang, Hanwang Zhang, Zicheng Liu, Lijuan Wang

2023-03-25 · ICCV 2023
Tasks: Image-Text Retrieval, Text Retrieval, Text Similarity, Visual Reasoning, Retrieval
Links: Paper · PDF · Code (official): https://github.com/Wangt-CN/EqBen

Abstract

This study explores the concept of equivariance in vision-language foundation models (VLMs), focusing on the multimodal similarity function, which is not only the major training objective but also the core capability delivered to downstream tasks. Unlike the existing image-text similarity objective, which only categorizes matched pairs as similar and unmatched pairs as dissimilar, equivariance also requires similarity to vary faithfully with semantic changes. This allows VLMs to generalize better to nuanced and unseen multimodal compositions. However, modeling equivariance is challenging because ground truth for semantic change is difficult to collect. For example, given an image-text pair about a dog, it is unclear to what extent the similarity should change when the image content changes from a dog to a cat. To this end, we propose EqSim, a regularization loss that can be efficiently computed from any two matched training pairs and easily plugged into existing image-text retrieval fine-tuning. To further diagnose the equivariance of VLMs, we also present a new challenging benchmark, EqBen. Compared to existing evaluation sets, EqBen is the first to focus on "visual-minimal change". Extensive experiments show the lack of equivariance in current VLMs and validate the effectiveness of EqSim. Code is available at https://github.com/Wangt-CN/EqBen.
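
The abstract gives only the contract for EqSim: a regularization loss computed from any two matched training pairs, pluggable into retrieval fine-tuning. As a rough illustration of how such a term could look, here is a minimal PyTorch-style sketch; the pairing of similarity gaps, the MSE form, the name eqsim_regularizer, and the weight lam are all assumptions, not the paper's published formulation.

    import torch.nn.functional as F

    def eqsim_regularizer(sim, img1, txt1, img2, txt2):
        # (img1, txt1) and (img2, txt2) are two matched pairs from the same
        # batch; sim(image, text) is the model's learned similarity
        # (assumed interface, not the paper's exact API).
        s11 = sim(img1, txt1)  # pair 1 with its own text
        s22 = sim(img2, txt2)  # pair 2 with its own text
        s12 = sim(img1, txt2)  # pair 1's image with the semantically changed text
        s21 = sim(img2, txt1)  # pair 2's image with pair 1's text
        # Equivariance intuition: the similarity drop caused by swapping txt1
        # for txt2 should mirror the drop from the reverse swap, so the two
        # gaps are encouraged to agree. (Assumed form, not the exact loss.)
        return F.mse_loss(s11 - s12, s22 - s21)

    # Assumed usage inside image-text retrieval fine-tuning:
    #   loss = retrieval_loss + lam * eqsim_regularizer(sim, img1, txt1, img2, txt2)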

Results

All results are for Visual Reasoning on the Winoground dataset.

Model | Text Score | Image Score | Group Score
FIBER (EqSim) | 51.5 | 32 | 27.5
FIBER (finetuned, Flickr30k) | 51.25 | 26.5 | 23
FIBER | 46.25 | 25.75 | 22.25
METER (EqSim) | 45 | 22.75 | 18.75
METER (finetuned, Flickr30k) | 43.5 | 20.75 | 14.75
METER | 39.25 | 15.75 | 12
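
For context on the metrics above: Winoground (Thrush et al., 2022) pairs two images with two captions that contain the same words in different orders, and credits a model on Text, Image, or Group only when it ranks the correct pairings in the corresponding direction(s). A short sketch of this standard scoring, where s(image, caption) and the variable names are placeholders for any similarity function:

    def winoground_scores(examples, s):
        # Each example holds two images (i0, i1) and two swapped-word
        # captions (c0, c1); i0 matches c0 and i1 matches c1.
        text = image = group = 0
        for i0, c0, i1, c1 in examples:
            # Text score: each image ranks its own caption higher.
            t_ok = s(i0, c0) > s(i0, c1) and s(i1, c1) > s(i1, c0)
            # Image score: each caption ranks its own image higher.
            i_ok = s(i0, c0) > s(i1, c0) and s(i1, c1) > s(i0, c1)
            text += t_ok
            image += i_ok
            group += t_ok and i_ok  # Group score: both directions correct.
        n = len(examples)
        return 100 * text / n, 100 * image / n, 100 * group / n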

Related Papers

LaViPlan: Language-Guided Visual Path Planning with RLVR (2025-07-17)
From Roots to Rewards: Dynamic Tree Reasoning with RL (2025-07-17)
HapticCap: A Multimodal Dataset and Task for Understanding User Experience of Vibration Haptic Signals (2025-07-17)
A Survey of Context Engineering for Large Language Models (2025-07-17)
MCoT-RE: Multi-Faceted Chain-of-Thought and Re-Ranking for Training-Free Zero-Shot Composed Image Retrieval (2025-07-17)
Developing Visual Augmented Q&A System using Scalable Vision Embedding Retrieval & Late Interaction Re-ranker (2025-07-16)
Language-Guided Contrastive Audio-Visual Masked Autoencoder with Automatically Generated Audio-Visual-Text Triplets from Videos (2025-07-16)
Context-Aware Search and Retrieval Over Erasure Channels (2025-07-16)