Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Learning Cross-modal Context Graph for Visual Grounding

Yongfei Liu, Bo Wan, Xiaodan Zhu, Xuming He

2019-11-20 · Visual Grounding · Graph Matching
Paper · PDF · Code (official) · Code

Abstract

Visual grounding is a ubiquitous building block in many vision-language tasks, yet it remains challenging due to large variations in the visual and linguistic features of grounding entities, strong context effects, and the resulting semantic ambiguities. Prior works typically focus on learning representations of individual phrases with limited context information. To address these limitations, this paper proposes a language-guided graph representation that captures the global context of grounding entities and their relations, and develops a cross-modal graph matching strategy for the multiple-phrase visual grounding task. In particular, we introduce a modular graph neural network that computes context-aware representations of phrases and object proposals via message propagation, followed by a graph-based matching module that generates globally consistent localizations of the grounding phrases. We train the entire graph neural network jointly with a two-stage strategy and evaluate it on the Flickr30K Entities benchmark. Extensive experiments show that our method outperforms the prior state of the art by a sizable margin, demonstrating the efficacy of our grounding framework. Code is available at "https://github.com/youngfly11/LCMCG-PyTorch".
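The pipeline the abstract describes can be illustrated with a minimal sketch: context-aware node features are computed by message propagation over a phrase graph and a proposal graph, then phrases are matched to proposals by cross-modal similarity. This is a toy numpy illustration of the general idea, not the authors' implementation (their code uses learned modular GNN layers and a structured matching module; the function names, mean-aggregation rule, and cosine scoring here are illustrative assumptions).

```python
import numpy as np

def propagate(features, adj, steps=1):
    """Toy message propagation: blend each node with the mean of its
    neighbors' features. features: (N, D); adj: (N, N) 0/1 adjacency."""
    h = features.copy()
    for _ in range(steps):
        deg = adj.sum(axis=1, keepdims=True).clip(min=1)
        msgs = (adj @ h) / deg        # mean of neighbor features
        h = 0.5 * (h + msgs)          # mix self and graph context
    return h

def match(phrase_h, proposal_h):
    """Toy cross-modal matching: cosine similarity, then for each
    phrase pick the highest-scoring proposal index."""
    p = phrase_h / np.linalg.norm(phrase_h, axis=1, keepdims=True)
    q = proposal_h / np.linalg.norm(proposal_h, axis=1, keepdims=True)
    sim = p @ q.T                     # (num_phrases, num_proposals)
    return sim.argmax(axis=1)

# Toy example: 2 phrases, 3 object proposals, 4-dim features.
rng = np.random.default_rng(0)
phrases = rng.normal(size=(2, 4))
proposals = rng.normal(size=(3, 4))
phrase_adj = np.array([[0, 1], [1, 0]])       # phrases are related
prop_adj = np.ones((3, 3)) - np.eye(3)        # fully connected proposals

ph = propagate(phrases, phrase_adj)
pr = propagate(proposals, prop_adj)
assignment = match(ph, pr)  # one proposal index per phrase
```

In the actual method, the propagation and matching steps are learned jointly end-to-end, and the matching enforces global consistency across all phrases rather than independent per-phrase argmax decisions.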

Results

Task              Dataset                  Metric  Value  Model
Phrase Grounding  Flickr30k Entities Test  R@1     76.74  LCMCG

Related Papers

- ViewSRD: 3D Visual Grounding via Structured Multi-View Decomposition (2025-07-15)
- VisualTrap: A Stealthy Backdoor Attack on GUI Agents via Visual Grounding Manipulation (2025-07-09)
- A Neural Representation Framework with LLM-Driven Spatial Reasoning for Open-Vocabulary 3D Visual Grounding (2025-07-09)
- High-Resolution Visual Reasoning via Multi-Turn Grounding-Based Reinforcement Learning (2025-07-08)
- GTA1: GUI Test-time Scaling Agent (2025-07-08)
- DenseWorld-1M: Towards Detailed Dense Grounded Caption in the Real World (2025-06-30)
- SPAZER: Spatial-Semantic Progressive Reasoning Agent for Zero-shot 3D Visual Grounding (2025-06-27)
- GroundFlow: A Plug-in Module for Temporal Reasoning on 3D Point Cloud Sequential Grounding (2025-06-26)