
Multi-task Collaborative Network for Joint Referring Expression Comprehension and Segmentation

Gen Luo, Yiyi Zhou, Xiaoshuai Sun, Liujuan Cao, Chenglin Wu, Cheng Deng, Rongrong Ji

2020-03-19 · CVPR 2020
Tasks: Referring Expression · Generalized Referring Expression Comprehension · Referring Expression Comprehension · Referring Expression Segmentation
Links: Paper · PDF · Code (official) · Code

Abstract

Referring expression comprehension (REC) and segmentation (RES) are two highly related tasks, which both aim at identifying the referent according to a natural language expression. In this paper, we propose a novel Multi-task Collaborative Network (MCN) to achieve joint learning of REC and RES for the first time. In MCN, RES can help REC to achieve better language-vision alignment, while REC can help RES to better locate the referent. In addition, we address a key challenge in this multi-task setup, i.e., the prediction conflict, with two innovative designs, namely Consistency Energy Maximization (CEM) and Adaptive Soft Non-Located Suppression (ASNLS). Specifically, CEM enables REC and RES to focus on similar visual regions by maximizing the consistency energy between the two tasks. ASNLS suppresses the response of unrelated regions in RES based on the prediction of REC. To validate our model, we conduct extensive experiments on three benchmark datasets of REC and RES, i.e., RefCOCO, RefCOCO+, and RefCOCOg. The experimental results show significant performance gains of MCN over all existing methods, up to +7.13% for REC and +11.50% for RES over the previous state of the art, confirming the validity of our model for joint REC and RES learning.
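
The abstract's two designs lend themselves to a short illustration. Below is a minimal PyTorch sketch, not the authors' implementation: consistency_energy measures agreement between the two branches' spatial response maps via cosine similarity (a stand-in for the paper's consistency energy), and asnls softly down-weights segmentation responses outside the REC-predicted box. All names, tensor shapes, and the fixed decay factor are illustrative assumptions.

```python
import torch

def consistency_energy(rec_map: torch.Tensor, res_map: torch.Tensor,
                       eps: float = 1e-8) -> torch.Tensor:
    # Flatten each branch's spatial response map to one vector per sample.
    rec = rec_map.flatten(1)
    res = res_map.flatten(1)
    # Cosine similarity acts as the "energy": maximizing it pushes
    # both branches to respond to the same image regions.
    rec = rec / (rec.norm(dim=1, keepdim=True) + eps)
    res = res / (res.norm(dim=1, keepdim=True) + eps)
    return (rec * res).sum(dim=1).mean()

def asnls(seg_logits: torch.Tensor, boxes: torch.Tensor,
          decay: float = 0.1) -> torch.Tensor:
    # seg_logits: (B, H, W) RES responses; boxes: (B, 4) REC boxes
    # as (x1, y1, x2, y2) in pixel coordinates of the H x W grid.
    B, H, W = seg_logits.shape
    ys = torch.arange(H, device=seg_logits.device).view(1, H, 1)
    xs = torch.arange(W, device=seg_logits.device).view(1, 1, W)
    x1, y1, x2, y2 = (boxes[:, i].view(-1, 1, 1) for i in range(4))
    inside = (xs >= x1) & (xs <= x2) & (ys >= y1) & (ys <= y2)
    # Soft suppression: responses outside the located box are scaled
    # down by `decay` rather than zeroed out hard.
    weight = torch.where(inside, torch.ones_like(seg_logits),
                         torch.full_like(seg_logits, decay))
    return seg_logits * weight

# Toy usage: the energy is maximized by minimizing its negative.
rec_map = torch.rand(2, 13, 13)          # REC branch spatial confidence
res_map = torch.rand(2, 13, 13)          # RES branch mask responses
loss_cem = -consistency_energy(rec_map, res_map)

boxes = torch.tensor([[10.0, 10.0, 60.0, 80.0],
                      [ 0.0,  0.0, 50.0, 50.0]])
refined = asnls(torch.rand(2, 104, 104), boxes)
```

In a training loop, the negated energy would presumably be added as an auxiliary loss term, while the suppression step belongs at inference, before binarizing the mask.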

Results

Task                                           | Dataset  | Metric                    | Value | Model
Generalized Referring Expression Comprehension | gRefCOCO | N-acc.                    | 30.6  | MCN
Generalized Referring Expression Comprehension | gRefCOCO | Precision@(F1=1, IoU≥0.5) | 28    | MCN
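
For context on the metrics above: in the GREC setting, Precision@(F1=1, IoU≥0.5) counts a sample as correct only when the predicted boxes match the ground-truth boxes one-to-one at IoU ≥ 0.5 (i.e., F1 = 1), and N-acc. is accuracy on no-target expressions. Below is a hedged pure-Python sketch of that per-sample decision; the helper names are illustrative, and the greedy matcher stands in for whatever matching the official evaluator uses.

```python
def box_iou(a, b):
    """IoU of two boxes in (x1, y1, x2, y2) format."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def sample_correct(preds, gts, thr=0.5):
    """True iff F1 == 1 at IoU >= thr: every prediction matches a
    distinct ground-truth box and no ground-truth box is missed."""
    if not gts:               # no-target sample: correct iff nothing predicted
        return not preds
    unmatched = list(gts)
    for p in preds:
        hit = next((g for g in unmatched if box_iou(p, g) >= thr), None)
        if hit is None:       # a false positive breaks F1 == 1
            return False
        unmatched.remove(hit)
    return not unmatched      # leftover GT boxes are false negatives

# Example: one predicted box matching the single ground truth.
print(sample_correct([(10, 10, 50, 50)], [(12, 8, 48, 52)]))  # True
```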

Related Papers

DeRIS: Decoupling Perception and Cognition for Enhanced Referring Image Segmentation through Loopback Synergy (2025-07-02)
Mask-aware Text-to-Image Retrieval: Referring Expression Segmentation Meets Cross-modal Retrieval (2025-06-28)
Detecting Referring Expressions in Visually Grounded Dialogue with Autoregressive Language Models (2025-06-26)
Referring Expression Instance Retrieval and A Strong End-to-End Baseline (2025-06-23)
Gondola: Grounded Vision Language Planning for Generalizable Robotic Manipulation (2025-06-12)
Synthetic Visual Genome (2025-06-09)
From Objects to Anywhere: A Holistic Benchmark for Multi-level Visual Grounding in 3D Scenes (2025-06-05)
Refer to Anything with Vision-Language Prompts (2025-06-05)