Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.

Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.

MAttNet: Modular Attention Network for Referring Expression Comprehension

Licheng Yu, Zhe Lin, Xiaohui Shen, Jimei Yang, Xin Lu, Mohit Bansal, Tamara L. Berg

2018-01-24 · CVPR 2018
Tasks: Referring Expression, Referring Expression Comprehension, Generalized Referring Expression Segmentation, Referring Expression Segmentation
Paper · PDF · Code (official)

Abstract

In this paper, we address referring expression comprehension: localizing an image region described by a natural language expression. While most recent work treats expressions as a single unit, we propose to decompose them into three modular components related to subject appearance, location, and relationship to other objects. This allows us to flexibly adapt to expressions containing different types of information in an end-to-end framework. In our model, which we call the Modular Attention Network (MAttNet), two types of attention are utilized: language-based attention that learns the module weights as well as the word/phrase attention that each module should focus on; and visual attention that allows the subject and relationship modules to focus on relevant image components. Module weights combine scores from all three modules dynamically to output an overall score. Experiments show that MAttNet outperforms previous state-of-the-art methods by a large margin on both bounding-box-level and pixel-level comprehension tasks. Demo and code are provided.
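
To make the scoring scheme concrete, here is a minimal sketch of the dynamic combination the abstract describes: language-based attention predicts one weight per module, and the overall score is the weighted sum of the three module scores. The class name, dimensions, and module internals are illustrative assumptions, not the authors' official implementation (see their released code for that).

```python
import torch
import torch.nn as nn

class ModularScorer(nn.Module):
    """Sketch of MAttNet-style score combination (hypothetical names/dims)."""

    def __init__(self, lang_dim=512):
        super().__init__()
        # Language-based attention head: predicts one weight per module
        # (subject, location, relationship) from the expression embedding.
        self.weight_fc = nn.Linear(lang_dim, 3)

    def forward(self, lang_emb, subj_score, loc_score, rel_score):
        # lang_emb: (batch, lang_dim) pooled embedding of the expression.
        # *_score: (batch,) matching score of each module for a candidate region.
        weights = torch.softmax(self.weight_fc(lang_emb), dim=-1)  # (batch, 3)
        scores = torch.stack([subj_score, loc_score, rel_score], dim=-1)
        # Overall score = dynamically weighted sum of the module scores,
        # so expressions emphasizing e.g. location shift weight to that module.
        return (weights * scores).sum(dim=-1)  # (batch,)

# Usage: score four candidate regions for a batch of four expressions.
scorer = ModularScorer()
overall = scorer(torch.randn(4, 512), torch.randn(4), torch.randn(4), torch.randn(4))
```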

Results

Task | Dataset | Metric | Value | Model
Instance Segmentation | RefCOCO val | Overall IoU | 56.51 | MAttNet
Instance Segmentation | RefCOCO+ val | Overall IoU | 46.67 | MAttNet
Instance Segmentation | RefCOCO+ testB | Overall IoU | 40.08 | MAttNet
Instance Segmentation | RefCOCO+ testA | Overall IoU | 52.39 | MAttNet
Instance Segmentation | gRefCOCO | cIoU | 47.51 | MAttNet
Instance Segmentation | gRefCOCO | gIoU | 48.24 | MAttNet
Referring Expression Segmentation | RefCOCO val | Overall IoU | 56.51 | MAttNet
Referring Expression Segmentation | RefCOCO+ val | Overall IoU | 46.67 | MAttNet
Referring Expression Segmentation | RefCOCO+ testB | Overall IoU | 40.08 | MAttNet
Referring Expression Segmentation | RefCOCO+ testA | Overall IoU | 52.39 | MAttNet
Referring Expression Segmentation | gRefCOCO | cIoU | 47.51 | MAttNet
Referring Expression Segmentation | gRefCOCO | gIoU | 48.24 | MAttNet
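
For reference, the aggregate metrics in the table are conventionally computed as below: Overall IoU (cIoU) sums intersections and unions over the whole split before dividing, while gIoU averages per-image IoU. This is a small sketch under those standard definitions, not the exact evaluation script behind these numbers.

```python
import numpy as np

def overall_iou(intersections, unions):
    # Cumulative ("overall") IoU: divide total intersection area by total
    # union area across the whole evaluation split.
    return float(np.sum(intersections) / np.sum(unions))

def mean_iou(intersections, unions):
    # Per-image IoU averaged over the split (the usual reading of gIoU
    # in the gRefCOCO rows above).
    ious = np.asarray(intersections, dtype=float) / np.asarray(unions, dtype=float)
    return float(np.mean(ious))
```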

Related Papers

DeRIS: Decoupling Perception and Cognition for Enhanced Referring Image Segmentation through Loopback Synergy (2025-07-02)
Mask-aware Text-to-Image Retrieval: Referring Expression Segmentation Meets Cross-modal Retrieval (2025-06-28)
Detecting Referring Expressions in Visually Grounded Dialogue with Autoregressive Language Models (2025-06-26)
Referring Expression Instance Retrieval and A Strong End-to-End Baseline (2025-06-23)
Gondola: Grounded Vision Language Planning for Generalizable Robotic Manipulation (2025-06-12)
Synthetic Visual Genome (2025-06-09)
From Objects to Anywhere: A Holistic Benchmark for Multi-level Visual Grounding in 3D Scenes (2025-06-05)
Refer to Anything with Vision-Language Prompts (2025-06-05)