Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


LAVT: Language-Aware Vision Transformer for Referring Image Segmentation

Zhao Yang, Jiaqi Wang, Yansong Tang, Kai Chen, Hengshuang Zhao, Philip H. S. Torr

Published: 2021-12-04 · CVPR 2022
Tasks: Referring Expression · Generalized Referring Expression Segmentation · Referring Expression Segmentation · Semantic Segmentation · Image Segmentation
Links: Paper · PDF · Code (official)

Abstract

Referring image segmentation is a fundamental vision-language task that aims to segment out an object referred to by a natural language expression from an image. One of the key challenges behind this task is leveraging the referring expression for highlighting relevant positions in the image. A paradigm for tackling this problem is to leverage a powerful vision-language ("cross-modal") decoder to fuse features independently extracted from a vision encoder and a language encoder. Recent methods have made remarkable advancements in this paradigm by exploiting Transformers as cross-modal decoders, concurrent to the Transformer's overwhelming success in many other vision-language tasks. Adopting a different approach in this work, we show that significantly better cross-modal alignments can be achieved through the early fusion of linguistic and visual features in intermediate layers of a vision Transformer encoder network. By conducting cross-modal feature fusion in the visual feature encoding stage, we can leverage the well-proven correlation modeling power of a Transformer encoder for excavating helpful multi-modal context. This way, accurate segmentation results are readily harvested with a light-weight mask predictor. Without bells and whistles, our method surpasses the previous state-of-the-art methods on RefCOCO, RefCOCO+, and G-Ref by large margins.
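The core idea in the abstract — injecting language features into intermediate layers of the vision encoder rather than fusing only in a decoder — can be illustrated with a minimal sketch. This is a simplified stand-in, not LAVT's actual module: the paper's pixel-word attention uses learned projections at each Swin stage, whereas the toy function below attends raw visual tokens over raw word embeddings and gates the gathered language context back in with a residual connection.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def language_aware_fusion(visual, words):
    """Fuse word features into flattened visual features (sketch).

    visual: (H*W, C) visual tokens from an intermediate encoder stage.
    words:  (T, C) word embeddings of the referring expression.
    Each pixel attends over the words, gathers a per-pixel language
    context, and adds it back through an element-wise sigmoid gate.
    """
    c = visual.shape[1]
    attn = softmax(visual @ words.T / np.sqrt(c))   # (H*W, T) pixel-word weights
    lang_ctx = attn @ words                         # (H*W, C) language context
    gate = 1.0 / (1.0 + np.exp(-lang_ctx))          # element-wise gate
    return visual + gate * lang_ctx                 # residual early fusion

# toy example: a 4x4 feature map (16 tokens), a 5-word expression, C = 8
rng = np.random.default_rng(0)
vis = rng.standard_normal((16, 8))
txt = rng.standard_normal((5, 8))
fused = language_aware_fusion(vis, txt)
print(fused.shape)  # (16, 8)
```

In LAVT this kind of fusion happens after each encoder stage, so later stages already model joint vision-language correlations — which is why a lightweight mask predictor suffices at the end.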

Results

| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Instance Segmentation | RefCOCOg-test | Overall IoU | 62.09 | LAVT (Swin-B) |
| Instance Segmentation | RefCOCO+ val | Overall IoU | 62.14 | LAVT |
| Instance Segmentation | RefCOCO+ testB | Overall IoU | 55.10 | LAVT |
| Instance Segmentation | RefCOCO+ testA | Overall IoU | 68.38 | LAVT |
| Instance Segmentation | RefCOCOg-val | Overall IoU | 61.24 | LAVT |
| Instance Segmentation | gRefCOCO | cIoU | 57.64 | LAVT |
| Instance Segmentation | gRefCOCO | gIoU | 58.40 | LAVT |
| Referring Expression Segmentation | RefCOCOg-test | Overall IoU | 62.09 | LAVT (Swin-B) |
| Referring Expression Segmentation | RefCOCO+ val | Overall IoU | 62.14 | LAVT |
| Referring Expression Segmentation | RefCOCO+ testB | Overall IoU | 55.10 | LAVT |
| Referring Expression Segmentation | RefCOCO+ testA | Overall IoU | 68.38 | LAVT |
| Referring Expression Segmentation | RefCOCOg-val | Overall IoU | 61.24 | LAVT |
| Referring Expression Segmentation | gRefCOCO | cIoU | 57.64 | LAVT |
| Referring Expression Segmentation | gRefCOCO | gIoU | 58.40 | LAVT |

Related Papers

- SeC: Advancing Complex Video Object Segmentation via Progressive Concept Construction (2025-07-21)
- DiffOSeg: Omni Medical Image Segmentation via Multi-Expert Collaboration Diffusion Model (2025-07-17)
- SCORE: Scene Context Matters in Open-Vocabulary Remote Sensing Instance Segmentation (2025-07-17)
- Unified Medical Image Segmentation with State Space Modeling Snake (2025-07-17)
- A Privacy-Preserving Semantic-Segmentation Method Using Domain-Adaptation Technique (2025-07-17)
- SAMST: A Transformer framework based on SAM pseudo label filtering for remote sensing semi-supervised semantic segmentation (2025-07-16)
- Tomato Multi-Angle Multi-Pose Dataset for Fine-Grained Phenotyping (2025-07-15)
- U-RWKV: Lightweight medical image segmentation with direction-adaptive RWKV (2025-07-15)