
Language as Queries for Referring Video Object Segmentation

Jiannan Wu, Yi Jiang, Peize Sun, Zehuan Yuan, Ping Luo

2022-01-03 · CVPR 2022
Tasks: Referring Video Object Segmentation · Referring Expression Segmentation · Semantic Segmentation · Video Object Segmentation · Object Tracking · Video Semantic Segmentation · Video Instance Segmentation

Paper · PDF · Code (official)

Abstract

Referring video object segmentation (R-VOS) is an emerging cross-modal task that aims to segment the target object referred to by a language expression in all video frames. In this work, we propose a simple and unified framework built upon Transformer, termed ReferFormer. It views the language as queries and directly attends to the most relevant regions in the video frames. Concretely, we introduce a small set of object queries conditioned on the language as the input to the Transformer. In this manner, all the queries are obligated to find the referred object only. They are eventually transformed into dynamic kernels which capture crucial object-level information and play the role of convolution filters to generate the segmentation masks from feature maps. Object tracking is achieved naturally by linking the corresponding queries across frames. This mechanism greatly simplifies the pipeline, and the end-to-end framework is significantly different from previous methods. Extensive experiments on Ref-Youtube-VOS, Ref-DAVIS17, A2D-Sentences and JHMDB-Sentences show the effectiveness of ReferFormer. On Ref-Youtube-VOS, ReferFormer achieves 55.6 J&F with a ResNet-50 backbone without bells and whistles, exceeding the previous state-of-the-art performance by 8.4 points. In addition, with the strong Swin-Large backbone, ReferFormer achieves the best J&F of 64.2 among all existing methods. Moreover, we show impressive results of 55.0 mAP and 43.7 mAP on A2D-Sentences and JHMDB-Sentences respectively, which outperform the previous methods by a large margin. Code is publicly available at https://github.com/wjn922/ReferFormer.
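The mechanism the abstract describes — language-conditioned object queries decoded by a Transformer into dynamic kernels that act as 1x1 convolution filters over frame features — can be illustrated with a short PyTorch sketch. Everything below (module names, dimensions, the single pooled sentence feature) is an illustrative assumption, not the paper's implementation; see the official repository at https://github.com/wjn922/ReferFormer for the actual code.

```python
# Minimal sketch of language-as-queries -> dynamic-kernel segmentation.
# Hypothetical shapes and modules; the official ReferFormer differs in detail.
import torch
import torch.nn as nn

class LanguageQuerySegHead(nn.Module):
    def __init__(self, num_queries=5, d_model=256, kernel_dim=8):
        super().__init__()
        # Learnable query embeddings; each will be conditioned on the sentence feature.
        self.query_embed = nn.Embedding(num_queries, d_model)
        decoder_layer = nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(decoder_layer, num_layers=3)
        # Each decoded query is mapped to the weights of a per-query 1x1 conv kernel.
        self.kernel_fc = nn.Linear(d_model, kernel_dim)
        self.mask_proj = nn.Conv2d(d_model, kernel_dim, kernel_size=1)

    def forward(self, frame_feat, sent_feat):
        # frame_feat: (B, d_model, H, W) visual features for one frame
        # sent_feat:  (B, d_model) pooled language feature
        B, C, H, W = frame_feat.shape
        # Condition every query on the language, so all queries are obligated
        # to look for the referred object only.
        queries = self.query_embed.weight.unsqueeze(0) + sent_feat.unsqueeze(1)  # (B, N, C)
        memory = frame_feat.flatten(2).transpose(1, 2)   # (B, H*W, C)
        decoded = self.decoder(queries, memory)          # (B, N, C)
        kernels = self.kernel_fc(decoded)                # (B, N, kernel_dim)
        mask_feat = self.mask_proj(frame_feat)           # (B, kernel_dim, H, W)
        # Dynamic kernels act as convolution filters over the mask features.
        masks = torch.einsum('bnk,bkhw->bnhw', kernels, mask_feat)
        return masks.sigmoid()                           # (B, N, H, W)

# Usage: masks = LanguageQuerySegHead()(torch.randn(2, 256, 24, 40), torch.randn(2, 256))
```

In the full model, predictions sharing a query index are linked across frames, which is how tracking falls out of the design without a separate association module.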

Results

Video Object Segmentation

Dataset | Metric | Value | Model
ReVOS | J | 26.2 | ReferFormer (Video-Swin-B)
ReVOS | F | 29.9 | ReferFormer (Video-Swin-B)
ReVOS | J&F | 28.1 | ReferFormer (Video-Swin-B)
ReVOS | R | 8.8 | ReferFormer (Video-Swin-B)
MeViS | J | 29.8 | ReferFormer
MeViS | F | 32.2 | ReferFormer
MeViS | J&F | 31.0 | ReferFormer
Refer-YouTube-VOS | J | 61.3 | ReferFormer (Large)
Refer-YouTube-VOS | F | 64.6 | ReferFormer (Large)
Refer-YouTube-VOS | J&F | 62.9 | ReferFormer (Large)
Ref-DAVIS17 | J | 58.1 | ReferFormer
Ref-DAVIS17 | F | 64.1 | ReferFormer
Ref-DAVIS17 | J&F | 61.1 | ReferFormer

Referring Expression Segmentation / Instance Segmentation

Dataset | Metric | Value | Model
Refer-YouTube-VOS (2021 public validation) | J | 56.1 | ReferFormer (ResNet-101)
Refer-YouTube-VOS (2021 public validation) | F | 58.4 | ReferFormer (ResNet-101)
Refer-YouTube-VOS (2021 public validation) | J&F | 57.3 | ReferFormer (ResNet-101)
Refer-YouTube-VOS (2021 public validation) | J | 54.8 | ReferFormer (ResNet-50)
Refer-YouTube-VOS (2021 public validation) | F | 56.6 | ReferFormer (ResNet-50)
Refer-YouTube-VOS (2021 public validation) | J&F | 55.6 | ReferFormer (ResNet-50)
A2D Sentences | AP | 0.55 | ReferFormer (Video-Swin-B)
A2D Sentences | Mean IoU | 0.703 | ReferFormer (Video-Swin-B)
A2D Sentences | Overall IoU | 0.786 | ReferFormer (Video-Swin-B)
A2D Sentences | Precision@0.5 | 0.831 | ReferFormer (Video-Swin-B)
A2D Sentences | Precision@0.6 | 0.804 | ReferFormer (Video-Swin-B)
A2D Sentences | Precision@0.7 | 0.741 | ReferFormer (Video-Swin-B)
A2D Sentences | Precision@0.8 | 0.579 | ReferFormer (Video-Swin-B)
A2D Sentences | Precision@0.9 | 0.212 | ReferFormer (Video-Swin-B)
DAVIS 2017 (val) | J&F (1st frame) | 61.1 | ReferFormer
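For reference, J is region similarity (the IoU between predicted and ground-truth masks), F is contour accuracy, J&F is the mean of the two, and Precision@K is the fraction of samples whose mask IoU exceeds the threshold K. Below is a minimal Python sketch of the IoU-based quantities; F is omitted since contour accuracy requires a boundary-matching step, and the function names are illustrative rather than taken from an official evaluation toolkit.

```python
import numpy as np

def iou(pred, gt):
    """Region similarity J: IoU between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union > 0 else 1.0

def precision_at(preds, gts, thresh=0.5):
    """Precision@K: share of (pred, gt) pairs with IoU >= thresh."""
    scores = [iou(p, g) for p, g in zip(preds, gts)]
    return float(np.mean([s >= thresh for s in scores]))

# J&F is simply the average of the mean J and mean F scores:
# jf = 0.5 * (mean_j + mean_f)
```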

Related Papers

SeC: Advancing Complex Video Object Segmentation via Progressive Concept Construction (2025-07-21)
DiffOSeg: Omni Medical Image Segmentation via Multi-Expert Collaboration Diffusion Model (2025-07-17)
SCORE: Scene Context Matters in Open-Vocabulary Remote Sensing Instance Segmentation (2025-07-17)
Unified Medical Image Segmentation with State Space Modeling Snake (2025-07-17)
A Privacy-Preserving Semantic-Segmentation Method Using Domain-Adaptation Technique (2025-07-17)
MVA 2025 Small Multi-Object Tracking for Spotting Birds Challenge: Dataset, Methods, and Results (2025-07-17)
SAMST: A Transformer framework based on SAM pseudo label filtering for remote sensing semi-supervised semantic segmentation (2025-07-16)
YOLOv8-SMOT: An Efficient and Robust Framework for Real-Time Small Object Tracking via Slice-Assisted Training and Adaptive Association (2025-07-16)