
Expression Prompt Collaboration Transformer for Universal Referring Video Object Segmentation

Jiajun Chen, Jiacheng Lin, Guojin Zhong, Haolong Fu, Ke Nai, Kailun Yang, Zhiyong Li

Published: 2023-08-08
Tasks: Referring Video Object Segmentation, Referring Expression Segmentation, Segmentation, Semantic Segmentation, Video Object Segmentation, Contrastive Learning, Video Semantic Segmentation

Abstract

Audio-guided Video Object Segmentation (A-VOS) and Referring Video Object Segmentation (R-VOS) are two closely related tasks that both aim to segment specific objects from video sequences according to expression prompts. However, due to the challenges of modeling representations for different modalities, existing methods struggle to strike a balance between interaction flexibility and localization precision. In this paper, we address this problem from two perspectives: the alignment of audio and text, and deep interaction among the audio, text, and visual modalities. First, we propose a universal architecture, the Expression Prompt Collaboration Transformer, termed EPCFormer. Next, we propose an Expression Alignment (EA) mechanism for audio and text: EPCFormer exploits the fact that audio and text prompts referring to the same object are semantically equivalent by applying contrastive learning to both types of expressions. Then, to facilitate deep interaction among the audio, text, and visual modalities, we introduce an Expression-Visual Attention (EVA) module. By deeply exploring the complementary cues between text and audio, knowledge of video object segmentation can transfer seamlessly between the two tasks. Experiments on well-recognized benchmarks demonstrate that EPCFormer attains state-of-the-art results on both tasks. The source code will be made publicly available at https://github.com/lab206/EPCFormer.
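The abstract describes EA as contrastive learning over paired audio and text expressions that refer to the same object. As a rough illustration only, a symmetric InfoNCE-style loss over pooled expression embeddings could look like the sketch below; the function name, tensor shapes, and temperature are assumptions for illustration, not the paper's actual implementation (consult the repository linked above for that).

```python
# Illustrative sketch of an audio-text contrastive alignment loss
# (InfoNCE-style). All names and shapes are assumptions; the paper's
# EA mechanism may differ in detail.
import torch
import torch.nn.functional as F

def expression_alignment_loss(audio_emb, text_emb, temperature=0.07):
    """Symmetric contrastive loss: pull together audio/text embeddings
    that describe the same object, push apart all other pairs.

    audio_emb, text_emb: (batch, dim) pooled expression embeddings,
    where row i of each tensor refers to the same object.
    """
    audio_emb = F.normalize(audio_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)

    # Cosine-similarity logits between every audio/text pair in the batch.
    logits = audio_emb @ text_emb.t() / temperature
    targets = torch.arange(audio_emb.size(0), device=audio_emb.device)

    # Matching pairs sit on the diagonal; score both directions
    # (audio->text and text->audio) as classification problems.
    loss_a2t = F.cross_entropy(logits, targets)
    loss_t2a = F.cross_entropy(logits.t(), targets)
    return (loss_a2t + loss_t2a) / 2
```

Likewise, EVA is described as deep interaction among audio, text, and visual tokens. One plausible reading is a pair of cross-attention passes over a joint audio-text sequence, sketched below with assumed layer names and dimensions; the module's real layout may differ.

```python
# Hypothetical cross-attention sketch of an expression-visual
# interaction block; layer names and layout are assumptions.
import torch
import torch.nn as nn

class ExpressionVisualAttention(nn.Module):
    def __init__(self, dim=256, num_heads=8):
        super().__init__()
        self.expr_to_vis = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.vis_to_expr = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm_expr = nn.LayerNorm(dim)
        self.norm_vis = nn.LayerNorm(dim)

    def forward(self, audio_tokens, text_tokens, visual_tokens):
        # Treat audio and text as one joint expression sequence so the
        # two prompt types can exchange complementary cues.
        expr = torch.cat([audio_tokens, text_tokens], dim=1)

        # Expression tokens gather evidence from the video features...
        expr_upd, _ = self.expr_to_vis(expr, visual_tokens, visual_tokens)
        expr = self.norm_expr(expr + expr_upd)

        # ...and the visual features are refined by the fused prompt.
        vis_upd, _ = self.vis_to_expr(visual_tokens, expr, expr)
        visual_tokens = self.norm_vis(visual_tokens + vis_upd)
        return expr, visual_tokens
```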

Results

Task                               Dataset                                     Metric  Value  Model
Instance Segmentation              Refer-YouTube-VOS (2021 public validation)  F       67.2   EPCFormer (ViT-H)
Instance Segmentation              Refer-YouTube-VOS (2021 public validation)  J       62.9   EPCFormer (ViT-H)
Instance Segmentation              Refer-YouTube-VOS (2021 public validation)  J&F     65     EPCFormer (ViT-H)
Referring Expression Segmentation  Refer-YouTube-VOS (2021 public validation)  F       67.2   EPCFormer (ViT-H)
Referring Expression Segmentation  Refer-YouTube-VOS (2021 public validation)  J       62.9   EPCFormer (ViT-H)
Referring Expression Segmentation  Refer-YouTube-VOS (2021 public validation)  J&F     65     EPCFormer (ViT-H)

Related Papers

SeC: Advancing Complex Video Object Segmentation via Progressive Concept Construction (2025-07-21)
Deep Learning-Based Fetal Lung Segmentation from Diffusion-weighted MRI Images and Lung Maturity Evaluation for Fetal Growth Restriction (2025-07-17)
DiffOSeg: Omni Medical Image Segmentation via Multi-Expert Collaboration Diffusion Model (2025-07-17)
From Variability To Accuracy: Conditional Bernoulli Diffusion Models with Consensus-Driven Correction for Thin Structure Segmentation (2025-07-17)
Unleashing Vision Foundation Models for Coronary Artery Segmentation: Parallel ViT-CNN Encoding and Variational Fusion (2025-07-17)
SCORE: Scene Context Matters in Open-Vocabulary Remote Sensing Instance Segmentation (2025-07-17)
Unified Medical Image Segmentation with State Space Modeling Snake (2025-07-17)
A Privacy-Preserving Semantic-Segmentation Method Using Domain-Adaptation Technique (2025-07-17)