Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


GLUS: Global-Local Reasoning Unified into A Single Large Language Model for Video Segmentation

Lang Lin, Xueyang Yu, Ziqi Pang, Yu-Xiong Wang

2025-04-10 · CVPR 2025

Tasks: Referring Video Object Segmentation · Semantic Segmentation · Video Segmentation · Video Object Segmentation · Object Tracking · Large Language Model · Contrastive Learning · Video Semantic Segmentation · Language Modelling

Paper · PDF · Code

Abstract

This paper proposes a novel framework utilizing multi-modal large language models (MLLMs) for referring video object segmentation (RefVOS). Previous MLLM-based methods commonly struggle with the dilemma between "Ref" and "VOS": they either specialize in understanding a few key frames (global reasoning) or in tracking objects across continuous frames (local reasoning), and rely on external VOS models or frame selectors to mitigate the other end of the challenge. However, our framework GLUS shows that global and local consistency can be unified into a single video segmentation MLLM: a set of sparse "context frames" provides global information, while a stream of continuous "query frames" conducts local object tracking. This is further supported by jointly training the MLLM with a pre-trained VOS memory bank to simultaneously digest short-range and long-range temporal information. To improve the information efficiency within the limited context window of MLLMs, we introduce object contrastive learning to distinguish hard false-positive objects and a self-refined framework to identify crucial frames and perform propagation. By collectively integrating these insights, our GLUS delivers a simple yet effective baseline, achieving a new state of the art for MLLMs on the MeViS and Ref-Youtube-VOS benchmarks. Our project page is at https://glus-video.github.io/.
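The global/local split described in the abstract can be illustrated with a small sketch: sparse, evenly spaced "context frames" cover the whole video for global reasoning, while a contiguous window of "query frames" supports local tracking. The function below is a minimal illustration of that sampling idea; the parameter names and frame counts are assumptions for the example, not the paper's actual settings.

```python
def sample_frames(num_frames, num_context=4, query_window=4, query_start=0):
    """Split a video's frame indices into sparse global context frames
    and a continuous run of local query frames.

    Illustrative sketch of the global/local frame split from the abstract;
    counts and spacing are assumed, not taken from the paper.
    """
    # Sparse, evenly spaced context frames spanning the video (global reasoning).
    step = max(1, num_frames // num_context)
    context = list(range(0, num_frames, step))[:num_context]
    # A contiguous window of query frames for local object tracking.
    end = min(query_start + query_window, num_frames)
    query = list(range(query_start, end))
    return context, query

context, query = sample_frames(num_frames=40, num_context=4,
                               query_window=4, query_start=8)
print(context)  # [0, 10, 20, 30]
print(query)    # [8, 9, 10, 11]
```

In the paper's design, both sets are fed to the same MLLM in one context window, which is why information efficiency (object contrastive learning, self-refinement) matters.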

Results

Task                         Dataset    Metric  Value  Model
Video                        MeViS      F       54.2   GLUS
Video                        MeViS      J       48.5   GLUS
Video                        MeViS      J&F     51.3   GLUS
Video                        Long-RVOS  J&F     36.6   GLUS
Video                        Long-RVOS  tIoU    68.4   GLUS
Video                        Long-RVOS  vIoU    34.6   GLUS
Video Object Segmentation    MeViS      F       54.2   GLUS
Video Object Segmentation    MeViS      J       48.5   GLUS
Video Object Segmentation    MeViS      J&F     51.3   GLUS
Video Object Segmentation    Long-RVOS  J&F     36.6   GLUS
Video Object Segmentation    Long-RVOS  tIoU    68.4   GLUS
Video Object Segmentation    Long-RVOS  vIoU    34.6   GLUS
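The J&F value in the table is the standard video object segmentation summary metric: the mean of region similarity J (mask IoU) and contour accuracy F (boundary F-measure). A one-line sketch shows how the table's MeViS numbers combine:

```python
def j_and_f(j, f):
    """J&F: mean of region similarity J and contour accuracy F,
    the standard summary metric for video object segmentation."""
    return (j + f) / 2

# MeViS values from the table above: J = 48.5, F = 54.2.
print(j_and_f(48.5, 54.2))  # 51.35, reported as 51.3 in the table
```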

Related Papers

- SeC: Advancing Complex Video Object Segmentation via Progressive Concept Construction (2025-07-21)
- Visual-Language Model Knowledge Distillation Method for Image Quality Assessment (2025-07-21)
- DENSE: Longitudinal Progress Note Generation with Temporal Modeling of Heterogeneous Clinical Notes Across Hospital Visits (2025-07-18)
- DiffOSeg: Omni Medical Image Segmentation via Multi-Expert Collaboration Diffusion Model (2025-07-17)
- SCORE: Scene Context Matters in Open-Vocabulary Remote Sensing Instance Segmentation (2025-07-17)
- Unified Medical Image Segmentation with State Space Modeling Snake (2025-07-17)
- A Privacy-Preserving Semantic-Segmentation Method Using Domain-Adaptation Technique (2025-07-17)
- MVA 2025 Small Multi-Object Tracking for Spotting Birds Challenge: Dataset, Methods, and Results (2025-07-17)