# End-to-End Referring Video Object Segmentation with Multimodal Transformers

Adam Botach, Evgenii Zheltonozhskii, Chaim Baskin
The referring video object segmentation task (RVOS) involves segmentation of a text-referred object instance in the frames of a given video. Due to the complex nature of this multimodal task, which combines text reasoning, video understanding, instance segmentation and tracking, existing approaches typically rely on sophisticated pipelines in order to tackle it. In this paper, we propose a simple Transformer-based approach to RVOS. Our framework, termed Multimodal Tracking Transformer (MTTR), models the RVOS task as a sequence prediction problem. Following recent advancements in computer vision and natural language processing, MTTR is based on the realization that video and text can be processed together effectively and elegantly by a single multimodal Transformer model. MTTR is end-to-end trainable, free of text-related inductive bias components and requires no additional mask-refinement post-processing steps. As such, it simplifies the RVOS pipeline considerably compared to existing methods. Evaluation on standard benchmarks reveals that MTTR significantly outperforms previous art across multiple metrics. In particular, MTTR shows impressive +5.7 and +5.0 mAP gains on the A2D-Sentences and JHMDB-Sentences datasets respectively, while processing 76 frames per second. In addition, we report strong results on the public validation set of Refer-YouTube-VOS, a more challenging RVOS dataset that has yet to receive the attention of researchers. The code to reproduce our experiments is available at https://github.com/mttr2021/MTTR
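The central idea described above — that video and text can be processed "together effectively and elegantly by a single multimodal Transformer" — can be illustrated with a minimal single-head self-attention pass over the concatenated token sequences. This is our own sketch, not the paper's implementation: the function names, dimensions, and random features are placeholders, and MTTR's actual architecture (temporal encoder, instance queries, segmentation head) is far richer.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def joint_self_attention(video_tokens, text_tokens, seed=0):
    """Single-head self-attention over the concatenated multimodal sequence,
    so every video patch token can attend to every word token and vice versa.
    Illustrative only; projection weights are random placeholders."""
    x = np.concatenate([video_tokens, text_tokens], axis=0)  # (N_vid + N_txt, d)
    n, d = x.shape
    rng = np.random.default_rng(seed)
    wq, wk, wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
    q, k, v = x @ wq, x @ wk, x @ wv
    attn = softmax(q @ k.T / np.sqrt(d))  # (n, n) joint attention weights
    return attn @ v, attn

# Toy example: 2 frames x 4 patches = 8 video tokens, plus 5 word tokens, d = 64
rng = np.random.default_rng(1)
video = rng.standard_normal((8, 64))
text = rng.standard_normal((5, 64))
out, attn = joint_self_attention(video, text)
```

Because the two modalities share one attention matrix, no separate cross-modal fusion module is needed — which is precisely what lets MTTR avoid text-specific inductive-bias components.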
| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Video Object Segmentation | ReVOS | F | 25.9 | MTTR (Video-Swin-T) |
| Video Object Segmentation | ReVOS | J | 25.1 | MTTR (Video-Swin-T) |
| Video Object Segmentation | ReVOS | J&F | 25.5 | MTTR (Video-Swin-T) |
| Video Object Segmentation | ReVOS | R | 5.6 | MTTR (Video-Swin-T) |
| Video Object Segmentation | MeViS | F | 31.2 | MTTR |
| Video Object Segmentation | MeViS | J | 28.8 | MTTR |
| Video Object Segmentation | MeViS | J&F | 30.0 | MTTR |
| Referring Expression Segmentation | Refer-YouTube-VOS (2021 public validation) | F | 56.64 | MTTR (w=12) |
| Referring Expression Segmentation | Refer-YouTube-VOS (2021 public validation) | J | 54.00 | MTTR (w=12) |
| Referring Expression Segmentation | Refer-YouTube-VOS (2021 public validation) | J&F | 55.32 | MTTR (w=12) |
| Referring Expression Segmentation | A2D Sentences | AP | 0.461 | MTTR (w=10) |
| Referring Expression Segmentation | A2D Sentences | IoU mean | 0.640 | MTTR (w=10) |
| Referring Expression Segmentation | A2D Sentences | IoU overall | 0.720 | MTTR (w=10) |
| Referring Expression Segmentation | A2D Sentences | Precision@0.5 | 0.754 | MTTR (w=10) |
| Referring Expression Segmentation | A2D Sentences | Precision@0.6 | 0.712 | MTTR (w=10) |
| Referring Expression Segmentation | A2D Sentences | Precision@0.7 | 0.638 | MTTR (w=10) |
| Referring Expression Segmentation | A2D Sentences | Precision@0.8 | 0.485 | MTTR (w=10) |
| Referring Expression Segmentation | A2D Sentences | Precision@0.9 | 0.169 | MTTR (w=10) |
| Referring Expression Segmentation | A2D Sentences | AP | 0.447 | MTTR (w=8) |
| Referring Expression Segmentation | A2D Sentences | IoU mean | 0.618 | MTTR (w=8) |
| Referring Expression Segmentation | A2D Sentences | IoU overall | 0.702 | MTTR (w=8) |
| Referring Expression Segmentation | A2D Sentences | Precision@0.5 | 0.721 | MTTR (w=8) |
| Referring Expression Segmentation | A2D Sentences | Precision@0.6 | 0.684 | MTTR (w=8) |
| Referring Expression Segmentation | A2D Sentences | Precision@0.7 | 0.607 | MTTR (w=8) |
| Referring Expression Segmentation | A2D Sentences | Precision@0.8 | 0.456 | MTTR (w=8) |
| Referring Expression Segmentation | A2D Sentences | Precision@0.9 | 0.164 | MTTR (w=8) |
| Referring Expression Segmentation | J-HMDB | AP | 0.392 | MTTR (w=10) |
| Referring Expression Segmentation | J-HMDB | IoU mean | 0.698 | MTTR (w=10) |
| Referring Expression Segmentation | J-HMDB | IoU overall | 0.701 | MTTR (w=10) |
| Referring Expression Segmentation | J-HMDB | Precision@0.5 | 0.939 | MTTR (w=10) |
| Referring Expression Segmentation | J-HMDB | Precision@0.6 | 0.852 | MTTR (w=10) |
| Referring Expression Segmentation | J-HMDB | Precision@0.7 | 0.616 | MTTR (w=10) |
| Referring Expression Segmentation | J-HMDB | Precision@0.8 | 0.166 | MTTR (w=10) |
| Referring Expression Segmentation | J-HMDB | Precision@0.9 | 0.001 | MTTR (w=10) |
| Referring Expression Segmentation | J-HMDB | AP | 0.366 | MTTR (w=8) |
| Referring Expression Segmentation | J-HMDB | IoU mean | 0.679 | MTTR (w=8) |
| Referring Expression Segmentation | J-HMDB | IoU overall | 0.674 | MTTR (w=8) |
| Referring Expression Segmentation | J-HMDB | Precision@0.5 | 0.910 | MTTR (w=8) |
| Referring Expression Segmentation | J-HMDB | Precision@0.6 | 0.815 | MTTR (w=8) |
| Referring Expression Segmentation | J-HMDB | Precision@0.7 | 0.570 | MTTR (w=8) |
| Referring Expression Segmentation | J-HMDB | Precision@0.8 | 0.144 | MTTR (w=8) |
| Referring Expression Segmentation | J-HMDB | Precision@0.9 | 0.001 | MTTR (w=8) |
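The J&F figures reported in the table are, by the standard VOS evaluation convention, the arithmetic mean of the region-similarity (J, Jaccard/IoU) and contour-accuracy (F) scores. The tabulated values can be checked directly:

```python
def j_and_f(j, f):
    """J&F is the arithmetic mean of region similarity J and contour accuracy F."""
    return (j + f) / 2

# Values taken from the results table above
assert abs(j_and_f(25.1, 25.9) - 25.5) < 1e-9    # ReVOS
assert abs(j_and_f(28.8, 31.2) - 30.0) < 1e-9    # MeViS
assert abs(j_and_f(54.0, 56.64) - 55.32) < 1e-9  # Refer-YouTube-VOS
```

All three rows are internally consistent, which is a useful sanity check when transcribing benchmark numbers.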