Hai Nguyen-Truong, E-Ro Nguyen, Tuan-Anh Vu, Minh-Triet Tran, Binh-Son Hua, Sai-Kit Yeung
Referring image segmentation is a challenging task that involves generating pixel-wise segmentation masks from natural language descriptions. The difficulty of the task grows with the intricacy of the given sentences. Existing methods have relied mostly on visual features to generate segmentation masks, treating text features as supporting components. This under-utilization of text understanding, however, limits the model's ability to fully comprehend the given expressions. In this work, we propose a novel framework that emphasizes object and context comprehension, inspired by human cognitive processes, through Vision-Aware Text Features. First, we introduce a CLIP Prior module that localizes the main object of interest and embeds the resulting object heatmap into the query initialization process. Second, we propose a combination of two components, a Contextual Multimodal Decoder and a Meaning Consistency Constraint, to further enhance the coherent and consistent interpretation of language cues with the contextual understanding obtained from the image. Our method achieves significant performance improvements on three benchmark datasets: RefCOCO, RefCOCO+, and G-Ref. Project page: \url{https://vatex.hkustvgd.com/}.
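The CLIP Prior idea described above can be sketched roughly as follows. This is a minimal illustration, not the paper's actual implementation: the feature dimensions, the softmax temperature, and the weighted-pooling query initialization are all illustrative assumptions, and random arrays stand in for real CLIP encoder outputs. The sketch compares the sentence embedding of the referring expression against patch-level visual features to form a coarse object heatmap, then uses that heatmap to weight the visual features when initializing an object query.

```python
import numpy as np

def clip_prior_heatmap(patch_feats, text_feat, temperature=0.07):
    """Coarse object heatmap from CLIP-style features.

    patch_feats: (H, W, D) patch-level visual features.
    text_feat:   (D,) sentence embedding of the referring expression.
    Features are L2-normalized, cosine similarities are scaled by the
    temperature, and the map is softmax-normalized over all patches.
    """
    H, W, D = patch_feats.shape
    v = patch_feats.reshape(-1, D)
    v = v / np.linalg.norm(v, axis=1, keepdims=True)
    t = text_feat / np.linalg.norm(text_feat)
    sim = (v @ t) / temperature            # (H*W,) scaled cosine similarities
    heat = np.exp(sim - sim.max())
    heat = heat / heat.sum()               # softmax over all patches
    return heat.reshape(H, W)

def init_query(patch_feats, heatmap):
    """Heatmap-weighted pooling of visual features as a query initialization."""
    return np.tensordot(heatmap, patch_feats, axes=([0, 1], [0, 1]))  # (D,)

# Toy example: random features stand in for real CLIP encoder outputs.
rng = np.random.default_rng(0)
feats = rng.normal(size=(14, 14, 512))
text = rng.normal(size=512)
heat = clip_prior_heatmap(feats, text)
query = init_query(feats, heat)
```

In this sketch the heatmap acts as a soft spatial prior: patches whose visual features align with the expression contribute more to the initial query, which is the spirit of embedding the object heatmap into query initialization.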
| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Referring Video Object Segmentation | Refer-YouTube-VOS | J&F | 65.4 | VATEX |
| Referring Video Object Segmentation | Refer-YouTube-VOS | J | 63.3 | VATEX |
| Referring Video Object Segmentation | Refer-YouTube-VOS | F | 67.5 | VATEX |
| Referring Video Object Segmentation | DAVIS 2017 (val) | J&F | 65.4 | VATEX |
| Referring Expression Segmentation | RefCOCO val | mIoU | 78.16 | VATEX |
| Referring Expression Segmentation | RefCOCO testA | mIoU | 79.64 | VATEX |
| Referring Expression Segmentation | RefCOCO testB | mIoU | 75.64 | VATEX |
| Referring Expression Segmentation | RefCOCO+ val | mIoU | 70.02 | VATEX |
| Referring Expression Segmentation | RefCOCO+ testA | mIoU | 74.41 | VATEX |
| Referring Expression Segmentation | RefCOCO+ testB | mIoU | 62.52 | VATEX |
| Referring Expression Segmentation | RefCOCOg-val | mIoU | 69.73 | VATEX |
| Referring Expression Segmentation | RefCOCOg-val | IoU | 75.54 | VATEX |
| Referring Expression Segmentation | RefCOCOg-test | mIoU | 70.58 | VATEX |