Yimu Wang, Peng Shi
While recent progress in video-text retrieval has been driven by the exploration of better representation learning, in this paper we present a novel multi-grained sparse learning framework, S3MA, that learns an aligned sparse space shared between the video and text modalities for video-text retrieval. The shared sparse space is initialized with a finite number of sparse concepts, each of which corresponds to a set of words. Using the text data at hand, we learn and update the shared sparse space in a supervised manner with the proposed similarity and alignment losses. Moreover, to enable multi-grained alignment, we incorporate frame representations to better model the video modality and to compute both fine-grained and coarse-grained similarities. Benefiting from the learned shared sparse space and multi-grained similarities, S3MA outperforms existing methods in extensive experiments on several video-text retrieval benchmarks. Our code is available at https://github.com/yimuwangcs/Better_Cross_Modal_Retrieval.
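To make the idea concrete, below is a minimal PyTorch sketch of the two ingredients the abstract describes: projecting video and text features onto a shared space of learnable sparse concepts, and combining a coarse-grained (pooled-video vs. text) similarity with a fine-grained (per-frame vs. text) similarity. This is not the authors' implementation; the names (`SharedSparseSpace`, `multi_grained_similarity`), the ReLU-based sparsification, and the mixing weight `alpha` are assumptions for illustration only.

```python
import torch
import torch.nn.functional as F


class SharedSparseSpace(torch.nn.Module):
    """Hypothetical sketch of a shared sparse concept space."""

    def __init__(self, num_concepts: int = 512, dim: int = 512):
        super().__init__()
        # A finite number of learnable sparse-concept embeddings.
        self.concepts = torch.nn.Parameter(0.02 * torch.randn(num_concepts, dim))

    def project(self, feats: torch.Tensor) -> torch.Tensor:
        # Encode features as non-negative similarities to the concepts;
        # ReLU zeroes out irrelevant concepts, yielding a sparse code
        # (one simple way to induce sparsity, assumed here).
        return F.normalize(F.relu(feats @ self.concepts.t()), dim=-1)


def multi_grained_similarity(frame_feats, text_feats, space, alpha=0.5):
    """frame_feats: (B, T, D) per-frame features; text_feats: (B, D).

    Returns a (B, B) similarity matrix mixing a coarse-grained term
    (mean-pooled video vs. text) with a fine-grained term (best-matching
    frame per text); `alpha` is an assumed mixing hyperparameter.
    """
    t = space.project(text_feats)                          # (B, C)
    v_coarse = space.project(frame_feats.mean(dim=1))      # (B, C)
    v_fine = space.project(frame_feats)                    # (B, T, C)

    coarse = v_coarse @ t.t()                              # (B, B)
    fine = torch.einsum("itc,jc->itj", v_fine, t).amax(1)  # (B, B)
    return alpha * coarse + (1 - alpha) * fine


def retrieval_loss(sim, temperature=0.05):
    # Symmetric InfoNCE over the combined similarities: one plausible
    # instantiation of a supervised similarity loss, not the paper's exact one.
    logits = sim / temperature
    labels = torch.arange(sim.size(0), device=sim.device)
    return 0.5 * (F.cross_entropy(logits, labels) +
                  F.cross_entropy(logits.t(), labels))
```

In training, one would encode a batch of videos and captions, call `multi_grained_similarity`, and backpropagate `retrieval_loss` so that the concept embeddings and encoders are updated jointly; the paper's alignment loss would additionally supervise the sparse codes with the text data.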
| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Video Retrieval | MSR-VTT-1kA | text-to-video R@1 | 49.8 | S3MA (ViT-B/16) |
| Video Retrieval | MSR-VTT-1kA | text-to-video R@5 | 75.1 | S3MA (ViT-B/16) |
| Video Retrieval | MSR-VTT-1kA | text-to-video R@10 | 83.9 | S3MA (ViT-B/16) |
| Video Retrieval | MSR-VTT-1kA | video-to-text R@1 | 47.3 | S3MA (ViT-B/16) |
| Video Retrieval | MSR-VTT-1kA | video-to-text R@5 | 76.0 | S3MA (ViT-B/16) |
| Video Retrieval | MSR-VTT-1kA | video-to-text R@10 | 84.3 | S3MA (ViT-B/16) |