Burak Satar, Hongyuan Zhu, Xavier Bresson, Joo Hwee Lim
With the emergence of social media, vast numbers of video clips are uploaded every day, and retrieving the most relevant visual content for a language query becomes critical. Most approaches aim to learn a joint embedding space for plain textual and visual content without adequately exploiting their intra-modality structures and inter-modality correlations. This paper proposes a novel transformer that explicitly disentangles the text and video into the semantic roles of objects, spatial contexts, and temporal contexts, with an attention scheme that learns the intra- and inter-role correlations among the three roles to discover discriminative features for matching at different levels. Preliminary results on the popular YouCook2 dataset indicate that our approach surpasses a current state-of-the-art method by a large margin on all metrics. It also outperforms two other SOTA methods on two of the metrics.
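As a rough illustration of the role-disentangled attention idea described above, the sketch below splits one modality into object, spatial-context, and temporal-context token sequences and applies self-attention within a role and cross-attention to the other roles. This is a minimal sketch under our own assumptions (module names, dimensions, and the use of PyTorch's `nn.MultiheadAttention` are illustrative), not the authors' actual implementation.

```python
# Minimal sketch (not the authors' exact architecture) of intra- and
# inter-role attention over three disentangled roles.
import torch
import torch.nn as nn

class RoleAttentionBlock(nn.Module):  # hypothetical name
    def __init__(self, d_model: int = 512, n_heads: int = 8):
        super().__init__()
        self.intra_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.inter_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, role: torch.Tensor, other_roles: torch.Tensor) -> torch.Tensor:
        # role:        (B, N, d) tokens of one role, e.g. object features
        # other_roles: (B, M, d) concatenated tokens of the remaining roles
        x, _ = self.intra_attn(role, role, role)                 # intra-role correlations
        role = self.norm1(role + x)
        x, _ = self.inter_attn(role, other_roles, other_roles)   # inter-role correlations
        return self.norm2(role + x)

# Example: three video-side roles, each with its own token sequence.
B, d = 2, 512
objects, spatial, temporal = (torch.randn(B, n, d) for n in (10, 6, 4))
block = RoleAttentionBlock(d)
objects_out = block(objects, torch.cat([spatial, temporal], dim=1))
```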
| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Video Retrieval | YouCook2 | text-to-video R@1 | 5.3 | Satar et al. |
| Video Retrieval | YouCook2 | text-to-video R@5 | 14.5 | Satar et al. |
| Video Retrieval | YouCook2 | text-to-video R@10 | 20.8 | Satar et al. |
| Video Retrieval | YouCook2 | text-to-video Median Rank | 77 | Satar et al. |
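For reference, the sketch below shows how text-to-video R@K and Median Rank are commonly computed from a caption-by-clip similarity matrix in which the matching pair lies on the diagonal. It illustrates the metrics reported in the table, not the authors' evaluation code.

```python
# Minimal sketch of standard retrieval metrics from a similarity matrix.
import numpy as np

def retrieval_metrics(sim: np.ndarray) -> dict:
    # sim[i, j]: similarity between caption i and video clip j
    order = np.argsort(-sim, axis=1)  # clip indices sorted by descending score
    ranks = np.argmax(order == np.arange(len(sim))[:, None], axis=1) + 1
    return {
        "R@1":  float(np.mean(ranks <= 1) * 100),
        "R@5":  float(np.mean(ranks <= 5) * 100),
        "R@10": float(np.mean(ranks <= 10) * 100),
        "MedR": float(np.median(ranks)),
    }

# Example with random scores (real scores come from the learned joint space).
print(retrieval_metrics(np.random.rand(100, 100)))
```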