Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.

Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.

RoME: Role-aware Mixture-of-Expert Transformer for Text-to-Video Retrieval

Burak Satar, Hongyuan Zhu, Hanwang Zhang, Joo Hwee Lim

2022-06-26 · Video Retrieval · Text-to-Video Retrieval · Retrieval
Paper · PDF · Code (official)

Abstract

With the growing popularity of social channels, vast numbers of videos are uploaded daily; retrieving the video content most relevant to a user's textual query therefore plays an increasingly crucial role. Most methods consider only a single joint embedding space between global visual and textual features, ignoring the local structure of each modality. Other approaches use multiple embedding spaces built from global and local features separately, ignoring rich inter-modality correlations. We propose RoME, a novel mixture-of-experts transformer that disentangles the text and the video into three levels: the roles of spatial contexts, temporal contexts, and object contexts. We use a transformer-based attention mechanism to fully exploit visual and text embeddings at both the global and local levels, with mixture-of-experts fusion capturing correlations across modalities and structural levels. The results indicate that our method outperforms state-of-the-art methods on the YouCook2 and MSR-VTT datasets given the same visual backbone without pre-training. Finally, we conduct extensive ablation studies to elucidate our design choices.
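The core fusion idea the abstract describes — several role-specific experts (spatial, temporal, object) whose embeddings are combined by a learned softmax gate — can be sketched in a few lines. This is a minimal illustrative sketch, not the authors' implementation; the expert embeddings and gate logits below are hypothetical placeholders (in the paper both would come from transformer branches).

```python
import math

def softmax(logits):
    # Numerically stable softmax over the gate logits.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def mixture_of_experts(expert_embeddings, gate_logits):
    """Fuse per-role expert embeddings (e.g. spatial, temporal, object
    contexts) into one joint embedding via softmax gate weights."""
    weights = softmax(gate_logits)
    dim = len(expert_embeddings[0])
    fused = [0.0] * dim
    for w, emb in zip(weights, expert_embeddings):
        for i, v in enumerate(emb):
            fused[i] += w * v
    return fused

# Three hypothetical role experts, each emitting a 4-d embedding.
experts = [
    [1.0, 0.0, 0.0, 0.0],  # spatial-context expert
    [0.0, 1.0, 0.0, 0.0],  # temporal-context expert
    [0.0, 0.0, 1.0, 0.0],  # object-context expert
]
# Gate logits favour the spatial expert here; in practice they are
# predicted from the query/video features and trained end-to-end.
fused = mixture_of_experts(experts, [2.0, 1.0, 0.0])
```

Because the gate weights sum to one, the fused vector is a convex combination of the expert embeddings, so each role contributes in proportion to how strongly the gate selects it.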

Results

Task            | Dataset | Metric                     | Value | Model
Video Retrieval | YouCook2 | text-to-video Median Rank | 53    | RoME
Video Retrieval | YouCook2 | text-to-video R@1         | 6.3   | RoME
Video Retrieval | YouCook2 | text-to-video R@5         | 16.9  | RoME
Video Retrieval | YouCook2 | text-to-video R@10        | 25.2  | RoME
Video Retrieval | MSR-VTT  | text-to-video Median Rank | 17    | RoME
Video Retrieval | MSR-VTT  | text-to-video R@1         | 10.7  | RoME
Video Retrieval | MSR-VTT  | text-to-video R@5         | 29.6  | RoME
Video Retrieval | MSR-VTT  | text-to-video R@10        | 41.2  | RoME

Related Papers

From Roots to Rewards: Dynamic Tree Reasoning with RL (2025-07-17)
HapticCap: A Multimodal Dataset and Task for Understanding User Experience of Vibration Haptic Signals (2025-07-17)
A Survey of Context Engineering for Large Language Models (2025-07-17)
MCoT-RE: Multi-Faceted Chain-of-Thought and Re-Ranking for Training-Free Zero-Shot Composed Image Retrieval (2025-07-17)
Developing Visual Augmented Q&A System using Scalable Vision Embedding Retrieval & Late Interaction Re-ranker (2025-07-16)
Language-Guided Contrastive Audio-Visual Masked Autoencoder with Automatically Generated Audio-Visual-Text Triplets from Videos (2025-07-16)
Context-Aware Search and Retrieval Over Erasure Channels (2025-07-16)
Seq vs Seq: An Open Suite of Paired Encoders and Decoders (2025-07-15)