Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Video Retrieval on YouCook2

Metric: text-to-video R@1 (higher is better)
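As context for reading the table, text-to-video R@1 is the fraction of text queries whose ground-truth video is ranked first among all candidate videos. A minimal sketch of how this is typically computed from a query-by-video similarity matrix (assuming the standard diagonal correspondence, i.e. video *i* is the ground truth for query *i*; the function name and toy scores are illustrative, not from any listed paper):

```python
import numpy as np

def recall_at_k(sim, k=1):
    """Recall@k for text-to-video retrieval.

    sim[i, j] = similarity between text query i and video j.
    Assumes video i is the ground-truth match for query i.
    """
    gt = np.diag(sim)
    # Rank of the correct video = number of videos scored strictly higher.
    ranks = (sim > gt[:, None]).sum(axis=1)  # 0 means ranked first
    return float((ranks < k).mean())

# Toy example: 3 queries, 3 videos.
sim = np.array([
    [0.9, 0.1, 0.3],   # query 0: correct video ranked 1st
    [0.2, 0.4, 0.8],   # query 1: correct video ranked 2nd
    [0.1, 0.2, 0.7],   # query 2: correct video ranked 1st
])
print(recall_at_k(sim, k=1))  # 2 of 3 queries hit -> ~0.667
```

"Higher is better" follows directly: a larger R@1 means more queries retrieve the correct video at rank 1.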


Results

| # | Model | text-to-video R@1 | Extra Data | Paper | Date | Code |
|---|-------|-------------------|------------|-------|------|------|
| 1 | VAST | 50.4 | Yes | VAST: A Vision-Audio-Subtitle-Text Omni-Modality... | 2023-05-29 | Code |
| 2 | UniVL + MELTR | 33.7 | No | MELTR: Meta Loss Transformer for Learning to Fin... | 2023-03-23 | Code |
| 3 | VideoCLIP | 32.2 | Yes | VideoCLIP: Contrastive Pre-training for Zero-sho... | 2021-09-28 | Code |
| 4 | MDMMT-2 | 32.0 | Yes | MDMMT-2: Multidomain Multimodal Transformer for ... | 2022-03-14 | - |
| 5 | TACo | 29.6 | Yes | TACo: Token-aware Cascade Contrastive Learning f... | 2021-08-23 | - |
| 6 | UniVL | 28.9 | Yes | UniVL: A Unified Video and Language Pre-Training... | 2020-02-15 | Code |
| 7 | VLM | 27.05 | Yes | VLM: Task-agnostic Video-Language Model Pre-trai... | 2021-05-20 | Code |
| 8 | VideoCLIP (zero-shot) | 22.7 | Yes | VideoCLIP: Contrastive Pre-training for Zero-sho... | 2021-09-28 | Code |
| 9 | VideoCoCa (zero-shot) | 21.7 | No | VideoCoCa: Video-Text Modeling with Zero-Shot Tr... | 2022-12-09 | - |
| 10 | COOT | 16.7 | No | COOT: Cooperative Hierarchical Transformer for V... | 2020-11-01 | Code |
| 11 | Text-Video Embedding | 8.2 | No | HowTo100M: Learning a Text-Video Embedding by Wa... | 2019-06-07 | Code |
| 12 | RoME | 6.3 | No | RoME: Role-aware Mixture-of-Expert Transformer f... | 2022-06-26 | Code |
| 13 | Satar et al. | 5.3 | No | Semantic Role Aware Correlation Transformer for ... | 2022-06-26 | Code |
| 14 | HGLMM FV CCA | 4.6 | No | - | - | - |