Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Text-to-Video Retrieval on YouCook2

Metric: text-to-video R@5 (higher is better)
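The leaderboard metric, Recall@5 (R@5), is the fraction of text queries for which the ground-truth video appears among the top 5 retrieved results. A minimal sketch of the standard computation, assuming a square similarity matrix where query i is paired with video i (the diagonal pairing used in common retrieval evaluations; the matrix values below are illustrative only):

```python
import numpy as np

def recall_at_k(sim: np.ndarray, k: int = 5) -> float:
    """Recall@K for text-to-video retrieval.

    sim: (num_texts, num_videos) similarity matrix; sim[i, j] scores
    text query i against video j. Ground truth for query i is video i.
    """
    correct = np.diag(sim)
    # Rank of the correct video = number of videos scored strictly higher.
    ranks = (sim > correct[:, None]).sum(axis=1)  # 0 means top-ranked
    return float((ranks < k).mean())

# Toy example: 4 queries, 4 videos (hypothetical scores).
sim = np.array([
    [0.9, 0.1, 0.2, 0.3],
    [0.2, 0.8, 0.1, 0.4],
    [0.3, 0.2, 0.1, 0.7],  # correct video (index 2) ranked last
    [0.1, 0.2, 0.3, 0.6],
])
print(recall_at_k(sim, k=1))  # 3 of 4 queries rank their video first
```

On YouCook2 the evaluation runs over thousands of clip-caption pairs, so R@5 is far from saturated, as the scores below show.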


Results

| # | Model | text-to-video R@5 | Extra Data | Paper | Date | Code |
|---|-------|-------------------|------------|-------|------|------|
| 1 | VAST | 74.3 | Yes | VAST: A Vision-Audio-Subtitle-Text Omni-Modality... | 2023-05-29 | Code |
| 2 | MDMMT-2 | 64 | Yes | MDMMT-2: Multidomain Multimodal Transformer for ... | 2022-03-14 | - |
| 3 | UniVL + MELTR | 63.1 | No | MELTR: Meta Loss Transformer for Learning to Fin... | 2023-03-23 | Code |
| 4 | VideoCLIP | 62.6 | Yes | VideoCLIP: Contrastive Pre-training for Zero-sho... | 2021-09-28 | Code |
| 5 | TACo | 59.7 | Yes | TACo: Token-aware Cascade Contrastive Learning f... | 2021-08-23 | - |
| 6 | UniVL | 57.6 | Yes | UniVL: A Unified Video and Language Pre-Training... | 2020-02-15 | Code |
| 7 | VLM | 56.88 | Yes | VLM: Task-agnostic Video-Language Model Pre-trai... | 2021-05-20 | Code |
| 8 | VideoCLIP (zero-shot) | 50.4 | Yes | VideoCLIP: Contrastive Pre-training for Zero-sho... | 2021-09-28 | Code |
| 9 | VideoCoCa (zero-shot) | 43.9 | No | VideoCoCa: Video-Text Modeling with Zero-Shot Tr... | 2022-12-09 | - |
| 10 | Text-Video Embedding | 24.5 | No | HowTo100M: Learning a Text-Video Embedding by Wa... | 2019-06-07 | Code |
| 11 | RoME | 16.9 | No | RoME: Role-aware Mixture-of-Expert Transformer f... | 2022-06-26 | Code |
| 12 | Satar et al. | 14.5 | No | Semantic Role Aware Correlation Transformer for ... | 2022-06-26 | Code |
| 13 | HGLMM FV CCA | 14.3 | No | - | - | - |