Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Object-aware Video-language Pre-training for Retrieval

Alex Jinpeng Wang, Yixiao Ge, Guanyu Cai, Rui Yan, Xudong Lin, Ying Shan, XiaoHu Qie, Mike Zheng Shou

Published: 2021-12-01 · CVPR 2022
Tasks: Text Matching · Zero-Shot Video Retrieval · Retrieval
Paper · PDF · Code (official)

Abstract

Recently, with the introduction of large-scale datasets and strong transformer networks, video-language pre-training has shown great success, especially for retrieval. Yet existing video-language transformer models do not explicitly learn fine-grained semantic alignment. In this work, we present Object-aware Transformers, an object-centric approach that extends the video-language transformer to incorporate object representations. The key idea is to leverage bounding boxes and object tags to guide the training process. We evaluate our model on three standard sub-tasks of video-text matching across four widely used benchmarks, and we provide in-depth analysis and detailed ablations of the proposed method. We show clear improvements in performance across all tasks and datasets considered, demonstrating the value of a model that incorporates object representations into a video-language architecture. The code will be released at \url{https://github.com/FingerRec/OA-Transformer}.
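The core idea described above — augmenting the video token sequence with object-region features and object-tag embeddings so the transformer can attend to them jointly — can be sketched roughly as follows. This is a minimal illustration with hypothetical names and shapes, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions, chosen only for illustration.
num_patches, num_objects, dim = 16, 4, 32

patch_tokens = rng.normal(size=(num_patches, dim))  # frame patch features
object_feats = rng.normal(size=(num_objects, dim))  # region features from detected bounding boxes
tag_embeds = rng.normal(size=(num_objects, dim))    # embeddings of object tags ("dog", "ball", ...)

# Each object token combines region appearance with tag semantics.
object_tokens = object_feats + tag_embeds

# The transformer would then attend over patch and object tokens jointly,
# letting object cues guide video-text alignment during pre-training.
tokens = np.concatenate([patch_tokens, object_tokens], axis=0)
print(tokens.shape)  # (20, 32)
```

The detector, fusion rule (here a simple sum), and token layout are assumptions; the paper and released code define the actual architecture.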

Results

Task | Dataset | Metric | Value | Model
Zero-Shot Video Retrieval | MSR-VTT | text-to-video R@1 | 23.4 | OA-Trans
Zero-Shot Video Retrieval | MSR-VTT | text-to-video R@5 | 47.5 | OA-Trans
Zero-Shot Video Retrieval | MSR-VTT | text-to-video R@10 | 55.6 | OA-Trans
Zero-Shot Video Retrieval | MSR-VTT | text-to-video Median Rank | 8 | OA-Trans
Zero-Shot Video Retrieval | DiDeMo | text-to-video R@1 | 23.5 | OA-Trans
Zero-Shot Video Retrieval | DiDeMo | text-to-video R@5 | 50.4 | OA-Trans
Zero-Shot Video Retrieval | DiDeMo | text-to-video R@10 | 59.8 | OA-Trans
Zero-Shot Video Retrieval | DiDeMo | text-to-video Median Rank | 6 | OA-Trans
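The metrics in the table — recall at k (R@k, the percentage of text queries whose ground-truth video appears in the top k results) and median rank of the ground-truth video — can be computed from a text-to-video similarity matrix. A minimal sketch, assuming query i matches video i:

```python
import numpy as np

def retrieval_metrics(sim):
    """Compute text-to-video R@k and median rank from a
    (num_texts, num_videos) similarity matrix where text i
    matches video i."""
    # Sort candidate videos by descending similarity for each query.
    order = np.argsort(-sim, axis=1)
    # 1-based rank of the ground-truth video for each query.
    ranks = np.array([int(np.where(order[i] == i)[0][0]) + 1
                      for i in range(len(sim))])
    return {
        "R@1": float(np.mean(ranks <= 1) * 100),
        "R@5": float(np.mean(ranks <= 5) * 100),
        "R@10": float(np.mean(ranks <= 10) * 100),
        "MedR": float(np.median(ranks)),
    }

# Toy example: an identity similarity matrix ranks every match first.
m = retrieval_metrics(np.eye(5))
print(m)  # R@1 = 100.0, MedR = 1.0
```

A higher R@k and a lower median rank are better; the leaderboard values above follow this convention.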

Related Papers

From Roots to Rewards: Dynamic Tree Reasoning with RL (2025-07-17)
HapticCap: A Multimodal Dataset and Task for Understanding User Experience of Vibration Haptic Signals (2025-07-17)
A Survey of Context Engineering for Large Language Models (2025-07-17)
MCoT-RE: Multi-Faceted Chain-of-Thought and Re-Ranking for Training-Free Zero-Shot Composed Image Retrieval (2025-07-17)
Developing Visual Augmented Q&A System using Scalable Vision Embedding Retrieval & Late Interaction Re-ranker (2025-07-16)
Language-Guided Contrastive Audio-Visual Masked Autoencoder with Automatically Generated Audio-Visual-Text Triplets from Videos (2025-07-16)
Context-Aware Search and Retrieval Over Erasure Channels (2025-07-16)
Seq vs Seq: An Open Suite of Paired Encoders and Decoders (2025-07-15)