Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


DnS: Distill-and-Select for Efficient and Accurate Video Indexing and Retrieval

Giorgos Kordopatis-Zilos, Christos Tzelepis, Symeon Papadopoulos, Ioannis Kompatsiaris, Ioannis Patras

2021-06-24 · Video Retrieval · Retrieval · Knowledge Distillation
Paper · PDF · Code (official)

Abstract

In this paper, we address the problem of high performance and computationally efficient content-based video retrieval in large-scale datasets. Current methods typically propose either: (i) fine-grained approaches employing spatio-temporal representations and similarity calculations, achieving high performance at a high computational cost or (ii) coarse-grained approaches representing/indexing videos as global vectors, where the spatio-temporal structure is lost, providing low performance but also having low computational cost. In this work, we propose a Knowledge Distillation framework, called Distill-and-Select (DnS), that, starting from a well-performing fine-grained Teacher Network, learns: a) Student Networks at different retrieval performance and computational efficiency trade-offs and b) a Selector Network that at test time rapidly directs samples to the appropriate student to maintain both high retrieval performance and high computational efficiency. We train several students with different architectures and arrive at different trade-offs of performance and efficiency, i.e., speed and storage requirements, including fine-grained students that store/index videos using binary representations. Importantly, the proposed scheme allows Knowledge Distillation in large, unlabelled datasets -- this leads to good students. We evaluate DnS on five public datasets on three different video retrieval tasks and demonstrate a) that our students achieve state-of-the-art performance in several cases and b) that the DnS framework provides an excellent trade-off between retrieval performance, computational speed, and storage space. In specific configurations, the proposed method achieves similar mAP to the teacher but is 20 times faster and requires 240 times less storage space. The collected dataset and implementation are publicly available: https://github.com/mever-team/distill-and-select.
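The routing idea from the abstract can be sketched in a few lines: a cheap coarse student scores every query-target pair, and a selector decides which pairs are worth re-scoring with the expensive fine-grained student. This is only an illustrative sketch, not the paper's implementation -- the networks are replaced by simple placeholder functions (cosine similarity for the coarse student, a chamfer-style frame-matching score for the fine student), and the learned Selector Network is mocked as a confidence heuristic on the coarse score.

```python
import numpy as np

def coarse_similarity(query_vec, target_vec):
    """Placeholder coarse student (S^c): cosine similarity of global video vectors."""
    q = query_vec / np.linalg.norm(query_vec)
    t = target_vec / np.linalg.norm(target_vec)
    return float(q @ t)

def fine_similarity(query_frames, target_frames):
    """Placeholder fine-grained student (S^f): chamfer-style score that averages
    each query frame's best match among the target frames."""
    sims = query_frames @ target_frames.T  # frame-level similarity matrix
    return float(sims.max(axis=1).mean())

def retrieve(query, targets, selector_threshold=0.5):
    """Rank targets for one query, routing each pair via a mock selector.
    In the paper the selector is a learned network; here, as a stand-in,
    pairs whose coarse score is low-confidence (small magnitude) are
    re-scored by the fine-grained student."""
    results = []
    for t in targets:
        s_coarse = coarse_similarity(query["global"], t["global"])
        selector_score = abs(s_coarse)  # placeholder confidence, not the learned selector
        if selector_score < selector_threshold:
            score = fine_similarity(query["frames"], t["frames"])
        else:
            score = s_coarse
        results.append((t["id"], score))
    return sorted(results, key=lambda x: -x[1])
```

The design point the sketch captures is that the fine-grained similarity is only computed for the (hopefully small) subset of pairs the selector flags, which is where the reported speed and storage savings come from.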

Results

Task             Dataset     Metric       Value   Model
Video Retrieval  FIVR-200K   mAP (DSVR)   0.921   DnS (S^f_A)
Video Retrieval  FIVR-200K   mAP (ISVR)   0.741   DnS (S^f_A)
Video Retrieval  FIVR-200K   mAP (CSVR)   0.863   DnS (S^f_B)
Video Retrieval  FIVR-200K   mAP (DSVR)   0.909   DnS (S^f_B)
Video Retrieval  FIVR-200K   mAP (ISVR)   0.729   DnS (S^f_B)
Video Retrieval  FIVR-200K   mAP (CSVR)   0.558   DnS (S^c)
Video Retrieval  FIVR-200K   mAP (DSVR)   0.574   DnS (S^c)
Video Retrieval  FIVR-200K   mAP (ISVR)   0.476   DnS (S^c)

Related Papers

- Visual-Language Model Knowledge Distillation Method for Image Quality Assessment (2025-07-21)
- From Roots to Rewards: Dynamic Tree Reasoning with RL (2025-07-17)
- HapticCap: A Multimodal Dataset and Task for Understanding User Experience of Vibration Haptic Signals (2025-07-17)
- A Survey of Context Engineering for Large Language Models (2025-07-17)
- MCoT-RE: Multi-Faceted Chain-of-Thought and Re-Ranking for Training-Free Zero-Shot Composed Image Retrieval (2025-07-17)
- Uncertainty-Aware Cross-Modal Knowledge Distillation with Prototype Learning for Multimodal Brain-Computer Interfaces (2025-07-17)
- Developing Visual Augmented Q&A System using Scalable Vision Embedding Retrieval & Late Interaction Re-ranker (2025-07-16)
- Language-Guided Contrastive Audio-Visual Masked Autoencoder with Automatically Generated Audio-Visual-Text Triplets from Videos (2025-07-16)