Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


XKD: Cross-modal Knowledge Distillation with Domain Alignment for Video Representation Learning

Pritam Sarkar, Ali Etemad

Published: 2022-11-25
Tasks: Self-Supervised Action Recognition Linear · Sound Classification · Action Classification · Representation Learning · Transfer Learning · Classification · Knowledge Distillation · Self-Supervised Action Recognition
Paper · PDF · Code (official)

Abstract

We present XKD, a novel self-supervised framework to learn meaningful representations from unlabelled videos. XKD is trained with two pseudo objectives. First, masked data reconstruction is performed to learn modality-specific representations from audio and visual streams. Next, self-supervised cross-modal knowledge distillation is performed between the two modalities through a teacher-student setup to learn complementary information. We introduce a novel domain alignment strategy to tackle the domain discrepancy between audio and visual modalities, enabling effective cross-modal knowledge distillation. Additionally, to develop a general-purpose network capable of handling both audio and visual streams, modality-agnostic variants of XKD are introduced, which use the same pretrained backbone for different audio and visual tasks. Our proposed cross-modal knowledge distillation improves video action classification by $8\%$ to $14\%$ on UCF101, HMDB51, and Kinetics400. Additionally, XKD improves multimodal action classification by $5.5\%$ on Kinetics-Sound. XKD shows state-of-the-art performance in sound classification on ESC50, achieving a top-1 accuracy of $96.5\%$.
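The abstract describes two pseudo objectives: per-modality masked reconstruction, then cross-modal teacher-student distillation with a domain alignment step so audio and visual features become comparable. The PyTorch sketch below only illustrates the general shape of such a cross-modal distillation objective; the function names, the temperature `tau`, and the simple moment-matching alignment term are all illustrative assumptions, not the authors' actual implementation or their alignment strategy.

```python
# Hypothetical sketch of cross-modal teacher-student distillation with a
# toy domain-alignment term. All names and design choices here are
# assumptions for illustration; they are not the paper's exact method.
import torch
import torch.nn.functional as F

def distillation_loss(student_feats, teacher_feats, tau=0.1):
    """KL divergence between softened student and teacher feature
    distributions; the teacher side is detached so it provides targets
    without receiving gradients, as in a standard teacher-student setup."""
    s = F.log_softmax(student_feats / tau, dim=-1)
    t = F.softmax(teacher_feats.detach() / tau, dim=-1)
    return F.kl_div(s, t, reduction="batchmean")

def alignment_loss(a_feats, v_feats):
    """Toy domain alignment: match the first and second moments of the
    audio and video feature distributions (an assumption; the paper
    proposes its own alignment strategy)."""
    mean_gap = (a_feats.mean(0) - v_feats.mean(0)).pow(2).sum()
    var_gap = (a_feats.var(0) - v_feats.var(0)).pow(2).sum()
    return mean_gap + var_gap

# Cross-modal step: each modality's encoder acts as teacher for the other.
audio_feats = torch.randn(8, 256)  # stand-ins for audio encoder outputs
video_feats = torch.randn(8, 256)  # stand-ins for video encoder outputs

loss = (distillation_loss(video_feats, audio_feats)    # audio teaches video
        + distillation_loss(audio_feats, video_feats)  # video teaches audio
        + alignment_loss(audio_feats, video_feats))
```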

Results

Task                 | Dataset      | Metric             | Value | Model
---------------------|--------------|--------------------|-------|-------------------------------------
Activity Recognition | Kinetics-400 | Top-1 Accuracy (%) | 77.6  | XKD (ViT-B/112/16)
Activity Recognition | Kinetics-400 | Top-5 Accuracy (%) | 92.9  | XKD (ViT-B/112/16)
Activity Recognition | UCF101       | 3-fold Accuracy    | 94.1  | XKD (ViT-B/112/16)
Activity Recognition | UCF101       | 3-fold Accuracy    | 93.4  | XKD-Modality-Agnostic (ViT-B/112/16)
Activity Recognition | HMDB51       | Top-1 Accuracy     | 69    | XKD (ViT-B/112/16)
Activity Recognition | HMDB51       | Top-1 Accuracy     | 65.9  | XKD-Modality-Agnostic (ViT-B/112/16)
Action Recognition   | Kinetics-400 | Top-1 Accuracy (%) | 77.6  | XKD (ViT-B/112/16)
Action Recognition   | Kinetics-400 | Top-5 Accuracy (%) | 92.9  | XKD (ViT-B/112/16)
Action Recognition   | UCF101       | 3-fold Accuracy    | 94.1  | XKD (ViT-B/112/16)
Action Recognition   | UCF101       | 3-fold Accuracy    | 93.4  | XKD-Modality-Agnostic (ViT-B/112/16)
Action Recognition   | HMDB51       | Top-1 Accuracy     | 69    | XKD (ViT-B/112/16)
Action Recognition   | HMDB51       | Top-1 Accuracy     | 65.9  | XKD-Modality-Agnostic (ViT-B/112/16)
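For reference, Top-1 and Top-5 accuracy measure how often the true class is the single highest-scored prediction, or among the five highest. A minimal, generic way to compute both from classifier logits (a sketch, not tied to the XKD evaluation code):

```python
import torch

def topk_accuracy(logits, labels, ks=(1, 5)):
    """Fraction of samples whose true label is among the k highest logits."""
    max_k = max(ks)
    _, pred = logits.topk(max_k, dim=1)           # (N, max_k) top predicted classes
    correct = pred.eq(labels.unsqueeze(1))        # (N, max_k) hit mask
    return {k: correct[:, :k].any(dim=1).float().mean().item() for k in ks}

logits = torch.randn(4, 400)                      # e.g. 400 Kinetics-400 classes
labels = torch.randint(0, 400, (4,))
print(topk_accuracy(logits, labels))              # {1: ..., 5: ...}
```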

Related Papers

Visual-Language Model Knowledge Distillation Method for Image Quality Assessment (2025-07-21)
Touch in the Wild: Learning Fine-Grained Manipulation with a Portable Visuo-Tactile Gripper (2025-07-20)
RaMen: Multi-Strategy Multi-Modal Learning for Bundle Construction (2025-07-18)
Spectral Bellman Method: Unifying Representation and Exploration in RL (2025-07-17)
Boosting Team Modeling through Tempo-Relational Representation Learning (2025-07-17)
Disentangling coincident cell events using deep transfer learning and compressive sensing (2025-07-17)
Adversarial attacks to image classification systems using evolutionary algorithms (2025-07-17)
Uncertainty-Aware Cross-Modal Knowledge Distillation with Prototype Learning for Multimodal Brain-Computer Interfaces (2025-07-17)