Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Audio-visual Generalised Zero-shot Learning with Cross-modal Attention and Language

Otniel-Bogdan Mercea, Lukas Riesch, A. Sophia Koepke, Zeynep Akata

2022-03-07 · CVPR 2022
Tasks: ZSL Video Classification · GZSL Video Classification · Zero-Shot Learning
Paper · PDF · Code (official)

Abstract

Learning to classify video data from classes not included in the training data, i.e. video-based zero-shot learning, is challenging. We conjecture that the natural alignment between the audio and visual modalities in video data provides a rich training signal for learning discriminative multi-modal representations. Focusing on the relatively underexplored task of audio-visual zero-shot learning, we propose to learn multi-modal representations from audio-visual data using cross-modal attention and exploit textual label embeddings for transferring knowledge from seen classes to unseen classes. Taking this one step further, in our generalised audio-visual zero-shot learning setting, we include all the training classes in the test-time search space which act as distractors and increase the difficulty while making the setting more realistic. Due to the lack of a unified benchmark in this domain, we introduce a (generalised) zero-shot learning benchmark on three audio-visual datasets of varying sizes and difficulty, VGGSound, UCF, and ActivityNet, ensuring that the unseen test classes do not appear in the dataset used for supervised training of the backbone deep models. Comparing multiple relevant and recent methods, we demonstrate that our proposed AVCA model achieves state-of-the-art performance on all three datasets. Code and data are available at \url{https://github.com/ExplainableML/AVCA-GZSL}.
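The cross-modal attention that the abstract describes lets each modality's features attend over the other modality's features before classification. A minimal scaled dot-product sketch of that idea is below; the token counts, feature dimension, and function names are illustrative assumptions, not the exact AVCA architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_modal_attention(queries, keys_values):
    """One direction of cross-modal attention: one modality's tokens
    (queries) attend over the other modality's tokens (keys/values)
    via scaled dot-product attention."""
    d = queries.shape[-1]
    scores = queries @ keys_values.T / np.sqrt(d)   # (n_q, n_kv)
    weights = softmax(scores, axis=-1)              # rows sum to 1
    return weights @ keys_values                    # (n_q, d)

# Hypothetical shapes: 4 audio tokens and 6 visual tokens, dim 64.
rng = np.random.default_rng(0)
audio = rng.standard_normal((4, 64))
visual = rng.standard_normal((6, 64))

audio_enriched = cross_modal_attention(audio, visual)    # audio attends to visual
visual_enriched = cross_modal_attention(visual, audio)   # visual attends to audio
```

Each output keeps the query modality's token count while mixing in information from the other modality, which is what allows the learned representation to be genuinely multi-modal.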

Results

Task               | Dataset                  | Metric | Value | Model
-------------------|--------------------------|--------|-------|------
Zero-Shot Learning | VGGSound-GZSL (main)     | HM     | 6.31  | AVCA
Zero-Shot Learning | VGGSound-GZSL (main)     | ZSL    | 6     | AVCA
Zero-Shot Learning | VGGSound-GZSL (cls)      | HM     | 8.31  | AVCA
Zero-Shot Learning | VGGSound-GZSL (cls)      | ZSL    | 6.91  | AVCA
Zero-Shot Learning | UCF-GZSL (main)          | HM     | 27.15 | AVCA
Zero-Shot Learning | UCF-GZSL (main)          | ZSL    | 20    | AVCA
Zero-Shot Learning | UCF-GZSL (cls)           | HM     | 41.34 | AVCA
Zero-Shot Learning | UCF-GZSL (cls)           | ZSL    | 37.72 | AVCA
Zero-Shot Learning | ActivityNet-GZSL (main)  | HM     | 12.13 | AVCA
Zero-Shot Learning | ActivityNet-GZSL (main)  | ZSL    | 9.13  | AVCA
Zero-Shot Learning | ActivityNet-GZSL (cls)   | HM     | 9.92  | AVCA
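The HM column reports the harmonic mean of seen- and unseen-class accuracies, the standard headline metric for generalised zero-shot learning: it stays low unless the model does well on both class sets. A small sketch (the example accuracies are made up, not from the paper):

```python
def harmonic_mean(seen_acc, unseen_acc):
    """GZSL harmonic mean of seen- and unseen-class accuracies.
    Unlike an arithmetic mean, it is dragged down by the weaker side."""
    if seen_acc + unseen_acc == 0:
        return 0.0
    return 2 * seen_acc * unseen_acc / (seen_acc + unseen_acc)

# A model with 60% seen accuracy but only 10% unseen accuracy
# still scores a low harmonic mean (~17.14), exposing the imbalance.
hm = harmonic_mean(60.0, 10.0)
```

This is why HM and ZSL numbers can diverge in the table above: ZSL measures unseen-class accuracy alone, while HM penalises any trade-off against seen classes.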

Related Papers

- GLAD: Generalizable Tuning for Vision-Language Models (2025-07-17)
- DEARLi: Decoupled Enhancement of Recognition and Localization for Semi-supervised Panoptic Segmentation (2025-07-14)
- EVA: Mixture-of-Experts Semantic Variant Alignment for Compositional Zero-Shot Learning (2025-06-26)
- Zero-Shot Learning for Obsolescence Risk Forecasting (2025-06-26)
- SEZ-HARN: Self-Explainable Zero-shot Human Activity Recognition Network (2025-06-25)
- A Multi-Scale Spatial Attention-Based Zero-Shot Learning Framework for Low-Light Image Enhancement (2025-06-23)
- Generalizable Agent Modeling for Agent Collaboration-Competition Adaptation with Multi-Retrieval and Dynamic Generation (2025-06-20)
- AnyTraverse: An off-road traversability framework with VLM and human operator in the loop (2025-06-20)