


Efficient Large-scale Audio Tagging via Transformer-to-CNN Knowledge Distillation

Florian Schmid, Khaled Koutini, Gerhard Widmer

2022-11-09 · Audio Classification · Audio Tagging · Knowledge Distillation

Abstract

Audio Spectrogram Transformer models rule the field of Audio Tagging, outperforming the previously dominant Convolutional Neural Networks (CNNs). Their superiority rests on the ability to scale up and to exploit large-scale datasets such as AudioSet. However, compared to CNNs, Transformers are demanding in terms of model size and computational requirements. We propose a training procedure for efficient CNNs based on offline Knowledge Distillation (KD) from high-performing yet complex Transformers. The proposed training schema, together with an efficient CNN design based on MobileNetV3, results in models that outperform previous solutions in terms of parameter and computational efficiency as well as prediction performance. We provide models at different complexity levels, scaling from low-complexity models up to a new state-of-the-art performance of 0.483 mAP on AudioSet. Source code is available at: https://github.com/fschmid56/EfficientAT
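To make the "offline KD" idea concrete, below is a minimal PyTorch sketch of a distillation objective for multi-label audio tagging. Because the teacher's predictions are precomputed and stored, training only runs the student CNN. The function name, the `kd_lambda` weighting, and the use of binary cross-entropy against the teacher's sigmoid probabilities are illustrative assumptions, not the paper's verbatim configuration.

```python
import torch
import torch.nn.functional as F

def offline_kd_loss(student_logits, labels, teacher_logits, kd_lambda=0.1):
    """Sketch of an offline KD objective for multi-label tagging.

    student_logits: raw student outputs, shape (batch, num_classes)
    labels:         multi-hot ground-truth targets, same shape
    teacher_logits: precomputed transformer logits loaded from disk
                    ("offline" KD: the teacher is never run in training)
    kd_lambda:      illustrative weight between label and distillation loss
    """
    # Hard-label loss: AudioSet tagging is multi-label, so use
    # binary cross-entropy per class rather than softmax cross-entropy.
    label_loss = F.binary_cross_entropy_with_logits(student_logits, labels)

    # Distillation loss: push the student's per-class probabilities
    # toward the teacher's (soft targets).
    distill_loss = F.binary_cross_entropy_with_logits(
        student_logits, torch.sigmoid(teacher_logits))

    return kd_lambda * label_loss + (1.0 - kd_lambda) * distill_loss
```

Storing teacher logits once and reusing them every epoch is what keeps this cheap: the expensive transformer forward pass is paid a single time per clip, while the MobileNetV3-style student trains at CNN cost.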

Results

| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Audio Classification | ESC-50 | Accuracy (5-fold) | 97.45 | mn40_as |
| Audio Classification | ESC-50 | Top-1 Accuracy | 97.45 | mn40_as |
| Audio Classification | AudioSet | Test mAP | 0.498 | mn40_as (Ensemble) |
| Audio Classification | AudioSet | Test mAP | 0.483 | mn40_as (Single) |
| Audio Tagging | AudioSet | Mean Average Precision | 0.498 | mn40_as (Ensemble) |
| Audio Tagging | AudioSet | Mean Average Precision | 0.483 | mn40_as (Single) |
| Classification | ESC-50 | Accuracy (5-fold) | 97.45 | mn40_as |
| Classification | ESC-50 | Top-1 Accuracy | 97.45 | mn40_as |
| Classification | AudioSet | Test mAP | 0.498 | mn40_as (Ensemble) |
| Classification | AudioSet | Test mAP | 0.483 | mn40_as (Single) |

Related Papers

Visual-Language Model Knowledge Distillation Method for Image Quality Assessment (2025-07-21)
Task-Specific Audio Coding for Machines: Machine-Learned Latent Features Are Codes for That Machine (2025-07-17)
MUPAX: Multidimensional Problem Agnostic eXplainable AI (2025-07-17)
Uncertainty-Aware Cross-Modal Knowledge Distillation with Prototype Learning for Multimodal Brain-Computer Interfaces (2025-07-17)
DVFL-Net: A Lightweight Distilled Video Focal Modulation Network for Spatio-Temporal Action Recognition (2025-07-16)
HanjaBridge: Resolving Semantic Ambiguity in Korean LLMs via Hanja-Augmented Pre-Training (2025-07-15)
Feature Distillation is the Better Choice for Model-Heterogeneous Federated Learning (2025-07-14)
KAT-V1: Kwai-AutoThink Technical Report (2025-07-11)