Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Dynamic Convolutional Neural Networks as Efficient Pre-trained Audio Models

Florian Schmid, Khaled Koutini, Gerhard Widmer

2023-10-24
Tasks: Audio Classification, Audio Tagging, Knowledge Distillation, Instrument Recognition
Links: Paper, PDF, Code (official)

Abstract

The introduction of large-scale audio datasets, such as AudioSet, paved the way for Transformers to conquer the audio domain and replace CNNs as the state-of-the-art neural network architecture for many tasks. Audio Spectrogram Transformers are excellent at exploiting large datasets, creating powerful pre-trained models that surpass CNNs when fine-tuned on downstream tasks. However, current popular Audio Spectrogram Transformers are demanding in terms of computational complexity compared to CNNs. Recently, we have shown that, by employing Transformer-to-CNN Knowledge Distillation, efficient CNNs can catch up with and even outperform Transformers on large datasets. In this work, we extend this line of research and increase the capacity of efficient CNNs by introducing dynamic CNN blocks, constructed from dynamic non-linearities, dynamic convolutions, and attention mechanisms. We show that these dynamic CNNs outperform traditional efficient CNNs, in terms of the performance-complexity trade-off and parameter efficiency, at the task of audio tagging on the large-scale AudioSet. Our experiments further indicate that the introduced dynamic CNNs achieve better performance on downstream tasks and scale up well, attaining Transformer-level performance and even outperforming Transformers on AudioSet and several downstream tasks.
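The core idea behind the dynamic convolutions mentioned in the abstract is that the kernel is not fixed: a small attention head computes input-dependent weights over a bank of K candidate kernels, which are then aggregated into one effective kernel per input. The sketch below illustrates this mechanism in NumPy; the shapes, the 1-D setting, the temperature value, and all names (`dynamic_conv1d`, `attn_w`, etc.) are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def softmax(x, tau=1.0):
    """Numerically stable softmax with temperature tau."""
    z = x / tau
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def dynamic_conv1d(x, kernels, attn_w, attn_b, tau=30.0):
    """Input-dependent mixture of K base kernels (a generic
    dynamic-convolution sketch; shapes and tau are assumptions).

    x:       (C_in, T)           input feature map
    kernels: (K, C_out, C_in, k) bank of K candidate kernels
    attn_w:  (K, C_in)           weights of the tiny attention head
    attn_b:  (K,)                its bias
    """
    K, C_out, C_in, k = kernels.shape
    # 1) Global average pooling -> per-input context descriptor.
    ctx = x.mean(axis=1)                          # (C_in,)
    # 2) Attention over the K kernels; a high temperature softens the
    #    distribution, which is commonly used to stabilize training.
    pi = softmax(attn_w @ ctx + attn_b, tau)      # (K,)
    # 3) Aggregate one effective kernel for this particular input.
    w = np.tensordot(pi, kernels, axes=1)         # (C_out, C_in, k)
    # 4) Plain 'valid' convolution with the aggregated kernel.
    T_out = x.shape[1] - k + 1
    y = np.empty((C_out, T_out))
    for t in range(T_out):
        y[:, t] = np.tensordot(w, x[:, t:t + k], axes=([1, 2], [0, 1]))
    return y, pi
```

The extra cost over a static convolution is only the tiny attention head and the kernel aggregation, which is why such layers can raise capacity while keeping inference cheap compared to a full Transformer.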

Results

Task                    Dataset       Metric                  Value  Model
Audio Classification    ESC-50        Accuracy (5-fold)       97.4   DyMN-L
Audio Classification    ESC-50        Top-1 Accuracy          97.4   DyMN-L
Audio Classification    FSD50K        mAP                     65.6   MN
Audio Classification    FSD50K        mAP                     65.5   DyMN-L
Audio Classification    AudioSet      Test mAP                0.49   DyMN-L (Audio-Only, Single)
Audio Tagging           AudioSet      mean average precision  0.49   DyMN-L (Audio-Only, Single)
Instrument Recognition  OpenMIC-2018  mean average precision  0.855  DyMN-L

Related Papers

Visual-Language Model Knowledge Distillation Method for Image Quality Assessment (2025-07-21)
Task-Specific Audio Coding for Machines: Machine-Learned Latent Features Are Codes for That Machine (2025-07-17)
MUPAX: Multidimensional Problem Agnostic eXplainable AI (2025-07-17)
Uncertainty-Aware Cross-Modal Knowledge Distillation with Prototype Learning for Multimodal Brain-Computer Interfaces (2025-07-17)
DVFL-Net: A Lightweight Distilled Video Focal Modulation Network for Spatio-Temporal Action Recognition (2025-07-16)
HanjaBridge: Resolving Semantic Ambiguity in Korean LLMs via Hanja-Augmented Pre-Training (2025-07-15)
Feature Distillation is the Better Choice for Model-Heterogeneous Federated Learning (2025-07-14)
KAT-V1: Kwai-AutoThink Technical Report (2025-07-11)