Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


SSAST: Self-Supervised Audio Spectrogram Transformer

Yuan Gong, Cheng-I Jeff Lai, Yu-An Chung, James Glass

2021-10-19 · Keyword Spotting · Speaker Identification · Audio Classification · Self-Supervised Learning · Classification · Emotion Recognition

Abstract

Recently, neural networks based purely on self-attention, such as the Vision Transformer (ViT), have been shown to outperform deep learning models constructed with convolutional neural networks (CNNs) on various vision tasks, thus extending the success of Transformers, which were originally developed for language processing, to the vision domain. A recent study showed that a similar methodology can also be applied to the audio domain. Specifically, the Audio Spectrogram Transformer (AST) achieves state-of-the-art results on various audio classification benchmarks. However, pure Transformer models tend to require more training data compared to CNNs, and the success of the AST relies on supervised pretraining that requires a large amount of labeled data and a complex training pipeline, thus limiting the practical usage of AST. This paper focuses on audio and speech classification, and aims to reduce the need for large amounts of labeled data for AST by leveraging self-supervised learning using unlabeled data. Specifically, we propose to pretrain the AST model with joint discriminative and generative masked spectrogram patch modeling (MSPM) using unlabeled audio from AudioSet and Librispeech. We evaluate our pretrained models on both audio and speech classification tasks including audio event classification, keyword spotting, emotion recognition, and speaker identification. The proposed self-supervised framework significantly boosts AST performance on all tasks, with an average improvement of 60.9%, leading to similar or even better results than a supervised pretrained AST. To the best of our knowledge, it is the first patch-based self-supervised learning framework in the audio and speech domain, and also the first self-supervised learning framework for AST.
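To make the pretraining objective concrete, below is a minimal, self-contained sketch of masked spectrogram patch modeling (MSPM) with a joint discriminative and generative loss, as the abstract describes it. Everything here (the module sizes, the two-layer encoder, the InfoNCE-style discriminative head) is a simplified illustration under our own assumptions, not the authors' implementation; see the official code release for the real pipeline.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class MSPMSketch(nn.Module):
        """Toy masked spectrogram patch modeling: mask random patches, encode,
        then jointly (a) discriminate each true patch among the other masked
        patches and (b) reconstruct it."""
        def __init__(self, patch_dim=256, embed_dim=128, num_masked=16):
            super().__init__()
            self.embed = nn.Linear(patch_dim, embed_dim)            # patch embedding
            self.mask_token = nn.Parameter(torch.zeros(embed_dim))  # learned [MASK] embedding
            layer = nn.TransformerEncoderLayer(embed_dim, nhead=8, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, num_layers=2)
            self.dis_head = nn.Linear(embed_dim, patch_dim)  # discriminative head
            self.gen_head = nn.Linear(embed_dim, patch_dim)  # generative head
            self.num_masked = num_masked

        def forward(self, patches):
            # patches: (batch, num_patches, patch_dim), flattened spectrogram patches
            B, N, D = patches.shape
            # Pick num_masked random patch positions per example.
            idx = torch.rand(B, N, device=patches.device).argsort(dim=1)[:, :self.num_masked]
            mask = torch.zeros(B, N, dtype=torch.bool, device=patches.device)
            mask.scatter_(1, idx, True)
            # Replace masked patch embeddings with the learned mask token.
            x = self.embed(patches)
            x = torch.where(mask.unsqueeze(-1), self.mask_token.expand_as(x), x)
            h = self.encoder(x)
            h_masked = h[mask].view(B * self.num_masked, -1)
            target = patches[mask].view(B * self.num_masked, D)
            # Generative objective: reconstruct the original masked patches (MSE).
            loss_g = F.mse_loss(self.gen_head(h_masked), target)
            # Discriminative objective: InfoNCE-style classification in which each
            # prediction must match its own patch against all other masked patches.
            logits = self.dis_head(h_masked) @ target.t() / D ** 0.5
            labels = torch.arange(logits.size(0), device=logits.device)
            loss_d = F.cross_entropy(logits, labels)
            return loss_d + loss_g

    model = MSPMSketch()
    spec_patches = torch.randn(4, 100, 256)  # e.g. 100 flattened 16x16 patches per clip
    loss = model(spec_patches)
    loss.backward()

The SSAST-PATCH and SSAST-FRAME variants in the results below correspond to two ways of slicing the spectrogram: square time-frequency patches versus full-frequency frames.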

Results

Task | Dataset | Metric | Value | Model
Speaker Identification | VoxCeleb1 | Top-1 Accuracy (%) | 80.8 | SSAST-FRAME
Speaker Identification | VoxCeleb1 | Top-1 Accuracy (%) | 64.2 | SSAST-PATCH
Audio Classification | AudioSet (balanced) | mAP | 31.0 | SSAST-PATCH
Audio Classification | AudioSet (balanced) | mAP | 29.2 | SSAST-FRAME
Spoken Command Recognition | Speech Commands V2 | Accuracy (%) | 98.1 | SSAST-FRAME
Spoken Command Recognition | Speech Commands V2 | Accuracy (%) | 98.0 | SSAST-PATCH
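For reference, the mAP figures above are the standard multi-label AudioSet metric: average precision computed per class, then macro-averaged over classes. A minimal sketch of that computation, with random data standing in for real labels and model scores (scikit-learn's average_precision_score is assumed available):

    import numpy as np
    from sklearn.metrics import average_precision_score

    rng = np.random.default_rng(0)
    n_clips, n_classes = 200, 10  # AudioSet has 527 classes; 10 here for brevity
    y_true = rng.integers(0, 2, size=(n_clips, n_classes))  # multi-hot labels
    y_score = rng.random((n_clips, n_classes))              # model scores

    # Macro average of per-class average precision = mAP.
    mAP = average_precision_score(y_true, y_score, average="macro")
    print(f"mAP: {mAP:.3f}")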

Related Papers

Long-Short Distance Graph Neural Networks and Improved Curriculum Learning for Emotion Recognition in Conversation (2025-07-21)
Task-Specific Audio Coding for Machines: Machine-Learned Latent Features Are Codes for That Machine (2025-07-17)
MUPAX: Multidimensional Problem Agnostic eXplainable AI (2025-07-17)
A Semi-Supervised Learning Method for the Identification of Bad Exposures in Large Imaging Surveys (2025-07-17)
Adversarial attacks to image classification systems using evolutionary algorithms (2025-07-17)
Camera-based implicit mind reading by capturing higher-order semantic dynamics of human gaze within environmental context (2025-07-17)
Efficient Calisthenics Skills Classification through Foreground Instance Selection and Depth Estimation (2025-07-16)
Safeguarding Federated Learning-based Road Condition Classification (2025-07-16)