Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Masked Modeling Duo: Towards a Universal Audio Pre-training Framework

Daisuke Niizumi, Daiki Takeuchi, Yasunori Ohishi, Noboru Harada, Kunio Kashino

2024-04-09
Tasks: Denoising · Speaker Identification · Environment Sound Classification · Audio Classification · Self-Supervised Learning · Linear evaluation

Paper · PDF · Code (official)

Abstract

Self-supervised learning (SSL) using masked prediction has made great strides in general-purpose audio representation. This study proposes Masked Modeling Duo (M2D), an improved masked prediction SSL, which learns by predicting representations of masked input signals that serve as training signals. Unlike conventional methods, M2D obtains a training signal by encoding only the masked part, encouraging the two networks in M2D to model the input. While M2D improves general-purpose audio representations, a specialized representation is essential for real-world applications, such as in industrial and medical domains. The often confidential and proprietary data in such domains is typically limited in size and has a different distribution from that in pre-training datasets. Therefore, we propose M2D for X (M2D-X), which extends M2D to enable the pre-training of specialized representations for an application X. M2D-X learns from M2D and an additional task and inputs background noise. We make the additional task configurable to serve diverse applications, while the background noise helps learn on small data and forms a denoising task that makes representation robust. With these design choices, M2D-X should learn a representation specialized to serve various application needs. Our experiments confirmed that the representations for general-purpose audio, specialized for the highly competitive AudioSet and speech domain, and a small-data medical task achieve top-level performance, demonstrating the potential of using our models as a universal audio pre-training framework. Our code is available online for future studies at https://github.com/nttcslab/m2d
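The core data flow described in the abstract — an online network that encodes only the visible patches and predicts the representations of the masked patches, while a target network produces the training signal by encoding only the masked part — can be sketched in a few lines. This is a heavily simplified numpy illustration of that flow under stated assumptions, not the authors' implementation: the single linear maps standing in for the ViT encoders and the predictor, the mean-pooling, the shapes, and the EMA decay are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, w):
    """Toy 'encoder': a single linear map standing in for a real ViT."""
    return x @ w

# Hypothetical shapes: 8 patches of a spectrogram, 16-dim features.
num_patches, dim = 8, 16
patches = rng.normal(size=(num_patches, dim))

# Online and target network weights; the target is an EMA of the online one.
w_online = rng.normal(size=(dim, dim)) * 0.1
w_target = w_online.copy()
w_pred = rng.normal(size=(dim, dim)) * 0.1  # predictor weights

# Randomly split patches into masked and visible sets (~60% masked here).
perm = rng.permutation(num_patches)
masked, visible = perm[:5], perm[5:]

# Online branch: encode ONLY the visible patches, then predict the
# (pooled) representation of the masked part.
z_visible = encode(patches[visible], w_online)
pred = z_visible.mean(axis=0, keepdims=True) @ w_pred

# Target branch: the key M2D design choice -- encode ONLY the masked
# patches, so the training signal never sees the visible part.
z_masked = encode(patches[masked], w_target)
target = z_masked.mean(axis=0, keepdims=True)

# Training signal: e.g. mean-squared error between prediction and target.
loss = float(np.mean((pred - target) ** 2))

# EMA update of the target network, as in BYOL-style SSL.
ema = 0.99
w_target = ema * w_target + (1 - ema) * w_online
```

In the actual method the predictor outputs one representation per masked patch (using positional information) rather than a single pooled vector; the pooling above only keeps the sketch short.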

Results

Task                   | Dataset                          | Metric               | Value | Model
Speaker Identification | VoxCeleb1                        | Top-1 Accuracy (%)   | 96.6  | MSM-MAE
Speaker Identification | VoxCeleb1                        | Top-1 Accuracy (%)   | 96.5  | M2D/0.6
Speaker Identification | VoxCeleb1                        | Top-1 Accuracy (%)   | 96.3  | M2D/0.7
Audio Classification   | ESC-50                           | Accuracy (5-fold, %) | 97.2  | M2D-AS/0.7
Audio Classification   | ESC-50                           | Accuracy (5-fold, %) | 96.0  | M2D/0.7
Audio Classification   | ICBHI Respiratory Sound Database | ICBHI Score          | 63.29 | M2D-X/0.7 (η=0.3)
Audio Classification   | ICBHI Respiratory Sound Database | ICBHI Score          | 62.73 | M2D/0.7 (η=0.3)
Audio Classification   | AudioSet                         | mAP (%)              | 48.5  | M2D-AS/0.7
Audio Classification   | AudioSet                         | mAP (%)              | 47.9  | M2D/0.7

Related Papers

fastWDM3D: Fast and Accurate 3D Healthy Tissue Inpainting (2025-07-17)
Diffuman4D: 4D Consistent Human View Synthesis from Sparse-View Videos with Spatio-Temporal Diffusion Models (2025-07-17)
Task-Specific Audio Coding for Machines: Machine-Learned Latent Features Are Codes for That Machine (2025-07-17)
MUPAX: Multidimensional Problem Agnostic eXplainable AI (2025-07-17)
A Semi-Supervised Learning Method for the Identification of Bad Exposures in Large Imaging Surveys (2025-07-17)
Similarity-Guided Diffusion for Contrastive Sequential Recommendation (2025-07-16)
HUG-VAS: A Hierarchical NURBS-Based Generative Model for Aortic Geometry Synthesis and Controllable Editing (2025-07-15)
AirLLM: Diffusion Policy-based Adaptive LoRA for Remote Fine-Tuning of LLM over the Air (2025-07-15)