Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Speech Enhancement and Dereverberation with Diffusion-based Generative Models

Julius Richter, Simon Welker, Jean-Marie Lemercier, Bunlong Lay, Timo Gerkmann

2022-08-11 · IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2023 · Speech Enhancement · Speech Dereverberation

Paper · PDF · Code (official)

Abstract

In this work, we build upon our previous publication and use diffusion-based generative models for speech enhancement. We present a detailed overview of the diffusion process, which is based on a stochastic differential equation, and delve into an extensive theoretical examination of its implications. In contrast to usual conditional generation tasks, we do not start the reverse process from pure Gaussian noise but from a mixture of noisy speech and Gaussian noise. This matches our forward process, which moves from clean speech to noisy speech by including a drift term. We show that this procedure enables using only 30 diffusion steps to generate high-quality clean speech estimates. By adapting the network architecture, we are able to significantly improve the speech enhancement performance, indicating that the network, rather than the formalism, was the main limitation of our original approach. In an extensive cross-dataset evaluation, we show that the improved method can compete with recent discriminative models and achieves better generalization when evaluated on a different corpus than the one used for training. We complement the results with an instrumental evaluation using real-world noisy recordings and a listening experiment, in which our proposed method is rated best. Examining different sampler configurations for solving the reverse process allows us to balance the performance and computational speed of the proposed method. Moreover, we show that the proposed method is also suitable for dereverberation and is thus not limited to additive background noise removal. Code and audio examples are available online at https://github.com/sp-uhh/sgmse
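The abstract describes a forward SDE that drifts clean speech toward the noisy mixture, and a reverse process that is started from noisy speech plus Gaussian noise and solved in about 30 steps. The sampling loop below is a minimal Euler-Maruyama sketch of that idea, not the authors' implementation: the drift form `gamma * (y - x)`, the geometric noise schedule, and all parameter values are illustrative assumptions, and `score_fn` stands in for the trained score network.

```python
import numpy as np

def reverse_sample(y, score_fn, n_steps=30, gamma=1.5,
                   sigma_min=0.05, sigma_max=0.5, seed=0):
    """Euler-Maruyama sampler for a reverse SDE whose forward process
    drifts clean speech x toward the noisy mixture y:

        dx_t = gamma * (y - x_t) dt + g(t) dw_t        (illustrative form)

    Because the forward process ends near y, the reverse process starts
    from y plus Gaussian noise rather than from pure noise.
    All parameter values here are hypothetical, not the paper's.
    """
    rng = np.random.default_rng(seed)
    log_ratio = np.log(sigma_max / sigma_min)

    def g(t):
        # Geometric (variance-exploding style) diffusion coefficient.
        return sigma_min * (sigma_max / sigma_min) ** t * np.sqrt(2.0 * log_ratio)

    # Start of the reverse process: noisy speech + Gaussian noise.
    x = y + sigma_max * rng.standard_normal(y.shape)
    dt = 1.0 / n_steps
    for i in range(n_steps):
        t = 1.0 - i * dt
        # Reverse-time drift: forward drift minus g(t)^2 times the score.
        drift = gamma * (y - x) - g(t) ** 2 * score_fn(x, y, t)
        x = x - drift * dt + g(t) * np.sqrt(dt) * rng.standard_normal(y.shape)
    return x

# Usage with a placeholder score function (a trained network in practice):
y = np.zeros(16)                        # stand-in for a noisy spectrogram frame
dummy_score = lambda x, y, t: -(x - y)  # toy score, not a learned model
x_hat = reverse_sample(y, dummy_score)  # clean-speech estimate, same shape as y
```

In the actual method, the score network is conditioned on the noisy input `y` and operates on complex spectrograms; this sketch only shows how the reverse integration from a noisy-speech starting point can keep the step count as low as the 30 steps quoted in the abstract.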

Results

Task               | Dataset           | Metric     | Value | Model
-------------------|-------------------|------------|-------|--------------------------
Speech Enhancement | VoiceBank + DEMAND| PESQ (wb)  | 2.93  | SGMSE+ (Diffusion Model)
Speech Enhancement | EARS-WHAM         | DNSMOS     | 3.88  | SGMSE+
Speech Enhancement | EARS-WHAM         | ESTOI      | 0.73  | SGMSE+
Speech Enhancement | EARS-WHAM         | PESQ-WB    | 2.5   | SGMSE+
Speech Enhancement | EARS-WHAM         | POLQA      | 3.4   | SGMSE+
Speech Enhancement | EARS-WHAM         | SI-SDR     | 16.78 | SGMSE+
Speech Enhancement | EARS-WHAM         | SIGMOS     | 3.41  | SGMSE+
Speech Enhancement | EARS-Reverb       | ESTOI      | 0.85  | SGMSE+
Speech Enhancement | EARS-Reverb       | MOS Reverb | 4.73  | SGMSE+
Speech Enhancement | EARS-Reverb       | PESQ-WB    | 3.03  | SGMSE+
Speech Enhancement | EARS-Reverb       | SI-SDR     | 5.79  | SGMSE+
Speech Enhancement | EARS-Reverb       | SIGMOS     | 3.49  | SGMSE+

Related Papers

Autoregressive Speech Enhancement via Acoustic Tokens (2025-07-17)
P.808 Multilingual Speech Enhancement Testing: Approach and Results of URGENT 2025 Challenge (2025-07-15)
Robust One-step Speech Enhancement via Consistency Distillation (2025-07-08)
Speech Quality Assessment Model Based on Mixture of Experts: System-Level Performance Enhancement and Utterance-Level Challenge Analysis (2025-07-08)
MambAttention: Mamba with Multi-Head Attention for Generalizable Single-Channel Speech Enhancement (2025-07-01)
Frequency-Weighted Training Losses for Phoneme-Level DNN-based Speech Enhancement (2025-06-23)
EDNet: A Distortion-Agnostic Speech Enhancement Framework with Gating Mamba Mechanism and Phase Shift-Invariant Training (2025-06-19)
A Comparative Evaluation of Deep Learning Models for Speech Enhancement in Real-World Noisy Environments (2025-06-17)