Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Multi-View Attention Transfer for Efficient Speech Enhancement

WooSeok Shin, Hyun Joon Park, Jin Sob Kim, Byung Hoon Lee, Sung Won Han

2022-08-22 · Knowledge Distillation · Speech Enhancement

Paper · PDF

Abstract

Recent deep learning models have achieved high performance in speech enhancement; however, it remains challenging to obtain a fast, low-complexity model without significant performance degradation. Previous knowledge distillation studies on speech enhancement could not solve this problem because their output-level distillation methods are not well suited to the speech enhancement task in some respects. In this study, we propose multi-view attention transfer (MV-AT), a feature-based distillation method, to obtain efficient speech enhancement models in the time domain. Based on a multi-view feature extraction model, MV-AT transfers the multi-view knowledge of the teacher network to the student network without additional parameters. Experimental results show that the proposed method consistently improves the performance of student models of various sizes on the Valentini and deep noise suppression (DNS) datasets. MANNER-S-8.1GF with the proposed method, a lightweight model for efficient deployment, requires 15.4x fewer parameters and 4.71x fewer floating-point operations (FLOPs) than the baseline model while achieving comparable performance.
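The abstract describes a feature-based distillation in which attention-like summaries of the teacher's intermediate features guide the student without adding parameters. As a rough illustration of the general idea (not the paper's exact MV-AT formulation), the sketch below shows a classic attention-transfer loss: each feature map is reduced to a normalized channel-wise energy map, and the student is penalized for the L2 distance between its map and the teacher's. Function names and the NumPy setting are assumptions for illustration.

```python
import numpy as np

def attention_map(feat):
    # feat: (channels, time) intermediate feature map.
    # Collapse channels by summing squared activations, then L2-normalize,
    # so teacher and student maps are comparable even with different widths.
    amap = np.sum(feat ** 2, axis=0)
    return amap / (np.linalg.norm(amap) + 1e-8)

def attention_transfer_loss(teacher_feat, student_feat):
    # L2 distance between normalized attention maps; no extra learnable
    # parameters are introduced, matching the spirit of feature distillation.
    return float(np.linalg.norm(attention_map(teacher_feat)
                                - attention_map(student_feat)))
```

Because the maps are normalized and channel-collapsed, a narrow student (here 16 channels) can be matched against a wide teacher (64 channels) without projection layers. In a multi-view scheme such as MV-AT, several such views of the features would each contribute a term to the total distillation loss.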

Results

Task               | Dataset            | Metric    | Value | Model
Speech Enhancement | VoiceBank + DEMAND | CBAK      | 3.61  | MANNER-S + MV-AT (8.1GF)
Speech Enhancement | VoiceBank + DEMAND | COVL      | 3.82  | MANNER-S + MV-AT (8.1GF)
Speech Enhancement | VoiceBank + DEMAND | CSIG      | 4.45  | MANNER-S + MV-AT (8.1GF)
Speech Enhancement | VoiceBank + DEMAND | PESQ (wb) | 3.12  | MANNER-S + MV-AT (8.1GF)
Speech Enhancement | VoiceBank + DEMAND | Para. (M) | 1.38  | MANNER-S + MV-AT (8.1GF)
Speech Enhancement | VoiceBank + DEMAND | STOI      | 95    | MANNER-S + MV-AT (8.1GF)
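The 1.38M parameter count in the table can be combined with the 15.4x reduction claimed in the abstract to recover the implied baseline size. The snippet below is a simple arithmetic check, assuming the reported ratio is taken against this student model's parameter count.

```python
# Assumption: the 15.4x parameter reduction in the abstract is measured
# against MANNER-S-8.1GF's 1.38M parameters from the results table.
student_params_m = 1.38   # Para. (M) for MANNER-S + MV-AT (8.1GF)
param_reduction = 15.4    # reported reduction factor vs. the baseline
baseline_params_m = student_params_m * param_reduction
print(f"Implied baseline parameters: ~{baseline_params_m:.1f}M")  # ~21.3M
```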

Related Papers

Visual-Language Model Knowledge Distillation Method for Image Quality Assessment (2025-07-21)
Uncertainty-Aware Cross-Modal Knowledge Distillation with Prototype Learning for Multimodal Brain-Computer Interfaces (2025-07-17)
Autoregressive Speech Enhancement via Acoustic Tokens (2025-07-17)
DVFL-Net: A Lightweight Distilled Video Focal Modulation Network for Spatio-Temporal Action Recognition (2025-07-16)
HanjaBridge: Resolving Semantic Ambiguity in Korean LLMs via Hanja-Augmented Pre-Training (2025-07-15)
P.808 Multilingual Speech Enhancement Testing: Approach and Results of URGENT 2025 Challenge (2025-07-15)
Feature Distillation is the Better Choice for Model-Heterogeneous Federated Learning (2025-07-14)
KAT-V1: Kwai-AutoThink Technical Report (2025-07-11)