Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Conv-TasNet: Surpassing Ideal Time-Frequency Magnitude Masking for Speech Separation

Yi Luo, Nima Mesgarani

2018-09-20 · Speech Separation · Multi-task Audio Source Separation · Speech Enhancement · Music Source Separation · Speaker Separation

Paper · PDF · Code (official)

Abstract

Single-channel, speaker-independent speech separation methods have recently seen great progress. However, the accuracy, latency, and computational cost of such methods remain insufficient. The majority of the previous methods have formulated the separation problem through the time-frequency representation of the mixed signal, which has several drawbacks, including the decoupling of the phase and magnitude of the signal, the suboptimality of time-frequency representation for speech separation, and the long latency in calculating the spectrograms. To address these shortcomings, we propose a fully-convolutional time-domain audio separation network (Conv-TasNet), a deep learning framework for end-to-end time-domain speech separation. Conv-TasNet uses a linear encoder to generate a representation of the speech waveform optimized for separating individual speakers. Speaker separation is achieved by applying a set of weighting functions (masks) to the encoder output. The modified encoder representations are then inverted back to the waveforms using a linear decoder. The masks are found using a temporal convolutional network (TCN) consisting of stacked 1-D dilated convolutional blocks, which allows the network to model the long-term dependencies of the speech signal while maintaining a small model size. The proposed Conv-TasNet system significantly outperforms previous time-frequency masking methods in separating two- and three-speaker mixtures. Additionally, Conv-TasNet surpasses several ideal time-frequency magnitude masks in two-speaker speech separation as evaluated by both objective distortion measures and subjective quality assessment by human listeners. Finally, Conv-TasNet has a significantly smaller model size and a shorter minimum latency, making it a suitable solution for both offline and real-time speech separation applications.
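The encoder-masker-decoder pipeline described in the abstract can be sketched as follows. This is a minimal illustrative NumPy version, not the authors' implementation: the frame size, number of filters, and random weights are placeholder assumptions, and the deep dilated-TCN mask network is replaced by a random-softmax stub.

```python
import numpy as np

def frame(signal, win, hop):
    """Slice a 1-D waveform into overlapping frames of length `win`."""
    n_frames = 1 + (len(signal) - win) // hop
    return np.stack([signal[i * hop : i * hop + win] for i in range(n_frames)])

def overlap_add(frames, hop, length):
    """Invert framing by summing overlapping frames back into a waveform."""
    out = np.zeros(length)
    for i, f in enumerate(frames):
        out[i * hop : i * hop + len(f)] += f
    return out

rng = np.random.default_rng(0)
win, hop, n_filters, n_speakers = 16, 8, 64, 2  # placeholder hyperparameters

# Linear encoder / decoder: learned basis matrices in the paper,
# random stand-ins here.
encoder = rng.standard_normal((win, n_filters)) * 0.1
decoder = rng.standard_normal((n_filters, win)) * 0.1

mixture = rng.standard_normal(4000)      # dummy two-speaker mixture
frames = frame(mixture, win, hop)        # shape: [n_frames, win]
rep = frames @ encoder                   # encoder output: [n_frames, n_filters]

# In Conv-TasNet the masks come from a TCN of stacked 1-D dilated
# convolutional blocks; a random softmax stub stands in for that network.
logits = rng.standard_normal((n_speakers, *rep.shape))
masks = np.exp(logits) / np.exp(logits).sum(axis=0)  # masks sum to 1 per bin

# Apply each speaker's mask to the encoder output, then decode + overlap-add.
separated = [overlap_add((rep * m) @ decoder, hop, len(mixture)) for m in masks]
print([s.shape for s in separated])  # two waveforms, same length as the mixture
```

Because the encoder, masker, and decoder are all convolutional and operate on short waveform frames, the whole chain is differentiable end-to-end and the minimum latency is set by the frame length rather than by a spectrogram window.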

Results

| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Music Source Separation | MUSDB18 | SDR (avg) | 6.32 | Conv-TasNet (extra) |
| Music Source Separation | MUSDB18 | SDR (bass) | 7 | Conv-TasNet (extra) |
| Music Source Separation | MUSDB18 | SDR (drums) | 7.11 | Conv-TasNet (extra) |
| Music Source Separation | MUSDB18 | SDR (vocals) | 6.74 | Conv-TasNet (extra) |
| Music Source Separation | MUSDB18 | SDR (avg) | 5.73 | Conv-TasNet |
| Music Source Separation | MUSDB18 | SDR (bass) | 5.66 | Conv-TasNet |
| Music Source Separation | MUSDB18 | SDR (drums) | 6.08 | Conv-TasNet |
| Music Source Separation | MUSDB18 | SDR (other) | 4.37 | Conv-TasNet |
| Music Source Separation | MUSDB18 | SDR (vocals) | 6.81 | Conv-TasNet |
| Speech Separation | WSJ0-2mix | Number of parameters (M) | 5.1 | Conv-TasNet |
| Speech Separation | WSJ0-2mix | SDRi | 15.6 | Conv-TasNet |
| Speech Separation | WSJ0-2mix | SI-SDRi | 15.3 | Conv-TasNet |
| Speech Enhancement | EARS-WHAM | DNSMOS | 3.47 | Conv-TasNet |
| Speech Enhancement | EARS-WHAM | ESTOI | 0.7 | Conv-TasNet |
| Speech Enhancement | EARS-WHAM | PESQ-WB | 2.31 | Conv-TasNet |
| Speech Enhancement | EARS-WHAM | POLQA | 2.73 | Conv-TasNet |
| Speech Enhancement | EARS-WHAM | SI-SDR | 16.93 | Conv-TasNet |
| Speech Enhancement | EARS-WHAM | SIGMOS | 2.69 | Conv-TasNet |
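SI-SDR (scale-invariant signal-to-distortion ratio), reported in the results above, measures distortion after rescaling the reference to best fit the estimate, so it cannot be gamed by changing output gain; SI-SDRi is simply the improvement over the metric computed on the unprocessed mixture. A minimal sketch (function name and test signals are illustrative, not from the paper's code):

```python
import numpy as np

def si_sdr(estimate, reference, eps=1e-8):
    """Scale-invariant SDR in dB: project the estimate onto the reference,
    then compare target energy to residual energy."""
    reference = reference - reference.mean()
    estimate = estimate - estimate.mean()
    alpha = np.dot(estimate, reference) / (np.dot(reference, reference) + eps)
    target = alpha * reference   # scaled projection of the reference
    noise = estimate - target    # everything not explained by the reference
    return 10 * np.log10(np.dot(target, target) / (np.dot(noise, noise) + eps))

rng = np.random.default_rng(0)
clean = rng.standard_normal(8000)
noisy = clean + 0.1 * rng.standard_normal(8000)  # lightly corrupted estimate
print(round(si_sdr(noisy, clean), 1))            # roughly 20 dB for 0.1-scale noise
```

Note that scaling `noisy` by any constant leaves the score unchanged, which is the point of the scale invariance.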

Related Papers

- Autoregressive Speech Enhancement via Acoustic Tokens (2025-07-17)
- P.808 Multilingual Speech Enhancement Testing: Approach and Results of URGENT 2025 Challenge (2025-07-15)
- Dynamic Slimmable Networks for Efficient Speech Separation (2025-07-08)
- Robust One-step Speech Enhancement via Consistency Distillation (2025-07-08)
- Speech Quality Assessment Model Based on Mixture of Experts: System-Level Performance Enhancement and Utterance-Level Challenge Analysis (2025-07-08)
- MambAttention: Mamba with Multi-Head Attention for Generalizable Single-Channel Speech Enhancement (2025-07-01)
- Frequency-Weighted Training Losses for Phoneme-Level DNN-based Speech Enhancement (2025-06-23)
- EDNet: A Distortion-Agnostic Speech Enhancement Framework with Gating Mamba Mechanism and Phase Shift-Invariant Training (2025-06-19)