Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Separate and Reconstruct: Asymmetric Encoder-Decoder for Speech Separation

Ui-Hyeop Shin, Sangyoun Lee, Taehan Kim, Hyung-Min Park

Published: 2024-06-10 · Tasks: Speech Separation, Chunking
Links: Paper · PDF · Code (official)

Abstract

In speech separation, time-domain approaches have successfully replaced the time-frequency domain with latent sequence features from a learnable encoder. Conventionally, these features are separated into speaker-specific ones only at the final stage of the network. Instead, we propose a more intuitive strategy that separates features earlier by expanding the feature sequence with an extra speaker dimension. To achieve this, an asymmetric strategy is presented in which the encoder and decoder are partitioned to perform distinct roles in the separation task. The encoder analyzes features, and its output is split into as many sequences as there are speakers to be separated. The separated sequences are then reconstructed by a weight-shared decoder, which also performs cross-speaker processing. Without relying on speaker information, the weight-shared network in the decoder directly learns to discriminate features using the separation objective. In addition, to improve performance, traditional methods have extended the sequence length, leading to the adoption of dual-path models, which handle the much longer sequences by segmenting them into chunks. To address this, we introduce global and local Transformer blocks that handle long sequences directly and more efficiently, without chunking or dual-path processing. The experimental results demonstrate that the asymmetric structure is effective and that the combination of the proposed global and local Transformer blocks can fully replace the inter- and intra-chunk processing of the dual-path structure. Finally, the presented model, combining both of these, achieved state-of-the-art performance with much less computation on various benchmark datasets.
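The split-then-reconstruct flow described in the abstract — encoder analysis, early expansion of the feature sequence with a speaker dimension, then reconstruction by a weight-shared decoder with cross-speaker interaction — can be sketched in NumPy. All shapes, weight names, and the specific cross-speaker operation (mean-centering over the speaker axis) are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def encoder(x, W_enc):
    # Analysis stage: project each frame of the mixture into a latent space.
    return np.tanh(x @ W_enc)  # [T, D]

def split_speakers(feat, W_split, num_spk):
    # Early separation: expand the sequence with an extra speaker dimension,
    # instead of separating at the final stage of the network.
    expanded = feat @ W_split                 # [T, num_spk * D]
    T = expanded.shape[0]
    D = expanded.shape[1] // num_spk
    return expanded.reshape(T, num_spk, D).transpose(1, 0, 2)  # [num_spk, T, D]

def shared_decoder(spk_feats, W_dec):
    # Reconstruction stage: the SAME weights refine every speaker stream
    # (weight sharing), with a toy cross-speaker interaction (subtracting
    # the mean over the speaker axis) standing in for cross-speaker processing.
    cross = spk_feats - spk_feats.mean(axis=0, keepdims=True)
    return np.tanh(cross @ W_dec)             # [num_spk, T, D]

T, D, num_spk = 16, 8, 2
x = rng.standard_normal((T, D))               # latent mixture sequence
W_enc = rng.standard_normal((D, D))
W_split = rng.standard_normal((D, num_spk * D))
W_dec = rng.standard_normal((D, D))

feat = encoder(x, W_enc)
streams = split_speakers(feat, W_split, num_spk)
out = shared_decoder(streams, W_dec)
print(out.shape)  # (2, 16, 8): one refined sequence per speaker
```

Because the decoder weights are shared across streams and no speaker labels are used, discrimination between streams must come entirely from the separation objective, as the abstract notes.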
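The global/local Transformer idea — handling a long sequence directly rather than segmenting it into chunks for dual-path inter/intra-chunk processing — can be illustrated with plain dot-product attention: a local block restricted to a banded window, plus a global block attending to a strided summary of the sequence. This is a generic sketch of the concept, not the paper's block design; the window size, stride, and single-head attention are assumptions:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def attention(q, k, v, mask=None):
    # Standard scaled dot-product attention with an optional boolean mask.
    scores = q @ k.T / np.sqrt(q.shape[-1])
    if mask is not None:
        scores = np.where(mask, scores, -1e9)
    return softmax(scores) @ v

def local_block(x, window):
    # Local processing: each frame attends only within a fixed-size window,
    # playing the role of intra-chunk processing without any chunking.
    T = x.shape[0]
    idx = np.arange(T)
    mask = np.abs(idx[:, None] - idx[None, :]) <= window
    return attention(x, x, x, mask)

def global_block(x, stride):
    # Global processing: every frame attends to a strided (downsampled)
    # summary of the whole sequence, replacing inter-chunk processing.
    summary = x[::stride]
    return attention(x, summary, summary)

rng = np.random.default_rng(1)
x = rng.standard_normal((64, 8))              # long feature sequence [T, D]
y = local_block(x, window=4) + global_block(x, stride=8)
print(y.shape)  # (64, 8)
```

The global block's cost scales with T·(T/stride) and the local block's with T·window, so the combination stays cheaper than full T² attention while still mixing information both locally and across the whole sequence.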

Results

| Task | Dataset | Metric | Value | Model |
| --- | --- | --- | --- | --- |
| Speech Separation | WHAMR! | SI-SDRi | 17.1 | SepReformer-L + DM |
| Speech Separation | WSJ0-2mix | MACs (G) | 155.5 | SepReformer-L |
| Speech Separation | WSJ0-2mix | Number of parameters (M) | 59.4 | SepReformer-L |
| Speech Separation | WSJ0-2mix | SDRi | 25.2 | SepReformer-L |
| Speech Separation | WSJ0-2mix | SI-SDRi | 25.1 | SepReformer-L |
| Speech Separation | WHAM! | SI-SDRi | 18.4 | SepReformer-L + DM |

Related Papers

- Dynamic Chunking for End-to-End Hierarchical Sequence Modeling (2025-07-10)
- CLI-RAG: A Retrieval-Augmented Framework for Clinically Structured and Context Aware Text Generation with LLMs (2025-07-09)
- Dynamic Slimmable Networks for Efficient Speech Separation (2025-07-08)
- Can LLMs Replace Humans During Code Chunking? (2025-06-24)
- CronusVLA: Transferring Latent Motion Across Time for Multi-Frame Prediction in Manipulation (2025-06-24)
- cAST: Enhancing Code Retrieval-Augmented Generation with Structural Chunking via Abstract Syntax Tree (2025-06-18)
- Improving Practical Aspects of End-to-End Multi-Talker Speech Recognition for Online and Offline Scenarios (2025-06-17)
- Chunk Twice, Embed Once: A Systematic Study of Segmentation and Representation Trade-offs in Chemistry-Aware Retrieval-Augmented Generation (2025-06-13)