Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


D3Net: Densely connected multidilated DenseNet for music source separation

Naoya Takahashi, Yuki Mitsufuji

2020-10-05 · Music Source Separation
Paper · PDF · Code (official)

Abstract

Music source separation involves a large input field to model the long-term dependencies of an audio signal. Previous convolutional neural network (CNN)-based approaches address large-input-field modeling by sequentially down- and up-sampling feature maps or by dilated convolution. In this paper, we claim the importance of a rapidly growing receptive field and the simultaneous modeling of multi-resolution data in a single convolution layer, and propose a novel CNN architecture called densely connected multidilated DenseNet (D3Net). D3Net involves a novel multidilated convolution that has different dilation factors in a single layer to model different resolutions simultaneously. By combining the multidilated convolution with the DenseNet architecture, D3Net avoids the aliasing problem that arises when dilated convolution is naively incorporated into DenseNet. Experimental results on the MUSDB18 dataset show that D3Net achieves state-of-the-art performance with an average signal-to-distortion ratio (SDR) of 6.01 dB.
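The core idea the abstract describes — one convolution layer in which different channels use different dilation factors, so the layer sees several resolutions at once — can be illustrated with a minimal pure-Python sketch. This is an assumption-laden toy (1-D signals, zero padding, a shared kernel, dilations fixed at powers of two), not the authors' implementation:

```python
def dilated_conv1d(x, kernel, dilation):
    # 'same'-padded 1-D dilated convolution: taps are spaced
    # `dilation` samples apart; positions outside x contribute zero.
    n, k = len(x), len(kernel)
    center = (k - 1) // 2
    out = []
    for i in range(n):
        acc = 0.0
        for j in range(k):
            idx = i + (j - center) * dilation
            if 0 <= idx < n:
                acc += kernel[j] * x[idx]
        out.append(acc)
    return out

def multidilated_conv1d(channels, kernel):
    # Toy multidilated convolution: channel c is convolved with
    # dilation 2**c (an assumed schedule), so a single "layer"
    # models several resolutions simultaneously; outputs are summed.
    result = [0.0] * len(channels[0])
    for c, ch in enumerate(channels):
        y = dilated_conv1d(ch, kernel, 2 ** c)
        result = [r + v for r, v in zip(result, y)]
    return result
```

For an impulse input, the dilation-2 branch spreads the kernel taps two samples apart (`dilated_conv1d([0,0,1,0,0], [1,1,1], 2)` yields `[1, 0, 1, 0, 1]`), while the dilation-1 branch keeps them adjacent; summing the branches is what gives one layer a mixed-resolution view.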

Results

Task                    | Dataset | Metric       | Value | Model
Music Source Separation | MUSDB18 | SDR (avg)    | 6.68  | D3Net
Music Source Separation | MUSDB18 | SDR (bass)   | 6.2   | D3Net
Music Source Separation | MUSDB18 | SDR (drums)  | 7.36  | D3Net
Music Source Separation | MUSDB18 | SDR (other)  | 5.37  | D3Net
Music Source Separation | MUSDB18 | SDR (vocals) | 7.8   | D3Net
Music Source Separation | MUSDB18 | SDR (avg)    | 6.01  | D3Net
Music Source Separation | MUSDB18 | SDR (bass)   | 5.25  | D3Net
Music Source Separation | MUSDB18 | SDR (drums)  | 7.01  | D3Net
Music Source Separation | MUSDB18 | SDR (other)  | 4.53  | D3Net
Music Source Separation | MUSDB18 | SDR (vocals) | 7.24  | D3Net
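The SDR values above follow the standard signal-to-distortion ratio definition, SDR = 10·log10(‖s‖² / ‖s − ŝ‖²) in dB for reference s and estimate ŝ. A minimal sketch of that basic formula (the official MUSDB18 leaderboard uses the BSS Eval toolchain, which computes a more involved decomposition; this is only the textbook definition):

```python
import math

def sdr_db(reference, estimate):
    # Basic SDR in dB: ratio of reference energy to residual energy.
    signal_power = sum(s * s for s in reference)
    error_power = sum((s - e) ** 2 for s, e in zip(reference, estimate))
    return 10.0 * math.log10(signal_power / error_power)
```

For example, an estimate off by 10% in amplitude on a unit impulse gives an error power of 0.01 and hence an SDR of 20 dB; the ~6 dB averages in the table correspond to residual energy roughly a quarter of the source energy.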

Related Papers

Music Source Restoration (2025-05-27)
Training-Free Multi-Step Audio Source Separation (2025-05-26)
Is MixIT Really Unsuitable for Correlated Sources? Exploring MixIT for Unsupervised Pre-training in Music Source Separation (2025-05-12)
Solving Copyright Infringement on Short Video Platforms: Novel Datasets and an Audio Restoration Deep Learning Pipeline (2025-04-30)
Score-informed Music Source Separation: Improving Synthetic-to-real Generalization in Classical Music (2025-03-10)
Separate This, and All of these Things Around It: Music Source Separation via Hyperellipsoidal Queries (2025-01-27)
Sanidha: A Studio Quality Multi-Modal Dataset for Carnatic Music (2025-01-12)
MAJL: A Model-Agnostic Joint Learning Framework for Music Source Separation and Pitch Estimation (2025-01-07)