Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Unsupervised Source Separation via Bayesian Inference in the Latent Domain

Michele Mancusi, Emilian Postolache, Giorgio Mariani, Marco Fumero, Andrea Santilli, Luca Cosmo, Emanuele Rodolà

2021-10-11 · Audio Source Separation · Bayesian Inference · Music Source Separation
Paper · PDF · Code (official)

Abstract

State-of-the-art audio source separation models rely on supervised, data-driven approaches, which can be expensive in terms of labeling resources. On the other hand, approaches for training these models without any direct supervision are typically highly demanding in terms of memory and time, and remain impractical to use at inference time. We aim to tackle these limitations by proposing a simple yet effective unsupervised separation algorithm, which operates directly on a latent representation of time-domain signals. Our algorithm relies on deep Bayesian priors in the form of pre-trained autoregressive networks to model the probability distribution of each source. We leverage the low cardinality of the discrete latent space, trained with a novel loss term that imposes a precise arithmetic structure on it, to perform exact Bayesian inference without relying on an approximation strategy. We validate our approach on the Slakh dataset (arXiv:1909.08494), demonstrating results in line with state-of-the-art supervised approaches while requiring fewer resources than other unsupervised methods.
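To illustrate the core idea of exact Bayesian inference over a low-cardinality discrete latent space, here is a toy sketch. It is not the paper's LQ-VAE or its transformer priors: the codebook size `K`, the categorical per-source priors, and the mod-K "arithmetic" mixing constraint are all simplifying assumptions made for illustration only. With a small discrete space, the posterior over source pairs can be enumerated exactly rather than approximated.

```python
import numpy as np

def separate_token(mix_token, prior_a, prior_b):
    """Exact posterior over source-token pairs (a, b) for one mixture token.

    Toy assumption: the mixture token equals (a + b) mod K, standing in for
    the arithmetic structure the paper's loss imposes on the latent space.
    """
    K = len(prior_a)
    posterior = np.zeros((K, K))
    for a in range(K):
        for b in range(K):
            if (a + b) % K == mix_token:  # enforce the arithmetic constraint
                posterior[a, b] = prior_a[a] * prior_b[b]
    total = posterior.sum()
    return posterior / total if total > 0 else posterior

# Hypothetical priors standing in for pre-trained autoregressive models.
K = 8
rng = np.random.default_rng(0)
prior_a = rng.dirichlet(np.ones(K))
prior_b = rng.dirichlet(np.ones(K))

post = separate_token(mix_token=3, prior_a=prior_a, prior_b=prior_b)
a_hat, b_hat = np.unravel_index(post.argmax(), post.shape)
print(a_hat, b_hat)  # MAP source pair; by construction (a_hat + b_hat) % K == 3
```

Because only `K` of the `K*K` pairs satisfy the constraint, the enumeration stays cheap when the codebook is small, which is why the low cardinality of the latent space matters: it is what makes the inference exact and tractable rather than requiring a sampling or variational approximation.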

Results

Task | Dataset | Metric | Value | Model
Music Source Separation | Slakh2100 | SDR (bass) | 7.42 | LQ-VAE + Scalable Transformer
Music Source Separation | Slakh2100 | SDR (drums) | 5.83 | LQ-VAE + Scalable Transformer

Related Papers

Towards Reliable Objective Evaluation Metrics for Generative Singing Voice Separation Models (2025-07-15)
A Simple Approximate Bayesian Inference Neural Surrogate for Stochastic Petri Net Models (2025-07-14)
The Bayesian Approach to Continual Learning: An Overview (2025-07-11)
Estimating Interventional Distributions with Uncertain Causal Graphs through Meta-Learning (2025-07-07)
Scalable Bayesian Low-Rank Adaptation of Large Language Models via Stochastic Variational Subspace Inference (2025-06-26)
Bayesian Evolutionary Swarm Architecture: A Formal Epistemic System Grounded in Truth-Based Competition (2025-06-23)
Generative Diffusion Receivers: Achieving Pilot-Efficient MIMO-OFDM Communications (2025-06-23)
Coherent Track-Before-Detect (2025-06-22)