Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Vector-quantized neural networks for acoustic unit discovery in the ZeroSpeech 2020 challenge

Benjamin van Niekerk, Leanne Nortje, Herman Kamper

Published: 2020-05-19 · Task: Voice Conversion

Abstract

In this paper, we explore vector quantization for acoustic unit discovery. Leveraging unlabelled data, we aim to learn discrete representations of speech that separate phonetic content from speaker-specific details. We propose two neural models to tackle this challenge - both use vector quantization to map continuous features to a finite set of codes. The first model is a type of vector-quantized variational autoencoder (VQ-VAE). The VQ-VAE encodes speech into a sequence of discrete units before reconstructing the audio waveform. Our second model combines vector quantization with contrastive predictive coding (VQ-CPC). The idea is to learn a representation of speech by predicting future acoustic units. We evaluate the models on English and Indonesian data for the ZeroSpeech 2020 challenge. In ABX phone discrimination tests, both models outperform all submissions to the 2019 and 2020 challenges, with a relative improvement of more than 30%. The models also perform competitively on a downstream voice conversion task. Of the two, VQ-CPC performs slightly better in general and is simpler and faster to train. Finally, probing experiments show that vector quantization is an effective bottleneck, forcing the models to discard speaker information.
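The abstract describes mapping continuous encoder features to a finite set of codes via vector quantization. As a rough illustration of that bottleneck (not the paper's implementation — codebook size, feature dimension, and function names here are hypothetical), each frame can be replaced by its nearest codebook vector:

```python
import numpy as np

def quantize(features, codebook):
    """Map each row of `features` to the nearest row of `codebook`.

    features: (T, D) array of continuous encoder outputs.
    codebook: (K, D) array of code vectors.
    Returns the quantized features and the integer code indices
    (the "discrete acoustic units").
    """
    # Squared Euclidean distance between every frame and every code.
    dists = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    indices = dists.argmin(axis=1)
    return codebook[indices], indices

# Illustrative shapes only: K=512 codes of dimension D=64, 100 frames.
rng = np.random.default_rng(0)
codebook = rng.normal(size=(512, 64))
frames = rng.normal(size=(100, 64))
quantized, codes = quantize(frames, codebook)
```

Because every output row is drawn from a fixed set of 512 vectors, the layer discards fine-grained (e.g. speaker-specific) detail — the "effective bottleneck" the probing experiments refer to.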

Results

Task              Dataset                  Metric              Value  Model
Voice Conversion  ZeroSpeech 2019 English  Speaker Similarity  3.8    VQ-CPC
Voice Conversion  ZeroSpeech 2019 English  Speaker Similarity  3.49   VQ-VAE

Related Papers

RT-VC: Real-Time Zero-Shot Voice Conversion with Speech Articulatory Coding (2025-06-12)
Training-Free Voice Conversion with Factorized Optimal Transport (2025-06-11)
CO-VADA: A Confidence-Oriented Voice Augmentation Debiasing Approach for Fair Speech Emotion Recognition (2025-06-06)
Towards Better Disentanglement in Non-Autoregressive Zero-Shot Expressive Voice Conversion (2025-06-04)
StarVC: A Unified Auto-Regressive Framework for Joint Text and Speech Generation in Voice Conversion (2025-06-03)
Unsupervised Rhythm and Voice Conversion to Improve ASR on Dysarthric Speech (2025-06-02)
SALF-MOS: Speaker Agnostic Latent Features Downsampled for MOS Prediction (2025-06-02)
LinearVC: Linear transformations of self-supervised features through the lens of voice conversion (2025-06-02)