Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Speaker Conditional WaveRNN: Towards Universal Neural Vocoder for Unseen Speaker and Recording Conditions

Dipjyoti Paul, Yannis Pantazis, Yannis Stylianou

2020-08-09 · Text to Speech · Speech Synthesis · text-to-speech
Paper · PDF · Code

Abstract

Recent advancements in deep learning have led to human-level performance in single-speaker speech synthesis. However, there are still limitations in speech quality when generalizing those systems to multi-speaker models, especially for unseen speakers and unseen recording qualities. For instance, conventional neural vocoders are adjusted to the training speaker and generalize poorly to unseen speakers. In this work, we propose a variant of WaveRNN, referred to as speaker conditional WaveRNN (SC-WaveRNN), targeting an efficient universal vocoder even for unseen speakers and recording conditions. In contrast to standard WaveRNN, SC-WaveRNN exploits additional information given in the form of speaker embeddings. Trained on publicly available data, SC-WaveRNN achieves significantly better performance than baseline WaveRNN on both subjective and objective metrics. In MOS, SC-WaveRNN achieves an improvement of about 23% for a seen speaker and seen recording condition, and up to 95% for an unseen speaker and unseen condition. Finally, we extend our work by implementing multi-speaker text-to-speech (TTS) synthesis similar to zero-shot speaker adaptation. In preference tests, our system was chosen over the baseline TTS system by 60% to 15.5% for seen speakers and by 60.9% to 32.6% for unseen speakers.
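The core idea of the abstract can be illustrated in code: SC-WaveRNN conditions an autoregressive recurrent vocoder on a speaker embedding in addition to the acoustic (mel) features. The sketch below is not the authors' implementation; it is a minimal numpy illustration in which a per-utterance speaker embedding is concatenated with each mel frame and the previous sample before a plain GRU step. All dimensions, the tiny hidden size, and the stand-in "sampling" step are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def gru_cell(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    """One plain GRU step (no biases) on the speaker-conditioned input x."""
    z = 1.0 / (1.0 + np.exp(-(x @ Wz + h @ Uz)))  # update gate
    r = 1.0 / (1.0 + np.exp(-(x @ Wr + h @ Ur)))  # reset gate
    h_cand = np.tanh(x @ Wh + (r * h) @ Uh)       # candidate state
    return (1.0 - z) * h + z * h_cand

# Illustrative sizes (assumptions, not the paper's hyperparameters).
mel_dim, spk_dim, hidden = 80, 256, 64
in_dim = 1 + mel_dim + spk_dim  # [previous sample, mel frame, speaker embedding]
Ws = [rng.normal(scale=0.1, size=(in_dim, hidden)) for _ in range(3)]
Us = [rng.normal(scale=0.1, size=(hidden, hidden)) for _ in range(3)]

spk_emb = rng.normal(size=spk_dim)  # fixed for the whole utterance
h = np.zeros(hidden)
prev_sample = 0.0
for t in range(10):  # a few autoregressive steps over random "mel frames"
    mel_frame = rng.normal(size=mel_dim)
    # The conditioning that distinguishes SC-WaveRNN from plain WaveRNN:
    x = np.concatenate([[prev_sample], mel_frame, spk_emb])
    h = gru_cell(x, h, Ws[0], Us[0], Ws[1], Us[1], Ws[2], Us[2])
    prev_sample = float(np.tanh(h).mean())  # stand-in for the sampling step

print(h.shape)  # (64,)
```

Because the speaker embedding enters every step of the recurrence, an embedding computed for a speaker never seen in training can still steer generation, which is what enables the zero-shot behaviour described above.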

Results

| Task                        | Dataset  | Metric      | Value  | Model      |
|-----------------------------|----------|-------------|--------|------------|
| Speech Recognition          | LibriTTS | M-STFT      | 2.2358 | SC-WaveRNN |
| Speech Recognition          | LibriTTS | MCD         | 1.8854 | SC-WaveRNN |
| Speech Recognition          | LibriTTS | PESQ        | 1.701  | SC-WaveRNN |
| Speech Recognition          | LibriTTS | Periodicity | 0.3044 | SC-WaveRNN |
| Speech Recognition          | LibriTTS | V/UV F1     | 0.8144 | SC-WaveRNN |
| Speech Synthesis            | LibriTTS | M-STFT      | 2.2358 | SC-WaveRNN |
| Speech Synthesis            | LibriTTS | MCD         | 1.8854 | SC-WaveRNN |
| Speech Synthesis            | LibriTTS | PESQ        | 1.701  | SC-WaveRNN |
| Speech Synthesis            | LibriTTS | Periodicity | 0.3044 | SC-WaveRNN |
| Speech Synthesis            | LibriTTS | V/UV F1     | 0.8144 | SC-WaveRNN |
| Accented Speech Recognition | LibriTTS | M-STFT      | 2.2358 | SC-WaveRNN |
| Accented Speech Recognition | LibriTTS | MCD         | 1.8854 | SC-WaveRNN |
| Accented Speech Recognition | LibriTTS | PESQ        | 1.701  | SC-WaveRNN |
| Accented Speech Recognition | LibriTTS | Periodicity | 0.3044 | SC-WaveRNN |
| Accented Speech Recognition | LibriTTS | V/UV F1     | 0.8144 | SC-WaveRNN |
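One of the objective metrics in the table, mel-cepstral distortion (MCD), has a standard closed form: the average over time-aligned frames of (10 / ln 10) · sqrt(2 · Σ_d (c_d − ĉ_d)²), with the 0th (energy) coefficient conventionally excluded. The sketch below computes this on synthetic mel-cepstral sequences; it assumes the frames are already time-aligned, and the array shapes are illustrative.

```python
import numpy as np

def mcd(mcep_ref, mcep_syn):
    """Mel-cepstral distortion in dB between aligned (T, D) MCC sequences.

    Coefficient 0 (frame energy) is excluded, per the usual convention.
    Assumes both sequences are already time-aligned frame by frame.
    """
    diff = mcep_ref[:, 1:] - mcep_syn[:, 1:]
    dist_per_frame = np.sqrt(2.0 * np.sum(diff ** 2, axis=1))
    return (10.0 / np.log(10.0)) * dist_per_frame.mean()

# Synthetic example: 100 frames of 25 mel-cepstral coefficients,
# with the "synthesized" version lightly perturbed.
rng = np.random.default_rng(0)
ref = rng.normal(size=(100, 25))
syn = ref + rng.normal(scale=0.05, size=ref.shape)
print(mcd(ref, syn))
```

Lower MCD indicates the vocoded spectrum is closer to the reference; identical inputs give exactly 0 dB.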

Related Papers

- Hear Your Code Fail, Voice-Assisted Debugging for Python (2025-07-20)
- NonverbalTTS: A Public English Corpus of Text-Aligned Nonverbal Vocalizations with Emotion Annotations for Text-to-Speech (2025-07-17)
- P.808 Multilingual Speech Enhancement Testing: Approach and Results of URGENT 2025 Challenge (2025-07-15)
- An Empirical Evaluation of AI-Powered Non-Player Characters' Perceived Realism and Performance in Virtual Reality Environments (2025-07-14)
- ZipVoice-Dialog: Non-Autoregressive Spoken Dialogue Generation with Flow Matching (2025-07-12)
- Exploiting Leaderboards for Large-Scale Distribution of Malicious Models (2025-07-11)
- MIDI-VALLE: Improving Expressive Piano Performance Synthesis Through Neural Codec Language Modelling (2025-07-11)
- Speech Quality Assessment Model Based on Mixture of Experts: System-Level Performance Enhancement and Utterance-Level Challenge Analysis (2025-07-08)