


Self-training and Pre-training are Complementary for Speech Recognition

Qiantong Xu, Alexei Baevski, Tatiana Likhomanenko, Paden Tomasello, Alexis Conneau, Ronan Collobert, Gabriel Synnaeve, Michael Auli

2020-10-22 · Speech Recognition · Unsupervised Pre-training

Abstract

Self-training and unsupervised pre-training have emerged as effective approaches to improve speech recognition systems using unlabeled data. However, it is not clear whether they learn similar patterns or if they can be effectively combined. In this paper, we show that pseudo-labeling and pre-training with wav2vec 2.0 are complementary in a variety of labeled data setups. Using just 10 minutes of labeled data from Libri-light as well as 53k hours of unlabeled data from LibriVox achieves WERs of 3.0%/5.2% on the clean and other test sets of Librispeech - rivaling the best published systems trained on 960 hours of labeled data only a year ago. Training on all labeled data of Librispeech achieves WERs of 1.5%/3.1%.

Results

Task               | Dataset                                 | Metric                | Value | Model
Speech Recognition | LibriSpeech test-clean                  | Word Error Rate (WER) | 1.5   | Conv + Transformer + wav2vec2.0 + pseudo labeling
Speech Recognition | LibriSpeech test-other                  | Word Error Rate (WER) | 3.1   | Conv + Transformer + wav2vec2.0 + pseudo labeling
Speech Recognition | LibriSpeech test-clean                  | Word Error Rate (WER) | 2.7   | wav2vec_wav2letter
Speech Recognition | LibriSpeech train-clean-100 test-clean  | Word Error Rate (WER) | 2.8   | wav2vec_wav2letter
Speech Recognition | LibriSpeech train-clean-100 test-other  | Word Error Rate (WER) | 3.6   | wav2vec_wav2letter

Related Papers

Task-Specific Audio Coding for Machines: Machine-Learned Latent Features Are Codes for That Machine (2025-07-17)
NonverbalTTS: A Public English Corpus of Text-Aligned Nonverbal Vocalizations with Emotion Annotations for Text-to-Speech (2025-07-17)
WhisperKit: On-device Real-time ASR with Billion-Scale Transformers (2025-07-14)
VisualSpeaker: Visually-Guided 3D Avatar Lip Synthesis (2025-07-08)
A Hybrid Machine Learning Framework for Optimizing Crop Selection via Agronomic and Economic Forecasting (2025-07-06)
First Steps Towards Voice Anonymization for Code-Switching Speech (2025-07-02)
MambAttention: Mamba with Multi-Head Attention for Generalizable Single-Channel Speech Enhancement (2025-07-01)
AUTOMATIC PRONUNCIATION MISTAKE DETECTOR PROJECT REPORT (2025-06-25)