Neil Zeghidour, Qiantong Xu, Vitaliy Liptchinsky, Nicolas Usunier, Gabriel Synnaeve, Ronan Collobert
Current state-of-the-art speech recognition systems build on recurrent neural networks for acoustic and/or language modeling, and rely on feature extraction pipelines that compute mel-filterbanks or cepstral coefficients. In this paper, we present an alternative approach based solely on convolutional neural networks, leveraging recent advances in acoustic modeling from the raw waveform and in language modeling. This fully convolutional approach is trained end-to-end to predict characters from the raw waveform, removing the feature extraction step altogether. An external convolutional language model is used to decode words. On Wall Street Journal, our model matches the current state-of-the-art. On Librispeech, we report state-of-the-art performance among end-to-end models, including Deep Speech 2 trained with 12 times more acoustic data and significantly more linguistic data.
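The pipeline the abstract describes, a learnable convolutional front-end replacing mel-filterbanks, a convolutional acoustic model, and per-frame character scores, can be sketched with plain numpy. This is a minimal illustration, not the paper's implementation: every filter size, stride, and channel count below is a hypothetical choice, the weights are random rather than trained, and the greedy per-frame argmax stands in for the paper's actual decoding with a convolutional language model.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, filters, stride=1):
    """Valid 1-D convolution. x: (time, in_ch), filters: (out_ch, width, in_ch)."""
    out_ch, width, in_ch = filters.shape
    n_frames = (x.shape[0] - width) // stride + 1
    out = np.empty((n_frames, out_ch))
    for t in range(n_frames):
        window = x[t * stride : t * stride + width]  # (width, in_ch)
        out[t] = np.tensordot(filters, window, axes=([1, 2], [0, 1]))
    return out

def glu(h):
    """Gated linear unit: half the channels gate the other half."""
    a, b = np.split(h, 2, axis=-1)
    return a * (1.0 / (1.0 + np.exp(-b)))

# Hypothetical sizes, chosen only to keep the sketch small.
n_chars = 28  # e.g. 26 letters + space + a separator token
waveform = rng.standard_normal(16000)[:, None]  # 1 s of 16 kHz audio, (time, 1)

# Learnable front-end standing in for mel-filterbanks:
# wide strided convolution, energy, then log compression.
frontend = rng.standard_normal((40, 400, 1)) * 0.01  # 40 filters, 25 ms window
feats = np.log(conv1d(waveform, frontend, stride=160) ** 2 + 1e-6)  # ~10 ms hop

# Convolutional acoustic model: a conv layer with a GLU activation
# (the paper stacks many such layers).
w1 = rng.standard_normal((2 * 64, 5, 40)) * 0.01
h = glu(conv1d(feats, w1))

# 1x1 convolution projecting each frame to character scores.
w_out = rng.standard_normal((n_chars, 1, 64)) * 0.01
logits = conv1d(h, w_out)

# Greedy per-frame prediction; the paper instead decodes words
# with an external convolutional language model.
pred = logits.argmax(axis=-1)
print(feats.shape, logits.shape, pred.shape)
```

Because every stage is a convolution, the whole model (front-end included) can be trained end-to-end from the waveform, which is the point the abstract makes about removing hand-crafted feature extraction.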
| Task | Dataset | Metric | Value (%) | Model |
|---|---|---|---|---|
| Speech Recognition | WSJ dev93 | Word Error Rate (WER) | 6.8 | Convolutional Speech Recognition |
| Speech Recognition | WSJ eval92 | Word Error Rate (WER) | 3.5 | Convolutional Speech Recognition |
| Speech Recognition | WSJ eval93 | Word Error Rate (WER) | 6.8 | Convolutional Speech Recognition |
| Speech Recognition | LibriSpeech test-clean | Word Error Rate (WER) | 3.26 | Convolutional Speech Recognition |
| Speech Recognition | LibriSpeech test-other | Word Error Rate (WER) | 10.47 | Convolutional Speech Recognition |