Metric: Percentage error (lower is better)
| # | Model | Percentage error | Extra Data | Paper | Date | Code |
|---|---|---|---|---|---|---|
| 1 | IBM (LSTM+Conformer encoder-decoder) | 6.8 | No | On the limit of English conversational speech re... | 2021-05-03 | - |
| 2 | IBM (LSTM encoder-decoder) | 7.8 | No | Single headed attention based sequence-to-sequen... | 2020-01-20 | - |
| 3 | ResNet + BiLSTMs acoustic model | 10.3 | No | English Conversational Telephone Speech Recognit... | 2017-03-06 | - |
| 4 | VGG/Resnet/LACE/BiLSTM acoustic model trained on SWB+Fisher+CH, N-gram + RNNLM language model trained on Switchboard+Fisher+Gigaword+Broadcast | 11.9 | No | The Microsoft 2016 Conversational Speech Recogni... | 2016-09-12 | - |
| 5 | RNN + VGG + LSTM acoustic model trained on SWB+Fisher+CH, N-gram + "model M" + NNLM language model | 12.2 | No | The IBM 2016 English Conversational Telephone Sp... | 2016-04-27 | - |
| 6 | HMM-BLSTM trained with MMI + data augmentation (speed) + iVectors + 3 regularizations + Fisher | 13.0 | No | - | - | - |
| 7 | HMM-TDNN trained with MMI + data augmentation (speed) + iVectors + 3 regularizations + Fisher (10% / 15.1% respectively trained on SWBD only) | 13.3 | No | - | - | - |
| 8 | CNN + Bi-RNN + CTC (speech to letters), 25.9% WER if trained only on SWB | 16.0 | No | Deep Speech: Scaling up end-to-end speech recogn... | 2014-12-17 | Code |
| 9 | HMM-TDNN + iVectors | 17.1 | No | - | - | - |
| 10 | HMM-DNN + sMBR | 18.4 | No | - | - | - |
| 11 | DNN + Dropout | 19.1 | No | Building DNN Acoustic Models for Large Vocabular... | 2014-06-30 | Code |
| 12 | HMM-TDNN + pNorm + speed up/down speech | 19.3 | No | - | - | - |
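The percentage error reported above is word error rate (WER): the word-level edit distance (substitutions + insertions + deletions) between the system hypothesis and the reference transcript, divided by the number of reference words. A minimal sketch of the computation (the function name `wer` and the whitespace tokenization are illustrative; scoring tools like NIST sclite apply additional text normalization first):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate in percent: edit distance over words / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # One-row dynamic program for Levenshtein distance over word sequences.
    # d[j] = edit distance between the reference prefix processed so far
    # and the first j hypothesis words.
    d = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, d[0] = d[0], i          # prev holds the diagonal cell d[i-1][j-1]
        for j, h in enumerate(hyp, 1):
            cur = min(d[j] + 1,       # deletion of a reference word
                      d[j - 1] + 1,   # insertion of a hypothesis word
                      prev + (r != h))  # substitution (or match, cost 0)
            prev, d[j] = d[j], cur
    return 100.0 * d[-1] / len(ref)
```

For example, `wer("a b c d", "a x c")` is 50.0: one substitution (`b` → `x`) and one deletion (`d`) against four reference words.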