Kwangyoun Kim, Felix Wu, Yifan Peng, Jing Pan, Prashant Sridhar, Kyu J. Han, Shinji Watanabe
Conformer, which combines convolution and self-attention sequentially to capture both local and global information, has shown remarkable performance and is currently regarded as the state of the art for automatic speech recognition (ASR). Several other studies have explored integrating convolution and self-attention, but they have not matched Conformer's performance. The recently introduced Branchformer achieves performance comparable to Conformer by using dedicated parallel branches of convolution and self-attention and merging the local and global context from each branch. In this paper, we propose E-Branchformer, which enhances Branchformer by applying an effective merging method and stacking additional point-wise modules. E-Branchformer sets new state-of-the-art word error rates (WERs) of 1.81% and 3.65% on the LibriSpeech test-clean and test-other sets without using any external training data.
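To make the merging step concrete, the sketch below shows one plausible reading of the enhanced merge: the outputs of the self-attention (global) and convolution (local) branches are concatenated, mixed along the time axis with a depth-wise convolution, and projected back to the model dimension. This is a minimal PyTorch sketch, not the paper's implementation; the module name `EBranchformerMergeSketch`, the residual around the depth-wise convolution, and all hyperparameters are assumptions for illustration.

```python
import torch
import torch.nn as nn


class EBranchformerMergeSketch(nn.Module):
    """Hypothetical sketch of an E-Branchformer-style merge module.

    The attention (global) and convolution (local) branch outputs are
    concatenated, refined with a depth-wise convolution over time, and
    projected back to the model dimension.
    """

    def __init__(self, d_model: int, kernel_size: int = 3):
        super().__init__()
        # Depth-wise conv over the concatenated channels (2 * d_model),
        # mixing adjacent frames within each channel independently.
        self.depthwise_conv = nn.Conv1d(
            in_channels=2 * d_model,
            out_channels=2 * d_model,
            kernel_size=kernel_size,
            padding=kernel_size // 2,
            groups=2 * d_model,
        )
        # Point-wise projection back to d_model.
        self.proj = nn.Linear(2 * d_model, d_model)

    def forward(self, x_global: torch.Tensor, x_local: torch.Tensor) -> torch.Tensor:
        # x_global, x_local: (batch, time, d_model)
        x = torch.cat([x_global, x_local], dim=-1)  # (B, T, 2*D)
        # Residual depth-wise convolution over the time axis (assumption).
        x = x + self.depthwise_conv(x.transpose(1, 2)).transpose(1, 2)
        return self.proj(x)  # (B, T, D)


# Usage: merge two branch outputs of shape (batch=4, time=50, d_model=256).
merge = EBranchformerMergeSketch(d_model=256)
out = merge(torch.randn(4, 50, 256), torch.randn(4, 50, 256))
print(out.shape)  # torch.Size([4, 50, 256])
```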
| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Speech Recognition | LibriSpeech test-clean | Word Error Rate (WER) | 1.81% | E-Branchformer (L) + Internal Language Model Estimation |
| Speech Recognition | LibriSpeech test-other | Word Error Rate (WER) | 3.65% | E-Branchformer (L) + Internal Language Model Estimation |