


Sub-word Level Lip Reading With Visual Attention

K R Prajwal, Triantafyllos Afouras, Andrew Zisserman

Published 14 October 2021 · CVPR 2022

Tasks: Speech Recognition, Automatic Speech Recognition (ASR), Audio-Visual Active Speaker Detection, Visual Speech Recognition, Lipreading

Abstract

The goal of this paper is to learn strong lip reading models that can recognise speech in silent videos. Most prior works deal with the open-set visual speech recognition problem by adapting existing automatic speech recognition techniques on top of trivially pooled visual features. Instead, in this paper we focus on the unique challenges encountered in lip reading and propose tailored solutions. To this end, we make the following contributions: (1) we propose an attention-based pooling mechanism to aggregate visual speech representations; (2) we use sub-word units for lip reading for the first time and show that this allows us to better model the ambiguities of the task; (3) we propose a model for Visual Speech Detection (VSD), trained on top of the lip reading network. Following the above, we obtain state-of-the-art results on the challenging LRS2 and LRS3 benchmarks when training on public datasets, and even surpass models trained on large-scale industrial datasets by using an order of magnitude less data. Our best model achieves 22.6% word error rate on the LRS2 dataset, a performance unprecedented for lip reading models, significantly reducing the performance gap between lip reading and automatic speech recognition. Moreover, on the AVA-ActiveSpeaker benchmark, our VSD model surpasses all visual-only baselines and even outperforms several recent audio-visual methods.
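Contribution (1), the attention-based pooling of visual speech representations, can be pictured as a small cross-attention in which a learned query scores each position of the visual feature grid and the output is the score-weighted sum, replacing trivial average or max pooling. The sketch below is a minimal illustration under that reading, not the authors' released code; the module name, dimensions, and single-query design are assumptions made for the example.

```python
import torch
import torch.nn as nn

class AttentionPooling(nn.Module):
    """Aggregate per-frame visual features with a learned attention query.

    A hypothetical, minimal reading of 'attention-based pooling': a learned
    query scores every spatial position, and the pooled feature is the
    softmax-weighted sum of the per-position features.
    """

    def __init__(self, dim: int):
        super().__init__()
        self.query = nn.Parameter(torch.randn(dim))  # learned pooling query
        self.scale = dim ** -0.5                     # dot-product scaling

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, positions, dim), e.g. a flattened H*W spatial grid
        scores = (feats @ self.query) * self.scale      # (batch, positions)
        weights = scores.softmax(dim=-1)                # attention weights
        return (weights.unsqueeze(-1) * feats).sum(1)   # (batch, dim)

# Example: pool a 7x7 grid of 512-d features, frames treated as the batch
pool = AttentionPooling(dim=512)
frames = torch.randn(8, 49, 512)   # 8 frames, 49 positions, 512-d features
pooled = pool(frames)              # (8, 512): one vector per frame
print(pooled.shape)
```

The appeal of such a pooling over plain averaging is that the attention weights can learn to emphasise the mouth region, which carries most of the lip-reading signal, while down-weighting uninformative positions.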

Results

Task | Dataset | Metric | Value | Model
Speech Recognition | LRS3-TED | Word Error Rate (WER) | 30.7 | VTP (more data)
Speech Recognition | LRS3-TED | Word Error Rate (WER) | 40.6 | VTP
Speech Recognition | LRS2 | Word Error Rate (WER) | 22.6 | VTP (more data)
Speech Recognition | LRS2 | Word Error Rate (WER) | 28.9 | VTP
Lipreading | LRS2 | Word Error Rate (WER) | 22.6 | VTP (more data)
Lipreading | LRS2 | Word Error Rate (WER) | 28.9 | VTP
Lipreading | LRS3-TED | Word Error Rate (WER) | 30.7 | VTP (more data)
Lipreading | LRS3-TED | Word Error Rate (WER) | 40.6 | VTP
Natural Language Transduction | LRS2 | Word Error Rate (WER) | 22.6 | VTP (more data)
Natural Language Transduction | LRS2 | Word Error Rate (WER) | 28.9 | VTP
Natural Language Transduction | LRS3-TED | Word Error Rate (WER) | 30.7 | VTP (more data)
Natural Language Transduction | LRS3-TED | Word Error Rate (WER) | 40.6 | VTP
Visual Speech Recognition | LRS3-TED | Word Error Rate (WER) | 30.7 | VTP (more data)
Visual Speech Recognition | LRS3-TED | Word Error Rate (WER) | 40.6 | VTP
Visual Speech Recognition | LRS2 | Word Error Rate (WER) | 22.6 | VTP (more data)
Visual Speech Recognition | LRS2 | Word Error Rate (WER) | 28.9 | VTP
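All results above are reported as word error rate (WER): the word-level Levenshtein distance between hypothesis and reference (substitutions + deletions + insertions), divided by the number of reference words, so lower is better. The snippet below is a self-contained sketch of that standard computation, not tied to this paper's evaluation code:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # prev[j] holds the edit distance between ref[:i-1] and hyp[:j].
    prev = list(range(len(hyp) + 1))
    for i, r_word in enumerate(ref, start=1):
        cur = [i] + [0] * len(hyp)
        for j, h_word in enumerate(hyp, start=1):
            cost = 0 if r_word == h_word else 1
            cur[j] = min(prev[j] + 1,         # delete a reference word
                         cur[j - 1] + 1,      # insert a hypothesis word
                         prev[j - 1] + cost)  # substitute (or match)
        prev = cur
    return prev[-1] / len(ref)

# One substitution in a four-word reference -> 25.0% WER
print(round(100 * word_error_rate("the cat sat down", "the cat sat up"), 1))
```

On this scale, the paper's best LRS2 result of 22.6 means that, on average, fewer than one word in four is transcribed incorrectly from silent video alone.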

Related Papers

Task-Specific Audio Coding for Machines: Machine-Learned Latent Features Are Codes for That Machine (2025-07-17)
NonverbalTTS: A Public English Corpus of Text-Aligned Nonverbal Vocalizations with Emotion Annotations for Text-to-Speech (2025-07-17)
WhisperKit: On-device Real-time ASR with Billion-Scale Transformers (2025-07-14)
VisualSpeaker: Visually-Guided 3D Avatar Lip Synthesis (2025-07-08)
A Hybrid Machine Learning Framework for Optimizing Crop Selection via Agronomic and Economic Forecasting (2025-07-06)
First Steps Towards Voice Anonymization for Code-Switching Speech (2025-07-02)
MambAttention: Mamba with Multi-Head Attention for Generalizable Single-Channel Speech Enhancement (2025-07-01)
AUTOMATIC PRONUNCIATION MISTAKE DETECTOR PROJECT REPORT (2025-06-25)