Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Mutual Information Maximization for Effective Lip Reading

Xing Zhao, Shuang Yang, Shiguang Shan, Xilin Chen

2020-03-13 · Lipreading · Lip Reading
Paper · PDF · Code (official)

Abstract

Lip reading has received increasing research interest in recent years, driven by the rapid development of deep learning and its wide range of potential applications. Good performance on the lip reading task depends heavily on how effectively the representation captures lip movement information while resisting noise caused by changes in pose, lighting conditions, the speaker's appearance, and so on. Towards this target, we propose to introduce mutual information constraints at both the local feature level and the global sequence level to strengthen the relation between the features and the speech content. On the one hand, we impose a local mutual information maximization constraint (LMIM) on the features generated at each time step so that they carry a strong relation with the speech content, improving the model's ability to discover fine-grained lip movements and the fine-grained differences among words with similar pronunciation, such as "spend" and "spending". On the other hand, we introduce a mutual information maximization constraint at the global sequence level (GMIM), enabling the model to pay more attention to discriminating the key frames related to the speech content and less to the various noises that appear during speaking. By combining these two advantages, the proposed method is expected to be both discriminative and robust for effective lip reading. To verify the method, we evaluate it on two large-scale benchmarks and perform a detailed analysis and comparison covering several aspects, including comparisons of LMIM and GMIM with the baseline and visualizations of the learned representation. The results not only demonstrate the effectiveness of the proposed method but also set new state-of-the-art performance on both benchmarks.
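To make the kind of constraint the abstract describes concrete, below is a minimal PyTorch sketch of a mutual-information maximization term between per-time-step visual features and a speech-content embedding. This is not the authors' implementation: the class name `MILowerBound`, the dimensions, and the choice of the Donsker-Varadhan lower bound (as popularized by MINE, Belghazi et al., 2018) are illustrative assumptions.

```python
import math
import torch
import torch.nn as nn

class MILowerBound(nn.Module):
    """Statistics network T(x, y) scoring feature/content pairs (hypothetical name)."""
    def __init__(self, feat_dim: int, content_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim + content_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, feats: torch.Tensor, content: torch.Tensor) -> torch.Tensor:
        # feats:   (B, T, feat_dim)   per-time-step visual features
        # content: (B, content_dim)   embedding of the speech content (e.g. the word label)
        B, T, _ = feats.shape
        content = content.unsqueeze(1).expand(-1, T, -1)        # (B, T, content_dim)
        joint = self.net(torch.cat([feats, content], dim=-1))   # scores for matched pairs
        # Shuffling content across the batch approximates samples from the
        # product of marginals p(x)p(y).
        perm = torch.randperm(B, device=feats.device)
        marginal = self.net(torch.cat([feats, content[perm]], dim=-1))
        # Donsker-Varadhan bound: I(X;Y) >= E_joint[T] - log E_marginal[exp(T)]
        return joint.mean() - (torch.logsumexp(marginal.flatten(), dim=0)
                               - math.log(marginal.numel()))
```

In training, the negative of this bound (scaled by a weight) would be added to the word classification loss so that maximizing mutual information and minimizing classification error proceed jointly; a GMIM-style term could be applied analogously to the pooled sequence-level representation rather than per-frame features.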

Results

| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Lipreading | Lip Reading in the Wild | Top-1 Accuracy | 84.41 | 3D Conv + ResNet-18 + Bi-GRU |
| Natural Language Transduction | Lip Reading in the Wild | Top-1 Accuracy | 84.41 | 3D Conv + ResNet-18 + Bi-GRU |

Related Papers

VisualSpeaker: Visually-Guided 3D Avatar Lip Synthesis (2025-07-08)
Learning Speaker-Invariant Visual Features for Lipreading (2025-06-09)
UniCUE: Unified Recognition and Generation Framework for Chinese Cued Speech Video-to-Speech Generation (2025-06-04)
OXSeg: Multidimensional attention UNet-based lip segmentation using semi-supervised lip contours (2025-05-08)
SwinLip: An Efficient Visual Speech Encoder for Lip Reading Using Swin Transformer (2025-05-07)
Transforming faces into video stories -- VideoFace2.0 (2025-05-04)
Development and evaluation of a deep learning algorithm for German word recognition from lip movements (2025-04-22)
Chinese-LiPS: A Chinese audio-visual speech recognition dataset with Lip-reading and Presentation Slides (2025-04-21)