
Distinguishing Homophenes Using Multi-Head Visual-Audio Memory for Lip Reading

Minsu Kim, Jeong Hun Yeo, Yong Man Ro

2022-04-04 · The AAAI Conference on Artificial Intelligence (AAAI) 2022 · Tasks: Lipreading, Lip Reading

Abstract

Recognizing speech from silent lip movement, known as lip reading, is a challenging task due to 1) the inherent insufficiency of lip movement for fully representing speech, and 2) the existence of homophenes, which have similar lip movements but different pronunciations. In this paper, we alleviate these two challenges by proposing a Multi-head Visual-audio Memory (MVM). First, MVM is trained on audio-visual datasets and remembers audio representations by modelling the inter-relationships of paired audio-visual representations. At inference, the visual input alone can retrieve the saved audio representations from the memory by exploiting the learned inter-relationships, so the lip reading model can complement its insufficient visual information with the extracted audio representations. Second, MVM is composed of multi-head key memories for saving visual features and one value memory for saving audio knowledge, a design intended to distinguish homophenes. With the multi-head key memories, MVM extracts several candidate audio features from the memory, allowing the lip reading model to consider which pronunciations the input lip movement could represent. This can also be viewed as an explicit implementation of the one-to-many viseme-to-phoneme mapping. Moreover, MVM is employed at multiple temporal levels to take context into account when retrieving from the memory and distinguishing homophenes. Extensive experimental results verify the effectiveness of the proposed method in lip reading and in distinguishing homophenes.
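The multi-head key/value retrieval described in the abstract can be sketched in code. Below is a minimal PyTorch sketch, not the authors' implementation: the class name, the hyperparameters (num_heads, num_slots, dim), and the scaled dot-product addressing are illustrative assumptions. It shows the core idea of several key memories addressed by a visual feature, each reading a candidate audio representation from a single shared value memory.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiHeadVisualAudioMemory(nn.Module):
    """Sketch of the MVM idea: multi-head key memories for visual
    features, one value memory for audio knowledge. Hyperparameters
    and addressing scheme are assumptions, not the paper's exact design."""

    def __init__(self, num_heads=4, num_slots=112, dim=512):
        super().__init__()
        # Multi-head key memories store visual features (one memory per head).
        self.key_mem = nn.Parameter(torch.randn(num_heads, num_slots, dim) * 0.02)
        # A single value memory stores the paired audio knowledge.
        self.value_mem = nn.Parameter(torch.randn(num_slots, dim) * 0.02)

    def forward(self, visual_feat):
        # visual_feat: (batch, dim) frame-level visual feature.
        # Address each key memory with the visual feature (scaled dot product).
        scores = torch.einsum('bd,hsd->bhs', visual_feat, self.key_mem)
        attn = F.softmax(scores / visual_feat.size(-1) ** 0.5, dim=-1)
        # Each head reads a candidate audio representation from the shared
        # value memory -- an explicit one-to-many viseme-to-phoneme mapping.
        candidates = torch.einsum('bhs,sd->bhd', attn, self.value_mem)
        return candidates  # (batch, num_heads, dim) candidate audio features
```

In the full model, the retrieved candidate audio features would be fused with the visual features at multiple temporal levels, and training on paired audio-visual data would align the retrieved values with ground-truth audio representations so that visual input alone suffices at inference.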

Results

| Task | Dataset | Metric | Value | Model |
| --- | --- | --- | --- | --- |
| Lipreading | CAS-VSR-W1k (LRW-1000) | Top-1 Accuracy | 53.8 | 3D Conv + ResNet-18 + MS-TCN + Multi-Head Visual-Audio Memory |
| Lipreading | LRS2 | Word Error Rate (WER) | 44.5 | Multi-head Visual-Audio Memory |
| Lipreading | Lip Reading in the Wild | Top-1 Accuracy | 88.5 | 3D Conv + ResNet-18 + MS-TCN + Multi-Head Visual-Audio Memory |
| Natural Language Transduction | CAS-VSR-W1k (LRW-1000) | Top-1 Accuracy | 53.8 | 3D Conv + ResNet-18 + MS-TCN + Multi-Head Visual-Audio Memory |
| Natural Language Transduction | LRS2 | Word Error Rate (WER) | 44.5 | Multi-head Visual-Audio Memory |
| Natural Language Transduction | Lip Reading in the Wild | Top-1 Accuracy | 88.5 | 3D Conv + ResNet-18 + MS-TCN + Multi-Head Visual-Audio Memory |

Related Papers

VisualSpeaker: Visually-Guided 3D Avatar Lip Synthesis (2025-07-08)
Learning Speaker-Invariant Visual Features for Lipreading (2025-06-09)
UniCUE: Unified Recognition and Generation Framework for Chinese Cued Speech Video-to-Speech Generation (2025-06-04)
OXSeg: Multidimensional attention UNet-based lip segmentation using semi-supervised lip contours (2025-05-08)
SwinLip: An Efficient Visual Speech Encoder for Lip Reading Using Swin Transformer (2025-05-07)
Transforming faces into video stories -- VideoFace2.0 (2025-05-04)
Development and evaluation of a deep learning algorithm for German word recognition from lip movements (2025-04-22)
Chinese-LiPS: A Chinese audio-visual speech recognition dataset with Lip-reading and Presentation Slides (2025-04-21)