Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.

Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Multi-modality Associative Bridging through Memory: Speech Sound Recollected from Face Video

Minsu Kim, Joanna Hong, Se Jin Park, Yong Man Ro

2022-04-04 · ICCV 2021 · Lip Reading
Paper · PDF · Code

Abstract

In this paper, we introduce a novel audio-visual multi-modal bridging framework that can utilize both audio and visual information, even with uni-modal inputs. We exploit a memory network that stores source (i.e., visual) and target (i.e., audio) modal representations, where the source modal representation is what we are given, and the target modal representations are what we want to obtain from the memory network. We then construct an associative bridge between the source and target memories that considers the interrelationship between the two. By learning this interrelationship through the associative bridge, the proposed framework is able to obtain the target modal representations inside the memory network even with the source modal input only, providing rich information for downstream tasks. We apply the proposed framework to two tasks: lip reading and speech reconstruction from silent video. Through the proposed associative bridge and modality-specific memories, each task's knowledge is enriched with the recalled audio context, achieving state-of-the-art performance. We also verify that the associative bridge properly relates the source and target memories.
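The recall mechanism described above can be sketched as a key-value memory read: a visual input softly addresses the visual (source) memory, and the same addressing weights are reused on the audio (target) memory to recall an audio representation without any audio input. The snippet below is a minimal illustration of that idea, not the authors' exact model; all names, dimensions, and the random memories are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax for soft memory addressing
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

d = 4        # hypothetical feature dimension
n_slots = 8  # hypothetical number of memory slots

rng = np.random.default_rng(0)
# Modality-specific memories (random stand-ins; learned in the real model)
visual_memory = rng.standard_normal((n_slots, d))  # source-modal slots
audio_memory = rng.standard_normal((n_slots, d))   # target-modal slots

def recall_audio(visual_feature):
    """Address the visual memory with a visual feature, then read the
    audio memory with the same (bridged) addressing weights."""
    scores = visual_memory @ visual_feature  # similarity to each visual slot
    weights = softmax(scores)                # soft addressing over slots
    return weights @ audio_memory            # recalled audio representation

v = rng.standard_normal(d)       # a visual feature for one time step
a_hat = recall_audio(v)          # audio context recalled from video only
print(a_hat.shape)               # -> (4,)
```

In the paper, the bridge is trained so that these recalled target representations match true audio features, which is what lets the memory supply audio context at inference time when only video is available.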

Results

Task | Dataset | Metric | Value | Model
Lipreading | Lip Reading in the Wild | Top-1 Accuracy | 85.4 | 3D Conv + ResNet-18 + Bi-GRU + Visual-Audio Memory
Natural Language Transduction | Lip Reading in the Wild | Top-1 Accuracy | 85.4 | 3D Conv + ResNet-18 + Bi-GRU + Visual-Audio Memory

Related Papers

VisualSpeaker: Visually-Guided 3D Avatar Lip Synthesis (2025-07-08)
SwinLip: An Efficient Visual Speech Encoder for Lip Reading Using Swin Transformer (2025-05-07)
Transforming faces into video stories -- VideoFace2.0 (2025-05-04)
Development and evaluation of a deep learning algorithm for German word recognition from lip movements (2025-04-22)
Chinese-LiPS: A Chinese audio-visual speech recognition dataset with Lip-reading and Presentation Slides (2025-04-21)
VALLR: Visual ASR Language Model for Lip Reading (2025-03-27)
Lend a Hand: Semi Training-Free Cued Speech Recognition via MLLM-Driven Hand Modeling for Barrier-free Communication (2025-03-11)
Integrating Persian Lip Reading in Surena-V Humanoid Robot for Human-Robot Interaction (2025-01-23)