Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Leveraging Unimodal Self-Supervised Learning for Multimodal Audio-Visual Speech Recognition

Xichen Pan, Peiyu Chen, Yichen Gong, Helong Zhou, Xinbing Wang, Zhouhan Lin

2022-02-24 · ACL 2022 · Tasks: Speech Recognition, Automatic Speech Recognition (ASR), Self-Supervised Learning, Audio-Visual Speech Recognition, Visual Speech Recognition, Lipreading, Language Modelling
Paper · PDF · Code (official)

Abstract

Training Transformer-based models demands a large amount of data, while obtaining aligned and labelled multimodal data is rather costly, especially for audio-visual speech recognition (AVSR). It therefore makes sense to exploit unlabelled unimodal data. On the other hand, although the effectiveness of large-scale self-supervised learning is well established in both the audio and visual modalities, how to integrate those pre-trained models into a multimodal scenario remains underexplored. In this work, we successfully leverage unimodal self-supervised learning to promote multimodal AVSR. In particular, audio and visual front-ends are trained on large-scale unimodal datasets; we then integrate components of both front-ends into a larger multimodal framework that learns to transcribe parallel audio-visual data into characters through a combination of CTC and seq2seq decoding. We show that the two components inherited from unimodal self-supervised learning cooperate well, and that the resulting multimodal framework yields competitive results after fine-tuning. Our model is experimentally validated on both word-level and sentence-level tasks. In particular, even without an external language model, our proposed model raises the state of the art on the widely accepted Lip Reading Sentences 2 (LRS2) dataset by a large margin, with a relative improvement of 30%.
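The recipe the abstract describes — pretrained unimodal front-ends (MoCo for the visual stream, wav2vec for the audio stream, per the model names in the results below) fused into a shared encoder trained with a joint CTC + seq2seq objective — can be sketched roughly as follows. This is a minimal illustration, not the authors' released code: the front-end modules, feature dimensions (80-dim filterbanks, 256-dim lip features, d_model=512), the assumption that token id 0 serves as both CTC blank and padding, and the loss weight `lam` are all hypothetical placeholders.

```python
# A minimal sketch (not the authors' code) of the paper's recipe: pretrained
# unimodal front-ends feed a fusion encoder trained with CTC + seq2seq losses.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AVSRModel(nn.Module):
    def __init__(self, vocab_size, d_model=512, nhead=8, blank_id=0):
        super().__init__()
        # Stand-ins for the pretrained front-ends; in the paper these would be
        # initialized from MoCo (visual) and wav2vec (audio) checkpoints.
        self.audio_frontend = nn.Linear(80, d_model)   # e.g. fbank -> d_model
        self.video_frontend = nn.Linear(256, d_model)  # e.g. lip ROI -> d_model
        self.fusion = nn.Linear(2 * d_model, d_model)  # concatenate + project
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead, batch_first=True),
            num_layers=6)
        self.ctc_head = nn.Linear(d_model, vocab_size)  # CTC branch
        self.embed = nn.Embedding(vocab_size, d_model)
        self.decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model, nhead, batch_first=True),
            num_layers=6)
        self.out = nn.Linear(d_model, vocab_size)       # seq2seq branch
        self.ctc_loss = nn.CTCLoss(blank=blank_id, zero_infinity=True)

    def forward(self, audio, video, tokens_in):
        # audio: (B, T, 80), video: (B, T, 256), tokens_in: (B, U) decoder input
        a = self.audio_frontend(audio)
        v = self.video_frontend(video)
        enc = self.encoder(self.fusion(torch.cat([a, v], dim=-1)))
        ctc_logits = self.ctc_head(enc)  # (B, T, V)
        causal = nn.Transformer.generate_square_subsequent_mask(tokens_in.size(1))
        dec = self.decoder(self.embed(tokens_in), enc, tgt_mask=causal)
        return ctc_logits, self.out(dec)  # (B, U, V)

def hybrid_loss(model, audio, video, tokens_in, targets, in_lens, tgt_lens,
                lam=0.3):
    # Weighted sum of CTC and cross-entropy, as in hybrid CTC/attention
    # training; lam is a hypothetical interpolation weight.
    ctc_logits, dec_logits = model(audio, video, tokens_in)
    log_probs = F.log_softmax(ctc_logits, dim=-1).transpose(0, 1)  # (T, B, V)
    l_ctc = model.ctc_loss(log_probs, targets, in_lens, tgt_lens)
    l_s2s = F.cross_entropy(dec_logits.reshape(-1, dec_logits.size(-1)),
                            targets.reshape(-1), ignore_index=0)  # 0 = pad here
    return lam * l_ctc + (1 - lam) * l_s2s

# Illustrative shapes only.
model = AVSRModel(vocab_size=40)
audio, video = torch.randn(2, 100, 80), torch.randn(2, 100, 256)
tokens_in = torch.randint(1, 40, (2, 12))
targets = torch.randint(1, 40, (2, 12))
loss = hybrid_loss(model, audio, video, tokens_in, targets,
                   torch.full((2,), 100), torch.full((2,), 12))
```

Only the training objective is shown; at inference, hybrid CTC/attention systems of this kind typically interpolate the two branch scores during beam search.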

Results

Task | Dataset | Metric | Value | Model
Speech Recognition | LRS2 | Test WER | 2.7 | MoCo + wav2vec (w/o extLM)
Audio-Visual Speech Recognition | LRS2 | Test WER | 2.6 | MoCo + wav2vec (w/o extLM)
Lipreading | LRS2 | Word Error Rate (WER) | 43.2 | MoCo + wav2vec (w/o extLM)
Lipreading | Lip Reading in the Wild | Top-1 Accuracy | 85 | MoCo + Wav2Vec by SJTU LUMIA
Natural Language Transduction | LRS2 | Word Error Rate (WER) | 43.2 | MoCo + wav2vec (w/o extLM)
Natural Language Transduction | Lip Reading in the Wild | Top-1 Accuracy | 85 | MoCo + Wav2Vec by SJTU LUMIA
Automatic Speech Recognition (ASR) | LRS2 | Test WER | 2.7 | MoCo + wav2vec (w/o extLM)
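For context on the abstract's "relative improvement of 30%" claim, relative WER improvement is computed as (baseline − new) / baseline. The baseline value in the snippet below is purely illustrative, not a number reported on this page:

```python
# Relative WER improvement: fractional reduction with respect to a baseline.
def relative_improvement(baseline_wer: float, new_wer: float) -> float:
    return (baseline_wer - new_wer) / baseline_wer

# With a hypothetical prior state of the art near 3.9 WER and the 2.7 WER
# reported above, the reduction comes out around 30%.
print(f"{relative_improvement(3.9, 2.7):.0%}")  # -> 31%
```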

Related Papers

Visual-Language Model Knowledge Distillation Method for Image Quality Assessment (2025-07-21)
Task-Specific Audio Coding for Machines: Machine-Learned Latent Features Are Codes for That Machine (2025-07-17)
NonverbalTTS: A Public English Corpus of Text-Aligned Nonverbal Vocalizations with Emotion Annotations for Text-to-Speech (2025-07-17)
A Semi-Supervised Learning Method for the Identification of Bad Exposures in Large Imaging Surveys (2025-07-17)
Making Language Model a Hierarchical Classifier and Generator (2025-07-17)
VisionThink: Smart and Efficient Vision Language Model via Reinforcement Learning (2025-07-17)
The Generative Energy Arena (GEA): Incorporating Energy Awareness in Large Language Model (LLM) Human Evaluations (2025-07-17)
Inverse Reinforcement Learning Meets Large Language Model Post-Training: Basics, Advances, and Opportunities (2025-07-17)