Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


EMAGE: Towards Unified Holistic Co-Speech Gesture Generation via Expressive Masked Audio Gesture Modeling

Haiyang Liu, Zihao Zhu, Giorgio Becherini, Yichen Peng, Mingyang Su, You Zhou, Xuefei Zhe, Naoya Iwamoto, Bo Zheng, Michael J. Black

2023-12-31 · CVPR 2024 · 3D Face Animation · Gesture Generation · Rhythm
Paper · PDF · Code (official)

Abstract

We propose EMAGE, a framework to generate full-body human gestures from audio and masked gestures, encompassing facial, local body, hand, and global movements. To achieve this, we first introduce BEAT2 (BEAT-SMPLX-FLAME), a new mesh-level holistic co-speech dataset. BEAT2 combines a MoShed SMPL-X body with FLAME head parameters and further refines the modeling of head, neck, and finger movements, offering a community-standardized, high-quality 3D motion-captured dataset. EMAGE leverages masked body gesture priors during training to boost inference performance. It involves a Masked Audio Gesture Transformer, facilitating joint training on audio-to-gesture generation and masked gesture reconstruction to effectively encode audio and body gesture hints. Encoded body hints from masked gestures are then separately employed to generate facial and body movements. Moreover, EMAGE adaptively merges speech features from the audio's rhythm and content and utilizes four compositional VQ-VAEs to enhance the results' fidelity and diversity. Experiments demonstrate that EMAGE generates holistic gestures with state-of-the-art performance and is flexible in accepting predefined spatial-temporal gesture inputs, generating complete, audio-synchronized results. Our code and dataset are available at https://pantomatrix.github.io/EMAGE/
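The masked gesture reconstruction objective described in the abstract starts from randomly masking frames of a motion sequence and asking the model to recover them from audio and the unmasked context. A minimal sketch of that masking step (function names, the mask ratio, and the constant mask token are illustrative assumptions, not EMAGE's actual implementation):

```python
import numpy as np

def random_frame_mask(num_frames, mask_ratio, rng):
    """Pick a random subset of frames to hide during masked-gesture training."""
    num_masked = int(round(num_frames * mask_ratio))
    idx = rng.choice(num_frames, size=num_masked, replace=False)
    mask = np.zeros(num_frames, dtype=bool)
    mask[idx] = True
    return mask

def apply_mask(gesture_seq, mask, mask_token=0.0):
    """Replace masked frames (rows of a T x D pose array) with a mask token.

    In a real model the mask token would be a learned embedding; a constant
    stands in for it here.
    """
    out = gesture_seq.copy()
    out[mask] = mask_token
    return out

rng = np.random.default_rng(0)
poses = rng.standard_normal((100, 6))        # 100 frames, 6-D toy pose vectors
mask = random_frame_mask(100, 0.3, rng)      # hide 30% of frames
masked_poses = apply_mask(poses, mask)       # model input; `poses` is the target
```

The model is then trained to reconstruct the original frames at the masked positions, jointly with audio-to-gesture generation.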

Results

Task                | Dataset | Metric | Value  | Model
3D Shape Generation | BEAT2   | FGD    | 0.5512 | EMAGE
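FGD (Fréchet Gesture Distance), the metric in the table above, follows the same recipe as FID: fit Gaussians to feature embeddings of real and generated gestures and compute the Fréchet distance between them. A minimal sketch of that distance, assuming the feature means and covariances are already estimated (this is the generic formula, not the paper's evaluation code):

```python
import numpy as np

def frechet_distance(mu1, cov1, mu2, cov2):
    """Fréchet distance between two Gaussians N(mu1, cov1) and N(mu2, cov2):
    ||mu1 - mu2||^2 + Tr(cov1 + cov2 - 2 (cov1 cov2)^{1/2}).
    """
    diff = mu1 - mu2
    # Eigenvalues of a product of PSD matrices are real and non-negative,
    # so Tr((cov1 cov2)^{1/2}) is the sum of their square roots.
    eigvals = np.linalg.eigvals(cov1 @ cov2)
    covmean_trace = np.sum(np.sqrt(np.maximum(eigvals.real, 0.0)))
    return float(diff @ diff + np.trace(cov1) + np.trace(cov2)
                 - 2.0 * covmean_trace)
```

Lower is better: identical feature distributions give 0, and with identity covariances the distance reduces to the squared distance between the means.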

Related Papers

DeepGesture: A conversational gesture synthesis system based on emotions and semantics (2025-07-03)
Exploring Adapter Design Tradeoffs for Low Resource Music Generation (2025-06-26)
CBF-AFA: Chunk-Based Multi-SSL Fusion for Automatic Fluency Assessment (2025-06-25)
Let Your Video Listen to Your Music! (2025-06-23)
From Generality to Mastery: Composer-Style Symbolic Music Generation via Large-Scale Pre-training (2025-06-20)
DanceChat: Large Language Model-Guided Music-to-Dance Generation (2025-06-12)
Rhythm Features for Speaker Identification (2025-06-07)
MMSU: A Massive Multi-task Spoken Language Understanding and Reasoning Benchmark (2025-06-05)