Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Single-Shot Motion Completion with Transformer

Yinglin Duan, Tianyang Shi, Zhengxia Zou, Yenan Lin, Zhehui Qian, Bohan Zhang, Yi Yuan

2021-03-01 · Motion Synthesis
Paper · PDF · Code (official)

Abstract

Motion completion is a challenging and long-discussed problem, which is of great significance in film and game applications. For different motion completion scenarios (in-betweening, in-filling, and blending), most previous methods deal with the completion problem with case-by-case designs. In this work, we propose a simple but effective method that solves multiple motion completion problems under a unified framework and achieves new state-of-the-art accuracy under multiple evaluation settings. Inspired by the recent success of attention-based models, we treat completion as a sequence-to-sequence prediction problem. Our method consists of two modules: a standard transformer encoder with self-attention that learns long-range dependencies of input motions, and a trainable mixture embedding module that models temporal information and discriminates key-frames. Our method can run in a non-autoregressive manner and predict multiple missing frames within a single forward propagation in real time. We finally show the effectiveness of our method in music-dance applications.
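The abstract describes the core idea: missing frames are masked in the input, a learned embedding tells the encoder which frames are keyframes and which must be filled, and self-attention predicts every missing frame in one forward pass. The sketch below illustrates that flow with a single numpy attention head; the shapes, the zero-masking of missing frames, and the additive "type embedding" parameterization of the mixture embedding are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention over all frames at once."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    return softmax(scores, axis=-1) @ V

rng = np.random.default_rng(0)
T, d = 30, 16                       # sequence length, model width (illustrative)
motion = rng.normal(size=(T, d))    # per-frame pose features

# Hypothetical stand-in for the mixture embedding: a position embedding plus a
# learned "keyframe vs. missing" type embedding, so the encoder can
# discriminate which frames are given and which must be completed.
pos_emb  = 0.1 * rng.normal(size=(T, d))
type_emb = 0.1 * rng.normal(size=(2, d))   # row 0 = keyframe, row 1 = missing
is_missing = np.zeros(T, dtype=int)
is_missing[5:25] = 1                # in-betweening: fill frames 5..24

X = np.where(is_missing[:, None] == 1, 0.0, motion)  # blank out missing frames
X = X + pos_emb + type_emb[is_missing]

Wq, Wk, Wv = (rng.normal(size=(d, d)) / np.sqrt(d) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)  # all frames produced in one pass,
print(out.shape)                     # i.e. non-autoregressive: (30, 16)
```

Because every output frame attends to every keyframe simultaneously, no frame-by-frame loop is needed, which is what makes the real-time, single-forward-pass claim plausible.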

Results

All results are for the SSMCT model on the LaFAN1 dataset. The same benchmark numbers are listed under four task categories (Pose Tracking, Motion Synthesis, 10-shot Image Generation, 3D Human Pose Tracking); the values themselves are identical across categories.

Metric     Value
L2Q@5      0.14
L2Q@15     0.36
L2Q@30     0.61
L2P@5      0.22
L2P@15     0.56
L2P@30     1.1
NPSS@5     0.0016
NPSS@15    0.0234
NPSS@30    0.1222
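For readers unfamiliar with the LaFAN1 metrics above: L2Q and L2P are commonly computed as the average L2 error of global joint rotations (unit quaternions) and global joint positions, respectively, over a transition of N frames (the "@5/@15/@30" suffix). A minimal sketch under that assumption, with made-up tensors standing in for real predictions:

```python
import numpy as np

def l2p(pred_pos, gt_pos):
    """Mean L2 distance between predicted and ground-truth global joint
    positions, averaged over frames and joints (LaFAN1-style L2P)."""
    return np.linalg.norm(pred_pos - gt_pos, axis=-1).mean()

def l2q(pred_quat, gt_quat):
    """Mean L2 norm of the difference between predicted and ground-truth
    global joint rotations as unit quaternions (LaFAN1-style L2Q)."""
    return np.linalg.norm(pred_quat - gt_quat, axis=-1).mean()

rng = np.random.default_rng(1)
T, J = 15, 22                        # transition length 15; 22 joints (LaFAN1)
gt_pos = rng.normal(size=(T, J, 3))
pred_pos = gt_pos + 0.01 * rng.normal(size=(T, J, 3))  # small position error

q = rng.normal(size=(T, J, 4))
gt_quat = q / np.linalg.norm(q, axis=-1, keepdims=True)
pred_quat = gt_quat                  # perfect rotation prediction -> L2Q = 0

print(l2q(pred_quat, gt_quat))       # 0.0
print(l2p(pred_pos, gt_pos) < 0.1)   # True: small positional noise
```

Lower is better for all three metric families; NPSS (Normalized Power Spectrum Similarity) is a frequency-domain error not sketched here.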

Related Papers

- DeepGesture: A conversational gesture synthesis system based on emotions and semantics (2025-07-03)
- VolumetricSMPL: A Neural Volumetric Body Model for Efficient Interactions, Contacts, and Collisions (2025-06-29)
- DuetGen: Music Driven Two-Person Dance Generation via Hierarchical Masked Modeling (2025-06-23)
- PlanMoGPT: Flow-Enhanced Progressive Planning for Text to Motion Synthesis (2025-06-22)
- Motion-R1: Chain-of-Thought Reasoning and Reinforcement Learning for Human Motion Generation (2025-06-12)
- DanceChat: Large Language Model-Guided Music-to-Dance Generation (2025-06-12)
- MotionRAG-Diff: A Retrieval-Augmented Diffusion Framework for Long-Term Music-to-Dance Generation (2025-06-03)
- MotionPro: A Precise Motion Controller for Image-to-Video Generation (2025-05-26)