Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Robust Motion In-betweening

Félix G. Harvey, Mike Yurick, Derek Nowrouzezahrai, Christopher Pal

2021-02-09 · Human Pose Forecasting · Motion Prediction · Motion Synthesis

Paper · PDF · Code (official)

Abstract

In this work we present a novel, robust transition generation technique that can serve as a new tool for 3D animators, based on adversarial recurrent neural networks. The system synthesizes high-quality motions that use temporally-sparse keyframes as animation constraints. This is reminiscent of the job of in-betweening in traditional animation pipelines, in which an animator draws motion frames between provided keyframes. We first show that a state-of-the-art motion prediction model cannot be easily converted into a robust transition generator when only adding conditioning information about future keyframes. To solve this problem, we then propose two novel additive embedding modifiers that are applied at each timestep to latent representations encoded inside the network's architecture. One modifier is a time-to-arrival embedding that allows variations of the transition length with a single model. The other is a scheduled target noise vector that allows the system to be robust to target distortions and to sample different transitions given fixed keyframes. To qualitatively evaluate our method, we present a custom MotionBuilder plugin that uses our trained model to perform in-betweening in production scenarios. To quantitatively evaluate performance on transitions and generalizations to longer time horizons, we present well-defined in-betweening benchmarks on a subset of the widely used Human3.6M dataset and on LaFAN1, a novel high quality motion capture dataset that is more appropriate for transition generation. We are releasing this new dataset along with this work, with accompanying code for reproducing our baseline results.
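The two additive embedding modifiers described in the abstract can be illustrated in isolation. Below is a minimal sketch of a time-to-arrival embedding, constructed like a sinusoidal positional encoding but indexed by the number of frames remaining before the target keyframe; the basis frequencies, latent size, and loop structure here are illustrative assumptions, not the paper's exact values.

```python
import numpy as np

def time_to_arrival_embedding(tta, dim):
    """Sinusoidal embedding of the number of frames remaining until
    the target keyframe (a sketch of the paper's time-to-arrival
    modifier; the exact frequency basis is an assumption here)."""
    # Same construction as transformer positional encodings,
    # but indexed by time-to-arrival rather than absolute position.
    i = np.arange(dim // 2)
    freqs = 1.0 / (10000.0 ** (2 * i / dim))
    angles = tta * freqs
    emb = np.empty(dim, dtype=np.float64)
    emb[0::2] = np.sin(angles)
    emb[1::2] = np.cos(angles)
    return emb

# At each generation step the embedding is simply added to a latent
# representation inside the network, so a single model can handle
# variable transition lengths.
hidden = np.zeros(256)           # hypothetical latent vector
transition_length = 30           # frames between the two keyframes
for t in range(transition_length):
    tta = transition_length - t  # frames left before the keyframe
    conditioned = hidden + time_to_arrival_embedding(tta, 256)
```

Because the modifier is additive, it leaves the network architecture untouched; only the input to each recurrent step changes as the target keyframe approaches.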

Results

| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Pose Tracking | LaFAN1 | L2P@5 | 0.23 | TG-complete |
| Pose Tracking | LaFAN1 | L2P@15 | 0.65 | TG-complete |
| Pose Tracking | LaFAN1 | L2P@30 | 1.28 | TG-complete |
| Pose Tracking | LaFAN1 | L2Q@5 | 0.17 | TG-complete |
| Pose Tracking | LaFAN1 | L2Q@15 | 0.42 | TG-complete |
| Pose Tracking | LaFAN1 | L2Q@30 | 0.69 | TG-complete |
| Pose Tracking | LaFAN1 | NPSS@5 | 0.002 | TG-complete |
| Pose Tracking | LaFAN1 | NPSS@15 | 0.0258 | TG-complete |
| Pose Tracking | LaFAN1 | NPSS@30 | 0.1328 | TG-complete |
| Motion Synthesis | LaFAN1 | L2P@5 | 0.23 | TG-complete |
| Motion Synthesis | LaFAN1 | L2P@15 | 0.65 | TG-complete |
| Motion Synthesis | LaFAN1 | L2P@30 | 1.28 | TG-complete |
| Motion Synthesis | LaFAN1 | L2Q@5 | 0.17 | TG-complete |
| Motion Synthesis | LaFAN1 | L2Q@15 | 0.42 | TG-complete |
| Motion Synthesis | LaFAN1 | L2Q@30 | 0.69 | TG-complete |
| Motion Synthesis | LaFAN1 | NPSS@5 | 0.002 | TG-complete |
| Motion Synthesis | LaFAN1 | NPSS@15 | 0.0258 | TG-complete |
| Motion Synthesis | LaFAN1 | NPSS@30 | 0.1328 | TG-complete |
| 10-shot image generation | LaFAN1 | L2P@5 | 0.23 | TG-complete |
| 10-shot image generation | LaFAN1 | L2P@15 | 0.65 | TG-complete |
| 10-shot image generation | LaFAN1 | L2P@30 | 1.28 | TG-complete |
| 10-shot image generation | LaFAN1 | L2Q@5 | 0.17 | TG-complete |
| 10-shot image generation | LaFAN1 | L2Q@15 | 0.42 | TG-complete |
| 10-shot image generation | LaFAN1 | L2Q@30 | 0.69 | TG-complete |
| 10-shot image generation | LaFAN1 | NPSS@5 | 0.002 | TG-complete |
| 10-shot image generation | LaFAN1 | NPSS@15 | 0.0258 | TG-complete |
| 10-shot image generation | LaFAN1 | NPSS@30 | 0.1328 | TG-complete |
| 3D Human Pose Tracking | LaFAN1 | L2P@5 | 0.23 | TG-complete |
| 3D Human Pose Tracking | LaFAN1 | L2P@15 | 0.65 | TG-complete |
| 3D Human Pose Tracking | LaFAN1 | L2P@30 | 1.28 | TG-complete |
| 3D Human Pose Tracking | LaFAN1 | L2Q@5 | 0.17 | TG-complete |
| 3D Human Pose Tracking | LaFAN1 | L2Q@15 | 0.42 | TG-complete |
| 3D Human Pose Tracking | LaFAN1 | L2Q@30 | 0.69 | TG-complete |
| 3D Human Pose Tracking | LaFAN1 | NPSS@5 | 0.002 | TG-complete |
| 3D Human Pose Tracking | LaFAN1 | NPSS@15 | 0.0258 | TG-complete |
| 3D Human Pose Tracking | LaFAN1 | NPSS@30 | 0.1328 | TG-complete |
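The L2P and L2Q metrics reported above measure average L2 error on global joint positions and joint rotations (quaternions), respectively, with @5/@15/@30 denoting the transition length in frames. A minimal sketch of the two error computations follows; the array layout and the averaging scheme are assumptions, and NPSS, a power-spectrum-based similarity metric, is omitted here.

```python
import numpy as np

def l2p(pred_pos, gt_pos):
    """Mean L2 distance between predicted and ground-truth global
    joint positions over a transition (sketch of L2P; the exact
    normalization used by the benchmark is an assumption)."""
    # pred_pos, gt_pos: (frames, joints, 3) global positions
    return np.mean(np.linalg.norm(pred_pos - gt_pos, axis=-1))

def l2q(pred_quat, gt_quat):
    """Mean L2 norm of the difference between predicted and
    ground-truth joint quaternions (sketch of L2Q)."""
    # pred_quat, gt_quat: (frames, joints, 4) unit quaternions
    return np.mean(np.linalg.norm(pred_quat - gt_quat, axis=-1))
```

Both errors grow with the transition length, which is consistent with the @30 rows being the largest values in each metric family.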

Related Papers

Stochastic Human Motion Prediction with Memory of Action Transition and Action Characteristic (2025-07-05)
Temporal Continual Learning with Prior Compensation for Human Motion Prediction (2025-07-05)
DeepGesture: A conversational gesture synthesis system based on emotions and semantics (2025-07-03)
VolumetricSMPL: A Neural Volumetric Body Model for Efficient Interactions, Contacts, and Collisions (2025-06-29)
DuetGen: Music Driven Two-Person Dance Generation via Hierarchical Masked Modeling (2025-06-23)
PlanMoGPT: Flow-Enhanced Progressive Planning for Text to Motion Synthesis (2025-06-22)
AMPLIFY: Actionless Motion Priors for Robot Learning from Videos (2025-06-17)
FocalAD: Local Motion Planning for End-to-End Autonomous Driving (2025-06-13)