Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Action-conditioned On-demand Motion Generation

Qiujing Lu, Yipeng Zhang, Mingjian Lu, Vwani Roychowdhury

2022-07-17 · Contrastive Learning · Motion Generation · Human Action Generation

Paper · PDF · Code (official)

Abstract

We propose a novel framework, On-Demand MOtion Generation (ODMO), for generating realistic and diverse long-term 3D human motion sequences conditioned only on action types with an additional capability of customization. ODMO shows improvements over SOTA approaches on all traditional motion evaluation metrics when evaluated on three public datasets (HumanAct12, UESTC, and MoCap). Furthermore, we provide both qualitative evaluations and quantitative metrics demonstrating several first-known customization capabilities afforded by our framework, including mode discovery, interpolation, and trajectory customization. These capabilities significantly widen the spectrum of potential applications of such motion generation models. The novel on-demand generative capabilities are enabled by innovations in both the encoder and decoder architectures: (i) Encoder: Utilizing contrastive learning in low-dimensional latent space to create a hierarchical embedding of motion sequences, where not only the codes of different action types form different groups, but within an action type, codes of similar inherent patterns (motion styles) cluster together, making them readily discoverable; (ii) Decoder: Using a hierarchical decoding strategy where the motion trajectory is reconstructed first and then used to reconstruct the whole motion sequence. Such an architecture enables effective trajectory control. Our code is released on the Github page: https://github.com/roychowdhuryresearch/ODMO
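The encoder's key idea, pulling latent codes of the same action type together while keeping motion styles discoverable as sub-clusters, can be illustrated with a generic supervised contrastive loss over latent codes. This is a minimal NumPy sketch of that family of losses, not the paper's exact objective; the function name and temperature value are illustrative.

```python
import numpy as np

def supervised_contrastive_loss(z, labels, tau=0.1):
    """Supervised contrastive loss over a batch of latent codes.

    z: (N, D) latent codes (L2-normalized inside), labels: (N,) action types.
    Codes sharing an action label are pulled together in cosine-similarity
    space; codes of different actions are pushed apart.
    """
    z = z / np.linalg.norm(z, axis=1, keepdims=True)     # unit vectors
    sim = z @ z.T / tau                                  # (N, N) scaled cosine sims
    n = len(labels)
    self_mask = np.eye(n, dtype=bool)
    sim = np.where(self_mask, -np.inf, sim)              # exclude self-pairs
    # log-softmax over all non-self pairs for each anchor
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    pos = (labels[:, None] == labels[None, :]) & ~self_mask
    # negative mean log-probability of positives, averaged over anchors
    per_anchor = -np.where(pos, log_prob, 0.0).sum(axis=1) / np.maximum(
        pos.sum(axis=1), 1)
    return per_anchor.mean()
```

With codes already clustered by action, this loss is lower than with mismatched labels, which is what makes the same-action groups (and within them, style sub-clusters) emerge during training.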

Results

Task                    | Dataset    | Metric        | Value | Model
------------------------|------------|---------------|-------|------
Activity Recognition    | CMU Mocap  | Accuracy      | 93.51 | ODMO
Activity Recognition    | CMU Mocap  | Diversity     | 6.56  | ODMO
Activity Recognition    | CMU Mocap  | FID           | 34    | ODMO
Activity Recognition    | CMU Mocap  | Multimodality | 2.49  | ODMO
Activity Recognition    | HumanAct12 | Accuracy      | 97.81 | ODMO
Activity Recognition    | HumanAct12 | Diversity     | 0.705 | ODMO
Activity Recognition    | HumanAct12 | FID           | 0.12  | ODMO
Activity Recognition    | HumanAct12 | Multimodality | 2.57  | ODMO
Activity Recognition    | UESTC RGB-D | Accuracy      | 93.67 | ODMO
Activity Recognition    | UESTC RGB-D | Diversity     | 7.11  | ODMO
Activity Recognition    | UESTC RGB-D | FID           | 0.15  | ODMO
Activity Recognition    | UESTC RGB-D | Test          | 0.17  | ODMO
Human action generation | CMU Mocap  | Accuracy      | 93.51 | ODMO
Human action generation | CMU Mocap  | Diversity     | 6.56  | ODMO
Human action generation | CMU Mocap  | FID           | 34    | ODMO
Human action generation | CMU Mocap  | Multimodality | 2.49  | ODMO
Human action generation | HumanAct12 | Accuracy      | 97.81 | ODMO
Human action generation | HumanAct12 | Diversity     | 0.705 | ODMO
Human action generation | HumanAct12 | FID           | 0.12  | ODMO
Human action generation | HumanAct12 | Multimodality | 2.57  | ODMO
Human action generation | UESTC RGB-D | Accuracy      | 93.67 | ODMO
Human action generation | UESTC RGB-D | Diversity     | 7.11  | ODMO
Human action generation | UESTC RGB-D | FID           | 0.15  | ODMO
Human action generation | UESTC RGB-D | Test          | 0.17  | ODMO
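The FID and Diversity metrics above follow the evaluation protocol common in conditional motion generation, computed over features extracted by a pretrained action-recognition network. A minimal NumPy sketch, assuming such feature vectors are already available; `diversity`, `fid`, and the sampling scheme are illustrative, not the paper's exact evaluation code:

```python
import numpy as np

def diversity(feats, n_pairs=200, seed=0):
    """Average L2 distance between randomly paired motion features.

    Higher values mean the generated motions are more spread out in
    feature space.
    """
    rng = np.random.default_rng(seed)
    i = rng.integers(0, len(feats), n_pairs)
    j = rng.integers(0, len(feats), n_pairs)
    return float(np.linalg.norm(feats[i] - feats[j], axis=1).mean())

def _sqrtm_psd(m):
    """Matrix square root of a symmetric positive-semidefinite matrix."""
    vals, vecs = np.linalg.eigh(m)
    vals = np.clip(vals, 0.0, None)
    return (vecs * np.sqrt(vals)) @ vecs.T

def fid(real, gen):
    """Frechet distance between Gaussian fits of real vs generated features."""
    mu_r, mu_g = real.mean(axis=0), gen.mean(axis=0)
    cov_r = np.cov(real, rowvar=False)
    cov_g = np.cov(gen, rowvar=False)
    sr = _sqrtm_psd(cov_r)
    # trace(sqrtm(cov_r @ cov_g)) computed via the symmetric form sr @ cov_g @ sr
    tr_covmean = np.trace(_sqrtm_psd(sr @ cov_g @ sr))
    return float(((mu_r - mu_g) ** 2).sum()
                 + np.trace(cov_r + cov_g) - 2.0 * tr_covmean)
```

Lower FID means the generated feature distribution is closer to the real one; Accuracy (classifier accuracy on generated motions) and Multimodality (within-action diversity) complete the usual metric set.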

Related Papers

SemCSE: Semantic Contrastive Sentence Embeddings Using LLM-Generated Summaries For Scientific Abstracts (2025-07-17)
HapticCap: A Multimodal Dataset and Task for Understanding User Experience of Vibration Haptic Signals (2025-07-17)
Overview of the TalentCLEF 2025: Skill and Job Title Intelligence for Human Capital Management (2025-07-17)
SGCL: Unifying Self-Supervised and Supervised Learning for Graph Recommendation (2025-07-17)
Similarity-Guided Diffusion for Contrastive Sequential Recommendation (2025-07-16)
LLM-Driven Dual-Level Multi-Interest Modeling for Recommendation (2025-07-15)
Latent Space Consistency for Sparse-View CT Reconstruction (2025-07-15)
Self-supervised pretraining of vision transformers for animal behavioral analysis and neural encoding (2025-07-13)