Mingyuan Zhang, Zhongang Cai, Liang Pan, Fangzhou Hong, Xinying Guo, Lei Yang, Ziwei Liu
Human motion modeling is central to many modern graphics applications, yet it typically requires professional skills. To remove this skill barrier for laypeople, recent motion generation methods can directly generate human motions conditioned on natural language. However, achieving diverse and fine-grained motion generation from varied text inputs remains challenging. To address this problem, we propose MotionDiffuse, the first diffusion model-based text-driven motion generation framework, which demonstrates several desirable properties over existing methods. 1) Probabilistic Mapping. Instead of a deterministic language-motion mapping, MotionDiffuse generates motions through a series of denoising steps in which variations are injected. 2) Realistic Synthesis. MotionDiffuse excels at modeling complex data distributions and generating vivid motion sequences. 3) Multi-Level Manipulation. MotionDiffuse responds to fine-grained instructions on body parts and supports arbitrary-length motion synthesis with time-varying text prompts. Our experiments show that MotionDiffuse outperforms existing SoTA methods by convincing margins on text-driven motion generation and action-conditioned motion generation. A qualitative analysis further demonstrates MotionDiffuse's controllability for comprehensive motion generation. Homepage: https://mingyuan-zhang.github.io/projects/MotionDiffuse.html
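The "series of denoising steps in which variations are injected" can be illustrated with a generic DDPM-style reverse process. The sketch below is a minimal, hypothetical stand-in, not MotionDiffuse's actual implementation: the real model uses a learned, text-conditioned Transformer denoiser, whereas `dummy_denoiser` here is a placeholder, and the noise schedule and motion shape (60 frames, 22 joints x 3 coordinates) are illustrative assumptions.

```python
import numpy as np

def reverse_diffusion(denoise_fn, shape, num_steps=50, seed=0):
    """Toy DDPM-style reverse process: start from Gaussian noise and
    iteratively denoise. Fresh noise is injected at every step except
    the last, which is what makes the language-to-motion mapping
    probabilistic rather than deterministic."""
    rng = np.random.default_rng(seed)
    betas = np.linspace(1e-4, 0.02, num_steps)   # linear noise schedule (assumed)
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)

    x = rng.standard_normal(shape)               # x_T: pure noise
    for t in reversed(range(num_steps)):
        eps_hat = denoise_fn(x, t)               # predicted noise (text-conditioned in the real model)
        coef = betas[t] / np.sqrt(1.0 - alpha_bars[t])
        mean = (x - coef * eps_hat) / np.sqrt(alphas[t])
        noise = rng.standard_normal(shape) if t > 0 else np.zeros(shape)
        x = mean + np.sqrt(betas[t]) * noise     # variation injected at each step
    return x

# Hypothetical placeholder for the learned denoiser.
dummy_denoiser = lambda x, t: np.zeros_like(x)

# One "motion sequence": 60 frames, 22 joints x 3 coordinates (illustrative).
motion = reverse_diffusion(dummy_denoiser, shape=(60, 66))
print(motion.shape)
```

Because the starting noise and the per-step injected noise differ across runs, the same text prompt can yield distinct motion sequences, which is the source of the diversity the abstract highlights.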
| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Motion Synthesis | HumanML3D | Diversity | 9.41 | MotionDiffuse |
| Motion Synthesis | HumanML3D | FID | 0.63 | MotionDiffuse |
| Motion Synthesis | HumanML3D | Multimodality | 1.553 | MotionDiffuse |
| Motion Synthesis | HumanML3D | R Precision Top3 | 0.782 | MotionDiffuse |
| Motion Synthesis | KIT Motion-Language | Diversity | 11.1 | MotionDiffuse |
| Motion Synthesis | KIT Motion-Language | FID | 1.954 | MotionDiffuse |
| Motion Synthesis | KIT Motion-Language | Multimodality | 0.73 | MotionDiffuse |
| Motion Synthesis | KIT Motion-Language | R Precision Top3 | 0.739 | MotionDiffuse |