Wenshuo Chen, Haozhe Jia, Songning Lai, Keming Wu, Hongru Xiao, Lijie Hu, Yutao Yue
Rapid progress in text-to-motion generation has been largely driven by diffusion models. However, existing methods focus solely on temporal modeling, overlooking frequency-domain analysis. We identify two key phases in motion denoising: the **semantic planning stage** and the **fine-grained improving stage**. To address these phases effectively, we propose the **Fre**quency **e**nhanced **t**ext-**to**-**m**otion diffusion model (**Free-T2M**), which incorporates stage-specific consistency losses that enhance the robustness of static features and improve fine-grained accuracy. Extensive experiments demonstrate the effectiveness of our method. On StableMoFusion, Free-T2M reduces the FID from **0.189** to **0.051**, establishing new state-of-the-art performance among diffusion-based architectures. These findings highlight the importance of incorporating frequency-domain insights into text-to-motion generation for more precise and robust results.
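The abstract describes stage-specific consistency losses keyed to the two denoising phases. The sketch below illustrates one plausible way such a loss could be structured: a low-frequency consistency term (low temporal frequencies roughly correspond to the coarse, "semantic" shape of a motion) weighted more heavily during early, high-noise denoising steps, with full-band detail emphasized later. All function names, weights, and the stage split here are illustrative assumptions, not the paper's actual implementation.

```python
import torch


def frequency_consistency_loss(pred, target, cutoff_ratio=0.25):
    """Hypothetical low-frequency consistency loss (not the paper's code).

    pred, target: (batch, frames, features) motion tensors.
    Compares only the lowest `cutoff_ratio` fraction of temporal
    frequency bins, on the intuition that low frequencies carry the
    coarse semantic structure of the motion.
    """
    # Real FFT along the temporal axis -> (batch, freq_bins, features)
    pred_f = torch.fft.rfft(pred, dim=1)
    target_f = torch.fft.rfft(target, dim=1)

    # Keep only the low-frequency band and penalize its discrepancy.
    k = max(1, int(pred_f.shape[1] * cutoff_ratio))
    return torch.mean(torch.abs(pred_f[:, :k] - target_f[:, :k]) ** 2)


def staged_loss(pred, target, t, num_steps, base_loss, split=0.5):
    """Combine a standard denoising loss with stage-specific terms.

    Early (high-noise) steps emphasize low-frequency consistency
    (semantic planning); late steps add a full-band term for
    fine-grained refinement. The weights and the 50/50 split are
    assumptions for illustration only.
    """
    low_freq = frequency_consistency_loss(pred, target)
    if t > num_steps * split:
        # Early, high-noise steps: semantic planning stage.
        return base_loss + 1.0 * low_freq
    else:
        # Late steps: fine-grained improving stage.
        full_band = torch.mean((pred - target) ** 2)
        return base_loss + 0.1 * low_freq + 1.0 * full_band
```

In this reading, the frequency term acts as a regularizer on top of the usual diffusion objective rather than replacing it, which matches the abstract's framing of the losses as enhancing robustness and fine-grained accuracy.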
| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Motion Synthesis | HumanML3D | Diversity | 9.48 | Free-T2M (StableMoFusion) |
| Motion Synthesis | HumanML3D | FID | 0.051 | Free-T2M (StableMoFusion) |
| Motion Synthesis | HumanML3D | R Precision Top3 | 0.803 | Free-T2M (StableMoFusion) |
| Motion Synthesis | KIT Motion-Language | Diversity | 10.902 | Free-T2M (StableMoFusion) |
| Motion Synthesis | KIT Motion-Language | FID | 0.155 | Free-T2M (StableMoFusion) |
| Motion Synthesis | KIT Motion-Language | R Precision Top3 | 0.789 | Free-T2M (StableMoFusion) |