Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.

Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Harmonious Group Choreography with Trajectory-Controllable Diffusion

Yuqin Dai, Wanlu Zhu, Ronghui Li, Zeping Ren, Xiangzheng Zhou, Xiu Li, Jun Li, Jian Yang

2024-03-10 · Motion Synthesis

Abstract

Creating group choreography from music has gained attention in cultural entertainment and virtual reality, aiming to coordinate visually cohesive and diverse group movements. Despite increasing interest, recent works face challenges in achieving aesthetically appealing choreography, primarily due to two key issues: multi-dancer collision and single-dancer foot slide. To address these issues, we propose Trajectory-Controllable Diffusion (TCDiff), a novel approach that harnesses non-overlapping trajectories to facilitate coherent dance movements. Specifically, to tackle dancer collisions, we introduce a Dance-Beat Navigator capable of generating trajectories for multiple dancers based on the music, complemented by a Distance-Consistency loss that maintains appropriate spacing among trajectories within a reasonable threshold. To mitigate foot sliding, we present a Footwork Adaptor that utilizes trajectory displacement between adjacent frames to enable flexible footwork, coupled with a Relative Forward-Kinematic loss to adjust the positioning of individual dancers' root nodes and joints. Extensive experiments demonstrate that our method achieves state-of-the-art results.
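The Distance-Consistency loss described above penalizes dancer trajectories that come closer than some spacing threshold. The paper does not give its exact formulation here, so the following is a minimal sketch under the assumption that it is a pairwise hinge penalty on inter-dancer root distance per frame; the function name, shapes, and threshold are illustrative, not the authors' implementation.

```python
import numpy as np

def distance_consistency_loss(traj, threshold=1.0):
    """Hypothetical sketch of a distance-consistency penalty.

    traj: array of shape (num_dancers, num_frames, 2), the ground-plane
          root trajectory of each dancer.
    Any pair of dancers closer than `threshold` at a frame incurs a
    hinge penalty, discouraging collisions; pairs farther apart cost 0.
    """
    n = traj.shape[0]
    total, pairs = 0.0, 0
    for i in range(n):
        for j in range(i + 1, n):
            # Per-frame Euclidean distance between dancers i and j.
            dist = np.linalg.norm(traj[i] - traj[j], axis=-1)
            # Hinge: penalize only distances below the threshold.
            total += np.clip(threshold - dist, 0.0, None).mean()
            pairs += 1
    return total / max(pairs, 1)
```

For example, two dancers held 0.5 m apart for the whole clip incur a mean penalty of 0.5 with a 1.0 m threshold, while dancers kept 2 m apart incur none.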

Results

Task: Motion Synthesis · Dataset: AIOZ-GDANCE · Model: TCDiff

Metric   Value
FID      42.39
GMC      81.48
GMR      21.12
GenDiv   14.37
MMC       0.25
PFC       0.54
TIF       0.15

Related Papers

DeepGesture: A conversational gesture synthesis system based on emotions and semantics (2025-07-03)
VolumetricSMPL: A Neural Volumetric Body Model for Efficient Interactions, Contacts, and Collisions (2025-06-29)
DuetGen: Music Driven Two-Person Dance Generation via Hierarchical Masked Modeling (2025-06-23)
PlanMoGPT: Flow-Enhanced Progressive Planning for Text to Motion Synthesis (2025-06-22)
Motion-R1: Chain-of-Thought Reasoning and Reinforcement Learning for Human Motion Generation (2025-06-12)
DanceChat: Large Language Model-Guided Music-to-Dance Generation (2025-06-12)
MotionRAG-Diff: A Retrieval-Augmented Diffusion Framework for Long-Term Music-to-Dance Generation (2025-06-03)
MotionPro: A Precise Motion Controller for Image-to-Video Generation (2025-05-26)