Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


HumanML3D

3D · Texts · Introduced 2022-01-01

HumanML3D is a 3D human motion-language dataset built from a combination of the HumanAct12 and AMASS datasets. It covers a broad range of human actions, including daily activities (e.g., 'walking', 'jumping'), sports (e.g., 'swimming', 'playing golf'), acrobatics (e.g., 'cartwheel'), and artistry (e.g., 'dancing'). Overall, the HumanML3D dataset consists of 14,616 motions paired with 44,970 textual descriptions drawn from a vocabulary of 5,371 distinct words. The total length of the motions amounts to 28.59 hours; the average motion length is 7.1 seconds, and the average description length is 12 words.
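As a quick sanity check, the quoted averages follow directly from the totals above. A minimal sketch (all numbers taken from the description; the variable names are illustrative only):

```python
# Summary statistics quoted in the HumanML3D description.
NUM_MOTIONS = 14_616
NUM_DESCRIPTIONS = 44_970
TOTAL_HOURS = 28.59

# Average motion length in seconds: total duration divided by motion count.
avg_motion_seconds = TOTAL_HOURS * 3600 / NUM_MOTIONS

# Each motion carries several descriptions on average.
descriptions_per_motion = NUM_DESCRIPTIONS / NUM_MOTIONS

print(f"average motion length ≈ {avg_motion_seconds:.1f} s")      # ≈ 7.0 s, consistent with the quoted ~7.1 s
print(f"descriptions per motion ≈ {descriptions_per_motion:.2f}")  # ≈ 3.08
```

The small gap between the computed ~7.0 s and the quoted 7.1 s is expected, since 28.59 hours is itself a rounded total.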

Benchmarks

10-shot image generation: FID, R Precision Top3, Diversity, Multimodality
3D Human Pose Tracking: FID, R Precision Top3, Diversity, Multimodality
Motion Captioning: BLEU-4, BERTScore
Motion Synthesis: FID, R Precision Top3, Diversity, Multimodality
Pose Tracking: FID, R Precision Top3, Diversity, Multimodality

Statistics

Papers: 201
Benchmarks: 18

Links

Homepage

Tasks

10-shot image generation
3D Human Pose Tracking
Motion Captioning
Motion Generation
Motion Synthesis
Pose Tracking