Wentao Zhu, Xiaoxuan Ma, Zhaoyang Liu, Libin Liu, Wayne Wu, Yizhou Wang
We present a unified perspective on tackling various human-centric video tasks by learning human motion representations from large-scale and heterogeneous data resources. Specifically, we propose a pretraining stage in which a motion encoder is trained to recover the underlying 3D motion from noisy partial 2D observations. The motion representations acquired this way incorporate geometric, kinematic, and physical knowledge about human motion and can be readily transferred to multiple downstream tasks. We implement the motion encoder with a Dual-stream Spatio-temporal Transformer (DSTformer) neural network, which captures long-range spatio-temporal relationships among the skeletal joints comprehensively and adaptively, achieving the lowest 3D pose estimation error to date when trained from scratch. Furthermore, our framework achieves state-of-the-art performance on all three downstream tasks by simply finetuning the pretrained motion encoder with a shallow regression head (1-2 layers), demonstrating the versatility of the learned motion representations. Code and models are available at https://motionbert.github.io/
| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| 3D Human Pose Estimation | 3DPW | MPJPE | 68.8 | MotionBERT-HybrIK |
| 3D Human Pose Estimation | 3DPW | MPVPE | 79.4 | MotionBERT-HybrIK |
| 3D Human Pose Estimation | 3DPW | PA-MPJPE | 40.6 | MotionBERT-HybrIK |
| 3D Human Pose Estimation | 3DPW | MPJPE | 76.9 | MotionBERT (Finetune) |
| 3D Human Pose Estimation | 3DPW | MPVPE | 88.1 | MotionBERT (Finetune) |
| 3D Human Pose Estimation | 3DPW | PA-MPJPE | 47.2 | MotionBERT (Finetune) |
| 3D Human Pose Estimation | Human3.6M | Average MPJPE (mm) | 37.5 | MotionBERT (Finetune) |
| 3D Human Pose Estimation | Human3.6M | Frames Needed | 243 | MotionBERT (Finetune) |
| 3D Human Pose Estimation | Human3.6M | Average MPJPE (mm) | 39.2 | MotionBERT (Scratch) |
| 3D Human Pose Estimation | Human3.6M | Frames Needed | 243 | MotionBERT (Scratch) |
| Skeleton-Based Action Recognition | NTU RGB+D | Accuracy (CS) | 93.0 | MotionBERT (Finetune) |
| Skeleton-Based Action Recognition | NTU RGB+D | Accuracy (CV) | 97.2 | MotionBERT (Finetune) |
| Classification | Full-body Parkinson’s disease dataset | F1-score (weighted) | 0.47 | MotionBERT |
| Classification | Full-body Parkinson’s disease dataset | F1-score (weighted) | 0.43 | MotionBERT-LITE |
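The pose rows above report MPJPE and PA-MPJPE in millimetres. As a reference for how these two metrics relate, here is a minimal NumPy sketch using their standard definitions (this is illustrative code, not taken from the MotionBERT repository): MPJPE is the mean per-joint Euclidean distance, while PA-MPJPE first rigidly aligns the prediction to the ground truth with a similarity transform (Procrustes analysis) so that only pose shape, not global scale, rotation, or translation, is penalized.

```python
import numpy as np

def mpjpe(pred, gt):
    """Mean Per-Joint Position Error: average Euclidean distance between
    predicted and ground-truth joints, shape (J, 3), same units as input."""
    return np.linalg.norm(pred - gt, axis=-1).mean()

def pa_mpjpe(pred, gt):
    """Procrustes-Aligned MPJPE: find the similarity transform (scale,
    rotation, translation) that best maps pred onto gt, then measure MPJPE."""
    mu_p, mu_g = pred.mean(axis=0), gt.mean(axis=0)
    p, g = pred - mu_p, gt - mu_g            # center both point sets
    U, S, Vt = np.linalg.svd(p.T @ g)        # SVD of the cross-covariance
    R = Vt.T @ U.T                           # optimal rotation (Kabsch)
    if np.linalg.det(R) < 0:                 # correct an improper reflection
        Vt[-1] *= -1
        S[-1] *= -1
        R = Vt.T @ U.T
    scale = S.sum() / (p ** 2).sum()         # optimal isotropic scale
    aligned = scale * p @ R.T + mu_g         # apply the similarity transform
    return mpjpe(aligned, gt)
```

Because PA-MPJPE discards global misalignment, it is always less than or equal to MPJPE for the same prediction, which matches the pattern in the table (e.g. 68.8 mm MPJPE vs. 40.6 mm PA-MPJPE on 3DPW).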