Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.

Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Learning 3D Human Shape and Pose from Dense Body Parts

Hongwen Zhang, Jie Cao, Guo Lu, Wanli Ouyang, Zhenan Sun

2019-12-31 · 3D Human Pose Estimation · 3D Reconstruction · Human Mesh Recovery · 3D human pose and shape estimation · 3D Human Reconstruction

Paper · PDF · Code (official)

Abstract

Reconstructing 3D human shape and pose from monocular images is challenging despite the promising results achieved by recent learning-based methods. The commonly observed misalignment arises from two facts: the mapping from images to the model space is highly non-linear, and the rotation-based pose representation of body models is prone to joint position drift. In this work, we investigate learning 3D human shape and pose from dense correspondences of body parts and propose a Decompose-and-aggregate Network (DaNet) to address these issues. DaNet adopts dense correspondence maps, which densely build a bridge between 2D pixels and 3D vertices, as intermediate representations to facilitate the learning of the 2D-to-3D mapping. The prediction modules of DaNet are decomposed into one global stream and multiple local streams to enable global and fine-grained perception for the shape and pose predictions, respectively. Messages from the local streams are further aggregated to enhance the robust prediction of rotation-based poses, where a position-aided rotation feature refinement strategy is proposed to exploit spatial relationships between body joints. Moreover, a Part-based Dropout (PartDrop) strategy is introduced to drop out dense information from the intermediate representations during training, encouraging the network to focus on more complementary body parts as well as neighboring position features. The efficacy of the proposed method is validated on both indoor and real-world datasets including Human3.6M, UP3D, COCO, and 3DPW, showing that our method significantly improves reconstruction performance in comparison with previous state-of-the-art methods. Our code is publicly available at https://hongwenzhang.github.io/dense2mesh .
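The PartDrop strategy described above (zeroing out the dense features of randomly chosen body parts during training) can be illustrated with a minimal sketch. This is not the authors' implementation: the function name, the NumPy tensors, and the DensePose-style part-index map (0 = background, 1..24 = body parts) are all assumptions made for illustration.

```python
import numpy as np

def part_drop(features, part_index, drop_rate=0.3, num_parts=24, rng=None):
    """Minimal PartDrop sketch: zero the features of randomly chosen parts.

    features:   (C, H, W) intermediate dense-correspondence features.
    part_index: (H, W) integer map assigning each pixel to a body part
                (0 = background, 1..num_parts = parts), DensePose-style.
    drop_rate:  fraction of body parts to drop during training.
    """
    rng = rng or np.random.default_rng()
    n_drop = int(round(drop_rate * num_parts))
    # choose distinct part indices to drop (never the background, index 0)
    dropped = rng.choice(np.arange(1, num_parts + 1), size=n_drop, replace=False)
    keep_mask = ~np.isin(part_index, dropped)  # True for surviving pixels
    return features * keep_mask[None, :, :]
```

Dropping whole contiguous part regions, rather than individual pixels as in ordinary dropout, forces the network to infer a part's pose from the remaining, complementary parts.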

Results

| Task                      | Dataset | Metric   | Value | Model                 |
|---------------------------|---------|----------|-------|-----------------------|
| 3D Human Pose Estimation  | 3DPW    | MPJPE    | 85.5  | DaNet-DensePose2SMPL  |
| 3D Human Pose Estimation  | 3DPW    | PA-MPJPE | 54.8  | DaNet-DensePose2SMPL  |
| Pose Estimation           | 3DPW    | MPJPE    | 85.5  | DaNet-DensePose2SMPL  |
| Pose Estimation           | 3DPW    | PA-MPJPE | 54.8  | DaNet-DensePose2SMPL  |
| 3D                        | 3DPW    | MPJPE    | 85.5  | DaNet-DensePose2SMPL  |
| 3D                        | 3DPW    | PA-MPJPE | 54.8  | DaNet-DensePose2SMPL  |
| 1 Image, 2*2 Stitchi      | 3DPW    | MPJPE    | 85.5  | DaNet-DensePose2SMPL  |
| 1 Image, 2*2 Stitchi      | 3DPW    | PA-MPJPE | 54.8  | DaNet-DensePose2SMPL  |
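MPJPE and PA-MPJPE in the table above are the standard 3D pose metrics: the mean Euclidean distance between predicted and ground-truth joints (in millimetres in practice), the latter after rigid Procrustes alignment. A sketch of how they are typically computed follows; this uses the standard orthogonal-Procrustes solution, not code from the paper.

```python
import numpy as np

def mpjpe(pred, gt):
    """Mean per-joint position error: average joint-wise Euclidean distance."""
    return np.linalg.norm(pred - gt, axis=-1).mean()

def pa_mpjpe(pred, gt):
    """MPJPE after Procrustes alignment (optimal rotation, scale, translation)."""
    mu_p, mu_g = pred.mean(axis=0), gt.mean(axis=0)
    p, g = pred - mu_p, gt - mu_g
    # optimal rotation via SVD of the cross-covariance matrix H = p^T g
    U, S, Vt = np.linalg.svd(p.T @ g)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:   # correct an improper rotation (reflection)
        Vt[-1] *= -1
        S[-1] *= -1
        R = Vt.T @ U.T
    scale = S.sum() / (p ** 2).sum()
    aligned = scale * p @ R.T + mu_g
    return mpjpe(aligned, gt)
```

Because PA-MPJPE factors out global rotation, scale, and translation, it isolates articulated-pose accuracy, which is why it is consistently lower than MPJPE (54.8 vs. 85.5 here).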

Related Papers

AutoPartGen: Autogressive 3D Part Generation and Discovery (2025-07-17)
SpatialTrackerV2: 3D Point Tracking Made Easy (2025-07-16)
BRUM: Robust 3D Vehicle Reconstruction from 360 Sparse Images (2025-07-16)
Towards Depth Foundation Model: Recent Trends in Vision-Based Depth Estimation (2025-07-15)
Binomial Self-Compensation: Mechanism and Suppression of Motion Error in Phase-Shifting Profilometry (2025-07-14)
An Efficient Approach for Muscle Segmentation and 3D Reconstruction Using Keypoint Tracking in MRI Scan (2025-07-11)
Review of Feed-forward 3D Reconstruction: From DUSt3R to VGGT (2025-07-11)
DreamGrasp: Zero-Shot 3D Multi-Object Reconstruction from Partial-View Images for Robotic Manipulation (2025-07-08)