Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


3D Magic Mirror: Clothing Reconstruction from a Single Image via a Causal Perspective

Zhedong Zheng, Jiayin Zhu, Wei Ji, Yi Yang, Tat-Seng Chua

2022-04-27 · Self-Supervised Learning · 3D Reconstruction · Person Re-Identification · Single-View 3D Reconstruction

Paper · PDF · Code (official)

Abstract

This research aims to study a self-supervised 3D clothing reconstruction method, which recovers the geometric shape and texture of human clothing from a single image. Compared with existing methods, we observe that three primary challenges remain: (1) 3D ground-truth meshes of clothing are usually inaccessible due to annotation difficulty and time cost; (2) conventional template-based methods are limited in modeling non-rigid objects, e.g., handbags and dresses, which are common in fashion images; (3) inherent ambiguity compromises model training, such as the dilemma between a large shape with a distant camera and a small shape with a close camera. To address these limitations, we propose a causality-aware self-supervised learning method that adaptively reconstructs 3D non-rigid objects from 2D images without 3D annotations. In particular, to resolve the inherent ambiguity among four implicit variables, i.e., camera position, shape, texture, and illumination, we introduce an explainable structural causal map (SCM) to build our model. The proposed model structure follows the spirit of the causal map, explicitly considering the prior template in camera estimation and shape prediction. During optimization, the causality intervention tool, i.e., two expectation-maximization loops, is deeply embedded in our algorithm to (1) disentangle the four encoders and (2) refine the prior template. Extensive experiments on two 2D fashion benchmarks (ATR and Market-HQ) show that the proposed method yields high-fidelity 3D reconstruction. Furthermore, we verify the scalability of the proposed method on a fine-grained bird dataset, i.e., CUB. The code is available at https://github.com/layumi/3D-Magic-Mirror.
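The abstract describes an alternating scheme: two expectation-maximization loops that (1) estimate the four latent factors with separate encoders while the prior template is held fixed, and (2) refine the template from the predicted shapes. The following is a minimal numpy sketch of that alternation only; the linear "encoders", dimensions, and the momentum-style template update are illustrative assumptions, not the paper's actual implementation (which uses neural encoders and rendering losses).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the four encoders (camera, shape, texture,
# illumination). Real encoders are CNNs; random linear maps keep the
# sketch self-contained and runnable.
def make_encoder(dim_in, dim_out):
    W = rng.normal(size=(dim_out, dim_in)) * 0.01
    return lambda x: W @ x

encoders = {name: make_encoder(64, 8)
            for name in ("camera", "shape", "texture", "illumination")}

template = np.zeros(8)  # prior shape template, refined over training

def em_step(images, template):
    """One round of the two EM loops sketched in the abstract:
    E-step: hold the template fixed and estimate each image's latents
            with each encoder independently (disentanglement);
    M-step: hold the latents fixed and move the template toward the
            mean predicted shape (prior refinement)."""
    latents = [{k: enc(img) for k, enc in encoders.items()}
               for img in images]                              # E-step
    shapes = np.stack([z["shape"] for z in latents])
    new_template = 0.9 * template + 0.1 * shapes.mean(axis=0)  # M-step
    return latents, new_template

images = [rng.normal(size=64) for _ in range(4)]
for _ in range(5):
    latents, template = em_step(images, template)
```

The key design point carried over from the abstract is the separation of concerns: each encoder only ever sees the image, while the template is updated exclusively from the shape latents, so the four factors are not collapsed into one entangled representation.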

Results

| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Reconstruction | CUB-200-2011 | FID | 63.5 | 3D Magic Mirror |
| Reconstruction | ATR | FID | 66.8 | 3D Magic Mirror |
| Reconstruction | Market-HQ | FID | 46.7 | 3D Magic Mirror |
| Person Re-Identification | Market-1501 | Rank-1 | 95.43 | 3DMagicMirror (HRNet) |
| Person Re-Identification | Market-1501 | mAP | 88.54 | 3DMagicMirror (HRNet) |
| Person Re-Identification | Market-1501 | Rank-1 | 95.07 | 3DMagicMirror (ResNet-ibn) |
| Person Re-Identification | Market-1501 | mAP | 87.8 | 3DMagicMirror (ResNet-ibn) |
| 3D | CUB-200-2011 | FID | 63.5 | 3D Magic Mirror |
| 3D | ATR | FID | 66.8 | 3D Magic Mirror |
| 3D | Market-HQ | FID | 46.7 | 3D Magic Mirror |
| Single-View 3D Reconstruction | CUB-200-2011 | FID | 63.5 | 3D Magic Mirror |
| Single-View 3D Reconstruction | ATR | FID | 66.8 | 3D Magic Mirror |
| Single-View 3D Reconstruction | Market-HQ | FID | 46.7 | 3D Magic Mirror |
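Two kinds of metric appear above: FID (lower is better) compares the distribution of Inception features of rendered images against real ones, while Rank-1 and mAP (higher is better) score re-identification retrieval. Below is a minimal numpy sketch of the Rank-1/mAP computation only; cosine similarity as the ranking score is a common choice assumed here, and the function names are illustrative, not taken from the paper's evaluation code.

```python
import numpy as np

def reid_metrics(query_feats, query_ids, gallery_feats, gallery_ids):
    """Rank-1 accuracy and mean Average Precision for re-identification:
    rank gallery entries by cosine similarity to each query, then check
    whether the top match (Rank-1) and the ranked list as a whole (mAP)
    recover gallery images of the same identity."""
    gallery_ids = np.asarray(gallery_ids)
    q = query_feats / np.linalg.norm(query_feats, axis=1, keepdims=True)
    g = gallery_feats / np.linalg.norm(gallery_feats, axis=1, keepdims=True)
    sims = q @ g.T
    rank1_hits, aps = [], []
    for i, qid in enumerate(query_ids):
        order = np.argsort(-sims[i])              # best match first
        matches = gallery_ids[order] == qid
        rank1_hits.append(matches[0])
        # average precision over the positions of the true matches
        hits = np.cumsum(matches)
        precision = hits / (np.arange(len(matches)) + 1)
        aps.append((precision * matches).sum() / max(matches.sum(), 1))
    return float(np.mean(rank1_hits)), float(np.mean(aps))

# Toy usage: two queries whose gallery twins are trivially retrievable.
feats = np.array([[1.0, 0.0], [0.0, 1.0]])
ids = np.array([0, 1])
rank1, mean_ap = reid_metrics(feats, ids, feats, ids)  # both 1.0 here
```

Note that the standard Market-1501 protocol additionally excludes same-camera gallery entries for each query; that filtering step is omitted here for brevity.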

Related Papers

A Semi-Supervised Learning Method for the Identification of Bad Exposures in Large Imaging Surveys (2025-07-17)
AutoPartGen: Autogressive 3D Part Generation and Discovery (2025-07-17)
Weakly Supervised Visible-Infrared Person Re-Identification via Heterogeneous Expert Collaborative Consistency Learning (2025-07-17)
WhoFi: Deep Person Re-Identification via Wi-Fi Channel Signal Encoding (2025-07-17)
SpatialTrackerV2: 3D Point Tracking Made Easy (2025-07-16)
BRUM: Robust 3D Vehicle Reconstruction from 360 Sparse Images (2025-07-16)
Towards Depth Foundation Model: Recent Trends in Vision-Based Depth Estimation (2025-07-15)
Try Harder: Hard Sample Generation and Learning for Clothes-Changing Person Re-ID (2025-07-15)