Papers With Code 2


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Parameter-Efficient Person Re-identification in the 3D Space

Zhedong Zheng, Nenggan Zheng, Yi Yang

2020-06-08 · 3D geometry · Representation Learning · Person Re-Identification · Unsupervised Person Re-Identification · Unsupervised Domain Adaptation · 3D Point Cloud Classification · Point Cloud Classification

Paper · PDF · Code (official)

Abstract

People live in a 3D world. However, existing works on person re-identification (re-id) mostly consider semantic representation learning in a 2D space, intrinsically limiting the understanding of people. In this work, we address this limitation by exploring the prior knowledge of the 3D body structure. Specifically, we project 2D images to a 3D space and introduce a novel parameter-efficient Omni-scale Graph Network (OG-Net) to learn the pedestrian representation directly from 3D point clouds. OG-Net effectively exploits the local information provided by sparse 3D points and takes advantage of the structure and appearance information in a coherent manner. With the help of 3D geometry information, we can learn a new type of deep re-id feature free from noisy variants, such as scale and viewpoint. To our knowledge, this is among the first attempts to conduct person re-identification in the 3D space. We demonstrate through extensive experiments that the proposed method (1) eases the matching difficulty in the traditional 2D space, (2) exploits the complementary information of 2D appearance and 3D structure, (3) achieves competitive results with limited parameters on four large-scale person re-id datasets, and (4) has good scalability to unseen datasets. Our code, models and generated 3D human data are publicly available at https://github.com/layumi/person-reid-3d.
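To make the core idea concrete, here is a minimal, illustrative numpy sketch of the kind of local graph aggregation OG-Net performs over a sparse 3D point cloud: build a k-nearest-neighbor graph on the points, form edge features from each point and its relative neighbors, apply a shared weight matrix, and max-pool. This is a simplified stand-in, not the authors' implementation (see the linked repository for the real OG-Net); all function and variable names here are hypothetical.

```python
import numpy as np

def knn_graph(points, k):
    # Pairwise squared distances between the N points.
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)          # exclude self-loops
    return np.argsort(d2, axis=1)[:, :k]  # (N, k) neighbor indices

def edge_conv(feats, points, k, weight):
    """One graph-convolution step aggregating local neighborhoods,
    in the spirit of exploiting sparse 3D structure."""
    nbrs = knn_graph(points, k)                       # (N, k)
    center = np.repeat(feats[:, None, :], k, axis=1)  # (N, k, C)
    diff = feats[nbrs] - center                       # relative neighbor features
    edge = np.concatenate([center, diff], axis=-1)    # (N, k, 2C)
    out = np.maximum(edge @ weight, 0.0)              # shared linear layer + ReLU
    return out.max(axis=1)                            # max-pool over neighbors

rng = np.random.default_rng(0)
pts = rng.normal(size=(512, 3))      # toy "pedestrian" point cloud (xyz only)
feats = pts.copy()                   # initial per-point features: coordinates
W = rng.normal(size=(6, 32)) * 0.1   # shared weights mapping 2*3 -> 32 channels
h = edge_conv(feats, pts, k=8, weight=W)  # (512, 32) per-point features
descriptor = h.max(axis=0)           # (32,) global descriptor for re-id matching
```

A real model would stack several such layers at multiple neighborhood scales and append appearance features sampled from the 2D image, but the kNN-aggregate-pool pattern above is the basic building block.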

Results

| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Domain Adaptation | Duke to MSMT | mAP | 1.9 | OG-Net |
| Domain Adaptation | Duke to MSMT | Rank-1 | 6.8 | OG-Net |
| Domain Adaptation | Market to MSMT | mAP | 1.7 | OG-Net |
| Domain Adaptation | Market to MSMT | Rank-1 | 5.9 | OG-Net |
| Domain Adaptation | Market to Duke | mAP | 13.7 | OG-Net |
| Domain Adaptation | Market to Duke | Rank-1 | 26.4 | OG-Net |
| Domain Adaptation | Duke to Market | mAP | 14.7 | OG-Net |
| Domain Adaptation | Duke to Market | Rank-1 | 36.4 | OG-Net |
| Person Re-Identification | DukeMTMC-reID → Market-1501 | Rank-1 | 41.4 | OG-Net |
| Person Re-Identification | DukeMTMC-reID → Market-1501 | mAP | 17.2 | OG-Net |
| Person Re-Identification | Market-1501 → DukeMTMC-reID | Rank-1 | 31.3 | OG-Net |
| Person Re-Identification | Market-1501 → DukeMTMC-reID | mAP | 16.3 | OG-Net |
| Person Re-Identification | MSMT17 | Rank-1 | 47.71 | OG-Net |
| Person Re-Identification | MSMT17 | mAP | 23.01 | OG-Net |
| Person Re-Identification | Market-1501 | Rank-1 | 87.74 | OG-Net |
| Person Re-Identification | Market-1501 | mAP | 69.52 | OG-Net |
| Person Re-Identification | DukeMTMC-reID | Rank-1 | 76.66 | OG-Net |
| Person Re-Identification | DukeMTMC-reID | mAP | 57.89 | OG-Net |
| Person Re-Identification | MSMT17 → DukeMTMC-reID | Rank-1 | 35.3 | OG-Net |
| Person Re-Identification | MSMT17 → DukeMTMC-reID | mAP | 19.3 | OG-Net |
| Person Re-Identification | Market-1501 → DukeMTMC-reID | Rank-1 | 26.4 | OG-Net |
| Person Re-Identification | Market-1501 → DukeMTMC-reID | mAP | 13.7 | OG-Net |
| Person Re-Identification | DukeMTMC-reID → MSMT17 | Rank-1 | 6.8 | OG-Net |
| Person Re-Identification | DukeMTMC-reID → MSMT17 | mAP | 1.9 | OG-Net |
| Person Re-Identification | DukeMTMC-reID → Market-1501 | Rank-1 | 36.4 | OG-Net |
| Person Re-Identification | DukeMTMC-reID → Market-1501 | mAP | 14.7 | OG-Net |
| Person Re-Identification | Market-1501 → MSMT17 | Rank-1 | 5.9 | OG-Net |
| Person Re-Identification | Market-1501 → MSMT17 | mAP | 1.7 | OG-Net |
| Person Re-Identification | MSMT17 → Market-1501 | Rank-1 | 40.1 | OG-Net |
| Person Re-Identification | MSMT17 → Market-1501 | mAP | 17.6 | OG-Net |
| Shape Representation Of 3D Point Clouds | ModelNet40 | Mean Accuracy | 90.5 | OG-Net-Small |
| Shape Representation Of 3D Point Clouds | ModelNet40 | Overall Accuracy | 93.3 | OG-Net-Small |
| 3D Point Cloud Classification | ModelNet40 | Mean Accuracy | 90.5 | OG-Net-Small |
| 3D Point Cloud Classification | ModelNet40 | Overall Accuracy | 93.3 | OG-Net-Small |
| Unsupervised Domain Adaptation | Duke to MSMT | mAP | 1.9 | OG-Net |
| Unsupervised Domain Adaptation | Duke to MSMT | Rank-1 | 6.8 | OG-Net |
| Unsupervised Domain Adaptation | Market to MSMT | mAP | 1.7 | OG-Net |
| Unsupervised Domain Adaptation | Market to MSMT | Rank-1 | 5.9 | OG-Net |
| Unsupervised Domain Adaptation | Market to Duke | mAP | 13.7 | OG-Net |
| Unsupervised Domain Adaptation | Market to Duke | Rank-1 | 26.4 | OG-Net |
| Unsupervised Domain Adaptation | Duke to Market | mAP | 14.7 | OG-Net |
| Unsupervised Domain Adaptation | Duke to Market | Rank-1 | 36.4 | OG-Net |
| 3D Point Cloud Reconstruction | ModelNet40 | Mean Accuracy | 90.5 | OG-Net-Small |
| 3D Point Cloud Reconstruction | ModelNet40 | Overall Accuracy | 93.3 | OG-Net-Small |

Related Papers

Touch in the Wild: Learning Fine-Grained Manipulation with a Portable Visuo-Tactile Gripper (2025-07-20)
Spectral Bellman Method: Unifying Representation and Exploration in RL (2025-07-17)
Boosting Team Modeling through Tempo-Relational Representation Learning (2025-07-17)
Weakly Supervised Visible-Infrared Person Re-Identification via Heterogeneous Expert Collaborative Consistency Learning (2025-07-17)
WhoFi: Deep Person Re-Identification via Wi-Fi Channel Signal Encoding (2025-07-17)
Similarity-Guided Diffusion for Contrastive Sequential Recommendation (2025-07-16)
Are encoders able to learn landmarkers for warm-starting of Hyperparameter Optimization? (2025-07-16)
Language-Guided Contrastive Audio-Visual Masked Autoencoder with Automatically Generated Audio-Visual-Text Triplets from Videos (2025-07-16)