Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


A High-Accuracy Unsupervised Person Re-identification Method Using Auxiliary Information Mined from Datasets

Hehan Teng, Tao He, Yuchen Guo, Guiguang Ding

2022-05-06 · Person Re-Identification · Unsupervised Person Re-Identification · STS

Abstract

Supervised person re-identification methods rely heavily on high-quality cross-camera training labels, which significantly hinders the deployment of re-ID models in real-world applications. Unsupervised person re-ID methods reduce the cost of data annotation, but their performance still falls far below that of supervised ones. In this paper, we make full use of auxiliary information mined from the datasets for multi-modal feature learning, including camera information, temporal information, and spatial information. By analyzing the style bias of cameras, the characteristics of pedestrians' motion trajectories, and the topology of the camera network, we design three modules to exploit this auxiliary information: Time-Overlapping Constraint (TOC), Spatio-Temporal Similarity (STS), and Same-Camera Penalty (SCP). The auxiliary information improves model performance and inference accuracy by constructing association constraints or by fusing with visual features. In addition, we propose three effective training tricks: Restricted Label Smoothing Cross Entropy Loss (RLSCE), Weight Adaptive Triplet Loss (WATL), and Dynamic Training Iterations (DTI). These tricks alone achieve mAP of 72.4% and 81.1% on MARS and DukeMTMC-VideoReID, respectively. Combined with the auxiliary-information modules, our method achieves mAP of 89.9% on DukeMTMC-VideoReID, where TOC, STS, and SCP all contribute considerable performance improvements. The proposed method outperforms most existing unsupervised re-ID methods and narrows the gap between unsupervised and supervised re-ID. Our code is at https://github.com/tenghehan/AuxUSLReID.
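The three auxiliary-information modules described in the abstract can be pictured as adjustments to a pairwise visual similarity matrix between tracklets. The sketch below is a hypothetical illustration of that idea, not the paper's implementation: the `scp_penalty` value, the additive form of SCP, and the multiplicative STS fusion are all assumptions made for the example.

```python
import numpy as np

def adjust_similarity(vis_sim, cam_ids, t_start, t_end,
                      st_sim=None, scp_penalty=0.1):
    """Illustrative (hypothetical) use of auxiliary information to
    adjust an (N, N) visual similarity matrix in [0, 1].

    cam_ids        : (N,) camera id of each tracklet
    t_start, t_end : (N,) start/end timestamps of each tracklet
    st_sim         : optional (N, N) spatio-temporal plausibility in [0, 1]
    """
    sim = np.asarray(vis_sim, dtype=float).copy()
    cam_ids = np.asarray(cam_ids)
    t_start, t_end = np.asarray(t_start), np.asarray(t_end)

    # TOC: the same person cannot appear in two different cameras at the
    # same time, so time-overlapping cross-camera pairs are forced to zero.
    overlap = (t_start[:, None] <= t_end[None, :]) & \
              (t_start[None, :] <= t_end[:, None])
    diff_cam = cam_ids[:, None] != cam_ids[None, :]
    sim[overlap & diff_cam] = 0.0

    # SCP: penalize same-camera pairs so clustering is not dominated by
    # per-camera style bias (additive penalty assumed for this sketch).
    sim[~diff_cam] -= scp_penalty

    # STS: fuse with spatio-temporal similarity (multiplicative fusion
    # assumed for this sketch).
    if st_sim is not None:
        sim = sim * np.asarray(st_sim, dtype=float)

    np.fill_diagonal(sim, 1.0)
    return np.clip(sim, 0.0, 1.0)
```

In this toy form, TOC acts as a hard constraint while SCP and STS are soft re-weightings, matching the abstract's distinction between "association constraints" and "fusing with visual features".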

Results

Task                      | Dataset            | Metric | Value | Model
Person Re-Identification  | MARS               | mAP    | 72.4  | AuxUSLReID
Person Re-Identification  | MARS               | Rank-1 | 80.9  | AuxUSLReID
Person Re-Identification  | DukeMTMC-VideoReID | mAP    | 89.9  | AuxUSLReID
Person Re-Identification  | DukeMTMC-VideoReID | Rank-1 | 91.9  | AuxUSLReID
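The two metrics in the table are standard in re-ID evaluation: Rank-1 asks whether the top-ranked gallery item matches the query identity, and mAP is the mean over queries of Average Precision on the ranked gallery. A minimal sketch of the per-query computation (standard definitions, not code from the paper's repository):

```python
import numpy as np

def rank1_and_ap(ranked_ids, query_id):
    """Rank-1 accuracy and Average Precision for one query, given
    gallery identities sorted by descending similarity to the query.
    mAP is the mean of AP over all queries.
    """
    matches = np.asarray(ranked_ids) == query_id
    rank1 = float(matches[0])
    if not matches.any():
        return rank1, 0.0
    hits = np.cumsum(matches)
    # precision at each position where a correct match occurs
    precision_at_hit = hits[matches] / (np.flatnonzero(matches) + 1)
    return rank1, float(precision_at_hit.mean())
```

For example, a ranking with correct matches at positions 1 and 3 of three gallery items yields Rank-1 = 1.0 and AP = (1/1 + 2/3) / 2 ≈ 0.833.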

Related Papers

- Weakly Supervised Visible-Infrared Person Re-Identification via Heterogeneous Expert Collaborative Consistency Learning (2025-07-17)
- WhoFi: Deep Person Re-Identification via Wi-Fi Channel Signal Encoding (2025-07-17)
- Try Harder: Hard Sample Generation and Learning for Clothes-Changing Person Re-ID (2025-07-15)
- Mind the Gap: Bridging Occlusion in Gait Recognition via Residual Gap Correction (2025-07-15)
- KeyRe-ID: Keypoint-Guided Person Re-Identification using Part-Aware Representation in Videos (2025-07-10)
- CORE-ReID V2: Advancing the Domain Adaptation for Object Re-Identification with Optimized Training and Ensemble Fusion (2025-07-04)
- Following the Clues: Experiments on Person Re-ID using Cross-Modal Intelligence (2025-07-02)
- DALR: Dual-level Alignment Learning for Multimodal Sentence Representation Learning (2025-06-26)