Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Style Normalization and Restitution for Generalizable Person Re-identification

Xin Jin, Cuiling Lan, Wen-Jun Zeng, Zhibo Chen, Li Zhang

Published: 2020-05-22 · CVPR 2020

Tasks: Disentanglement, Domain Generalization, Person Re-Identification, Generalizable Person Re-identification, Unsupervised Domain Adaptation, Domain Adaptation

Paper · PDF · Code (official)

Abstract

Existing fully-supervised person re-identification (ReID) methods usually suffer from poor generalization capability caused by domain gaps. The key to solving this problem lies in filtering out identity-irrelevant interference and learning domain-invariant person representations. In this paper, we aim to design a generalizable person ReID framework that trains a model on source domains yet is able to generalize and perform well on target domains. To achieve this goal, we propose a simple yet effective Style Normalization and Restitution (SNR) module. Specifically, we filter out style variations (e.g., illumination, color contrast) by Instance Normalization (IN). However, such a process inevitably removes discriminative information. We propose to distill the identity-relevant features from the removed information and restitute them to the network to ensure high discrimination. For better disentanglement, we enforce a dual causal loss constraint in SNR to encourage the separation of identity-relevant features and identity-irrelevant features. Extensive experiments demonstrate the strong generalization capability of our framework. Our models empowered by the SNR modules significantly outperform the state-of-the-art domain generalization approaches on multiple widely-used person ReID benchmarks, and also show superiority on unsupervised domain adaptation.
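The normalize-then-restitute idea in the abstract can be sketched numerically. The following is a minimal NumPy sketch, not the authors' implementation: it applies Instance Normalization per sample and channel, treats the removed residual as the "style" information, and splits it with a channel gate. The gate here is a fixed vector standing in for the learned attention the paper trains, and the names `instance_norm`, `snr_forward`, and `gate` are illustrative, not from the paper's code.

```python
import numpy as np

def instance_norm(x, eps=1e-5):
    """Instance Normalization: normalize each channel of each sample
    over its spatial dimensions. x has shape (N, C, H, W)."""
    mu = x.mean(axis=(2, 3), keepdims=True)
    var = x.var(axis=(2, 3), keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def snr_forward(x, gate):
    """Sketch of an SNR-style forward pass.

    IN removes style statistics; the residual r = x - IN(x) holds the
    discarded information. A per-channel gate splits r into an
    identity-relevant part (restituted to the feature) and an
    identity-irrelevant part (discarded). In the paper the gate is a
    learned attention and a dual causal loss separates the two parts;
    here the gate is just a fixed vector for illustration.
    """
    x_in = instance_norm(x)
    r = x - x_in                       # information removed by IN
    a = np.asarray(gate).reshape(1, -1, 1, 1)
    r_plus = a * r                     # identity-relevant residual
    r_minus = (1.0 - a) * r            # identity-irrelevant residual
    return x_in + r_plus, r_minus      # restituted feature, discarded part
```

By construction the two outputs decompose the input exactly: the restituted feature plus the discarded residual recovers `x`, which mirrors the paper's view of restitution as putting back only the useful portion of what IN stripped away.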

Results

Task                             Transfer            Metric   Value   Model
Domain Adaptation                Market → CUHK03     Rank-1   17.1    SNR
Domain Adaptation                Market → CUHK03     mAP      17.5    SNR
Domain Adaptation                Market → Duke       mAP      58.1    SNR
Domain Adaptation                Market → Duke       Rank-1   76.3    SNR
Domain Adaptation                CUHK03 → MSMT       Rank-1   22.0    SNR
Domain Adaptation                CUHK03 → MSMT       mAP      7.7     SNR
Domain Adaptation                CUHK03 → Market     Rank-1   77.8    SNR
Domain Adaptation                CUHK03 → Market     mAP      52.4    SNR
Domain Adaptation                Duke → Market       mAP      61.7    SNR
Domain Adaptation                Duke → Market       Rank-1   82.8    SNR
Unsupervised Domain Adaptation   Market → CUHK03     Rank-1   17.1    SNR
Unsupervised Domain Adaptation   Market → CUHK03     mAP      17.5    SNR
Unsupervised Domain Adaptation   Market → Duke       mAP      58.1    SNR
Unsupervised Domain Adaptation   Market → Duke       Rank-1   76.3    SNR
Unsupervised Domain Adaptation   CUHK03 → MSMT       Rank-1   22.0    SNR
Unsupervised Domain Adaptation   CUHK03 → MSMT       mAP      7.7     SNR
Unsupervised Domain Adaptation   CUHK03 → Market     Rank-1   77.8    SNR
Unsupervised Domain Adaptation   CUHK03 → Market     mAP      52.4    SNR
Unsupervised Domain Adaptation   Duke → Market       mAP      61.7    SNR
Unsupervised Domain Adaptation   Duke → Market       Rank-1   82.8    SNR

Related Papers

CSD-VAR: Content-Style Decomposition in Visual Autoregressive Models (2025-07-18)
Simulate, Refocus and Ensemble: An Attention-Refocusing Scheme for Domain Generalization (2025-07-17)
GLAD: Generalizable Tuning for Vision-Language Models (2025-07-17)
MoTM: Towards a Foundation Model for Time Series Imputation based on Continuous Modeling (2025-07-17)
Weakly Supervised Visible-Infrared Person Re-Identification via Heterogeneous Expert Collaborative Consistency Learning (2025-07-17)
WhoFi: Deep Person Re-Identification via Wi-Fi Channel Signal Encoding (2025-07-17)
A Privacy-Preserving Semantic-Segmentation Method Using Domain-Adaptation Technique (2025-07-17)
InstructFLIP: Exploring Unified Vision-Language Model for Face Anti-spoofing (2025-07-16)