Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Mitigate Domain Shift by Primary-Auxiliary Objectives Association for Generalizing Person ReID

Qilei Li, Shaogang Gong

2023-10-24 · Domain Generalization · Unsupervised Domain Adaptation · Saliency Detection

Abstract

While deep learning has significantly improved ReID model accuracy under the independent and identically distributed (IID) assumption, it has also become clear that such models degrade notably when applied to an unseen novel domain due to unpredictable/unknown domain shift. Contemporary domain generalization (DG) ReID models struggle to learn domain-invariant representations solely through training on an instance classification objective. We consider that a deep learning model is heavily influenced by, and therefore biased towards, domain-specific characteristics, e.g., background clutter, scale, and viewpoint variations, limiting the generalizability of the learned model, and we hypothesize that pedestrians themselves are domain-invariant owing to their shared structural characteristics. To make the ReID model less domain-specific by attending to these pedestrian cues, we introduce a method that guides model learning of the primary ReID instance classification objective with a concurrent auxiliary learning objective on weakly labeled pedestrian saliency detection. To resolve the conflicting optimization criteria in the model parameter space between the two learning objectives, we introduce a Primary-Auxiliary Objectives Association (PAOA) mechanism that calibrates the loss gradients of the auxiliary task towards those of the primary learning task. Benefiting from this harmonious multitask learning design, our model can be extended with the recent test-time training paradigm to form PAOA+, which performs on-the-fly optimization against the auxiliary objective in order to maximize the model's generalization capacity in the test target domain. Experiments demonstrate the superiority of the proposed PAOA model.
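The abstract does not spell out how the auxiliary gradients are calibrated. A minimal sketch of one common calibration scheme for conflicting multitask gradients (projecting out the component of the auxiliary gradient that opposes the primary gradient, in the spirit of gradient-surgery methods) illustrates the general idea; the function name and the projection rule are assumptions for illustration, not the exact PAOA formulation:

```python
import numpy as np

def calibrate_aux_gradient(g_aux, g_pri):
    """Illustrative gradient calibration: if the auxiliary gradient
    conflicts with the primary gradient (negative inner product),
    remove its component along the primary direction so the combined
    update never opposes the primary ReID objective.

    Note: a generic projection scheme for conflicting multitask
    gradients, not the exact PAOA mechanism from the paper."""
    dot = np.dot(g_aux, g_pri)
    if dot < 0:  # the two objectives pull in conflicting directions
        g_aux = g_aux - (dot / np.dot(g_pri, g_pri)) * g_pri
    return g_aux

# Toy example: a conflicting auxiliary gradient is projected so it
# no longer opposes the primary direction.
g_pri = np.array([1.0, 0.0])   # primary (ReID classification) gradient
g_aux = np.array([-1.0, 1.0])  # auxiliary (saliency) gradient, conflicting
g_cal = calibrate_aux_gradient(g_aux, g_pri)
combined = g_pri + g_cal       # joint update direction
```

After calibration the auxiliary gradient is orthogonal to (or aligned with) the primary one, so the joint update cannot degrade the primary objective to first order, which is the "harmonious multitask learning" property the abstract relies on.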

Results

Task                            | Dataset          | Metric | Value | Model
Domain Adaptation               | Market to CUHK03 | R1     | 50.9  | PAOA+
Domain Adaptation               | Market to CUHK03 | mAP    | 50.3  | PAOA+
Domain Adaptation               | CUHK03 to MSMT   | R1     | 52.8  | PAOA+
Domain Adaptation               | CUHK03 to MSMT   | mAP    | 26.0  | PAOA+
Domain Adaptation               | CUHK03 to Market | R1     | 91.4  | PAOA+
Domain Adaptation               | CUHK03 to Market | mAP    | 77.9  | PAOA+
Unsupervised Domain Adaptation  | Market to CUHK03 | R1     | 50.9  | PAOA+
Unsupervised Domain Adaptation  | Market to CUHK03 | mAP    | 50.3  | PAOA+
Unsupervised Domain Adaptation  | CUHK03 to MSMT   | R1     | 52.8  | PAOA+
Unsupervised Domain Adaptation  | CUHK03 to MSMT   | mAP    | 26.0  | PAOA+
Unsupervised Domain Adaptation  | CUHK03 to Market | R1     | 91.4  | PAOA+
Unsupervised Domain Adaptation  | CUHK03 to Market | mAP    | 77.9  | PAOA+

Related Papers

Simulate, Refocus and Ensemble: An Attention-Refocusing Scheme for Domain Generalization (2025-07-17)
GLAD: Generalizable Tuning for Vision-Language Models (2025-07-17)
MoTM: Towards a Foundation Model for Time Series Imputation based on Continuous Modeling (2025-07-17)
InstructFLIP: Exploring Unified Vision-Language Model for Face Anti-spoofing (2025-07-16)
From Physics to Foundation Models: A Review of AI-Driven Quantitative Remote Sensing Inversion (2025-07-11)
Feed-Forward SceneDINO for Unsupervised Semantic Scene Completion (2025-07-08)
Prompt-Free Conditional Diffusion for Multi-object Image Augmentation (2025-07-08)
Integrated Structural Prompt Learning for Vision-Language Models (2025-07-08)