Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.

Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.

Unsupervised Domain Adaptation via Distilled Discriminative Clustering

Hui Tang, YaoWei Wang, Kui Jia

2023-02-23 · Unsupervised Domain Adaptation · Clustering · Domain Adaptation
Paper · PDF · Code (official)

Abstract

Unsupervised domain adaptation addresses the problem of classifying data in an unlabeled target domain, given labeled source-domain data that share a common label space but follow a different distribution. Most recent methods explicitly align feature distributions between the two domains. In contrast, motivated by the fundamental assumption underlying domain adaptability, we re-cast the domain adaptation problem as discriminative clustering of target data, given the strong privileged information provided by the closely related, labeled source data. Technically, we use clustering objectives based on a robust variant of entropy minimization that adaptively filters target data, a soft Fisher-like criterion, and additionally cluster ordering via centroid classification. To distill discriminative source information for target clustering, we propose to jointly train the network using parallel, supervised learning objectives over the labeled source data. We term our method of distilled discriminative clustering for domain adaptation DisClusterDA. We also give geometric intuition illustrating how the constituent objectives of DisClusterDA help learn class-wise pure, compact feature distributions. We conduct careful ablation studies and extensive experiments on five popular benchmark datasets, including a multi-source domain adaptation one. Based on commonly used backbone networks, DisClusterDA outperforms existing methods on these benchmarks. Interestingly, within our DisClusterDA framework, adding a loss term that explicitly aligns class-level feature distributions across domains actually harms adaptation performance, though more careful studies in different algorithmic frameworks remain to be conducted.
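The abstract combines a supervised loss on labeled source data with a robust entropy-minimization objective that adaptively filters ambiguous target samples. The following is a minimal NumPy sketch of that idea, not the authors' implementation: the filtering rule (drop samples whose prediction entropy exceeds a fixed fraction of the maximum entropy log K) and the `max_entropy_ratio` parameter are simplifying assumptions for illustration.

```python
import numpy as np

def softmax(z, axis=-1):
    # numerically stable softmax over logits
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def entropy(p, eps=1e-12):
    # Shannon entropy of each row of class probabilities
    return -(p * np.log(p + eps)).sum(axis=-1)

def robust_entropy_loss(target_logits, max_entropy_ratio=0.5):
    """Entropy minimization over unlabeled target predictions,
    adaptively keeping only samples whose entropy is below a
    fraction of the maximum possible entropy log(K).
    (Illustrative filtering rule, not the paper's exact variant.)"""
    p = softmax(target_logits)
    h = entropy(p)
    k = target_logits.shape[-1]
    keep = h <= max_entropy_ratio * np.log(k)
    if not keep.any():
        return 0.0  # all target samples too ambiguous this step
    return float(h[keep].mean())

def source_ce_loss(source_logits, labels, eps=1e-12):
    # standard supervised cross-entropy on labeled source data
    p = softmax(source_logits)
    return float(-np.log(p[np.arange(len(labels)), labels] + eps).mean())
```

In a joint-training setup the two terms would be summed per mini-batch, e.g. `total = source_ce_loss(src_logits, src_labels) + lam * robust_entropy_loss(tgt_logits)`, so that source supervision regularizes the discriminative clustering of target data.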

Results

Task | Dataset | Metric | Value | Model
Domain Adaptation | VisDA2017 | Average Accuracy | 87.1 | DisClusterDA
Unsupervised Domain Adaptation | VisDA2017 | Average Accuracy | 87.1 | DisClusterDA

Related Papers

Tri-Learn Graph Fusion Network for Attributed Graph Clustering (2025-07-18)
A Privacy-Preserving Semantic-Segmentation Method Using Domain-Adaptation Technique (2025-07-17)
Ranking Vectors Clustering: Theory and Applications (2025-07-16)
Domain Borders Are There to Be Crossed With Federated Few-Shot Adaptation (2025-07-14)
Car Object Counting and Position Estimation via Extension of the CLIP-EBC Framework (2025-07-11)
An Offline Mobile Conversational Agent for Mental Health Support: Learning from Emotional Dialogues and Psychological Texts with Student-Centered Evaluation (2025-07-11)
The Bayesian Approach to Continual Learning: An Overview (2025-07-11)
Doodle Your Keypoints: Sketch-Based Few-Shot Keypoint Detection (2025-07-10)