Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Prompt-based Distribution Alignment for Unsupervised Domain Adaptation

Shuanghao Bai, Min Zhang, Wanqi Zhou, Siteng Huang, Zhirong Luan, Donglin Wang, Badong Chen

2023-12-15 · Prompt Engineering · Unsupervised Domain Adaptation · Domain Adaptation
Paper · PDF · Code (official)

Abstract

Recently, despite the unprecedented success of large pre-trained visual-language models (VLMs) on a wide range of downstream tasks, the real-world unsupervised domain adaptation (UDA) problem is still not well explored. Therefore, in this paper, we first experimentally demonstrate that the unsupervised-trained VLMs can significantly reduce the distribution discrepancy between source and target domains, thereby improving the performance of UDA. However, a major challenge for directly deploying such models on downstream UDA tasks is prompt engineering, which requires aligning the domain knowledge of source and target domains, since the performance of UDA is severely influenced by a good domain-invariant representation. We further propose a Prompt-based Distribution Alignment (PDA) method to incorporate the domain knowledge into prompt learning. Specifically, PDA employs a two-branch prompt-tuning paradigm, namely base branch and alignment branch. The base branch focuses on integrating class-related representation into prompts, ensuring discrimination among different classes. To further minimize domain discrepancy, for the alignment branch, we construct feature banks for both the source and target domains and propose image-guided feature tuning (IFT) to make the input attend to feature banks, which effectively integrates self-enhanced and cross-domain features into the model. In this way, these two branches can be mutually promoted to enhance the adaptation of VLMs for UDA. We conduct extensive experiments on three benchmarks to demonstrate that our proposed PDA achieves state-of-the-art performance. The code is available at https://github.com/BaiShuanghao/Prompt-based-Distribution-Alignment.
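The image-guided feature tuning (IFT) described in the abstract — making an input feature attend to source and target feature banks — can be illustrated with a minimal sketch. This is not the authors' implementation; the function names, the scaled dot-product attention form, and the mixing weight `alpha` are assumptions for illustration only:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def image_guided_tuning(img_feat, source_bank, target_bank, alpha=0.5):
    """Sketch of one IFT-like step (hypothetical simplification).

    img_feat:    (d,)  feature of the input image
    source_bank: (n, d) stored source-domain features
    target_bank: (m, d) stored target-domain features
    alpha:       mixing weight between self and bank features (assumed)
    """
    # Attend over the concatenated source/target feature banks.
    bank = np.concatenate([source_bank, target_bank], axis=0)   # (n+m, d)
    attn = softmax(bank @ img_feat / np.sqrt(img_feat.size))    # (n+m,)
    bank_feat = attn @ bank                                     # (d,)
    # Blend the self feature with the cross-domain bank feature.
    return (1 - alpha) * img_feat + alpha * bank_feat
```

The blended output carries both self-enhanced and cross-domain information, which is the role the abstract assigns to the alignment branch; the base branch would separately keep prompts class-discriminative.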

Results

| Task | Dataset | Metric | Value | Model |
| --- | --- | --- | --- | --- |
| Domain Adaptation | VisDA2017 | Accuracy | 89.7 | PDA (CLIP, ViT-B/16) |
| Domain Adaptation | VisDA2017 | Accuracy | 86.4 | PDA (CLIP, ResNet-101) |
| Domain Adaptation | Office-Home | Accuracy | 85.7 | PDA (CLIP, ViT-B/16) |
| Domain Adaptation | Office-Home | Accuracy | 75.3 | PDA (CLIP, ResNet-50) |
| Domain Adaptation | Office-31 | Accuracy | 91.2 | PDA (CLIP, ViT-B/16) |
| Unsupervised Domain Adaptation | VisDA2017 | Accuracy | 89.7 | PDA (CLIP, ViT-B/16) |
| Unsupervised Domain Adaptation | VisDA2017 | Accuracy | 86.4 | PDA (CLIP, ResNet-101) |
| Unsupervised Domain Adaptation | Office-Home | Accuracy | 85.7 | PDA (CLIP, ViT-B/16) |
| Unsupervised Domain Adaptation | Office-Home | Accuracy | 75.3 | PDA (CLIP, ResNet-50) |
| Unsupervised Domain Adaptation | Office-31 | Accuracy | 91.2 | PDA (CLIP, ViT-B/16) |

Related Papers

Leveraging Language Prior for Infrared Small Target Detection (2025-07-17)
Emotional Support with LLM-based Empathetic Dialogue Generation (2025-07-17)
A Privacy-Preserving Semantic-Segmentation Method Using Domain-Adaptation Technique (2025-07-17)
Domain Borders Are There to Be Crossed With Federated Few-Shot Adaptation (2025-07-14)
Prompt Engineering in Segment Anything Model: Methodologies, Applications, and Emerging Challenges (2025-07-13)
An Offline Mobile Conversational Agent for Mental Health Support: Learning from Emotional Dialogues and Psychological Texts with Student-Centered Evaluation (2025-07-11)
The Bayesian Approach to Continual Learning: An Overview (2025-07-11)
Doodle Your Keypoints: Sketch-Based Few-Shot Keypoint Detection (2025-07-10)