Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


TransAdapter: Vision Transformer for Feature-Centric Unsupervised Domain Adaptation

A. Enes Doruk, Erhan Oztop, Hasan F. Ates

2024-12-05 · Unsupervised Domain Adaptation · Domain Adaptation
Paper · PDF · Code (official)

Abstract

Unsupervised Domain Adaptation (UDA) aims to utilize labeled data from a source domain to solve tasks in an unlabeled target domain, often hindered by significant domain gaps. Traditional CNN-based methods struggle to fully capture complex domain relationships, motivating the shift to vision transformers like the Swin Transformer, which excel in modeling both local and global dependencies. In this work, we propose a novel UDA approach leveraging the Swin Transformer with three key modules. A Graph Domain Discriminator enhances domain alignment by capturing inter-pixel correlations through graph convolutions and entropy-based attention differentiation. An Adaptive Double Attention module combines Windows and Shifted Windows attention with dynamic reweighting to align long-range and local features effectively. Finally, a Cross-Feature Transform modifies Swin Transformer blocks to improve generalization across domains. Extensive benchmarks confirm the state-of-the-art performance of our versatile method, which requires no task-specific alignment modules, establishing its adaptability to diverse applications.
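The Adaptive Double Attention module described above combines window attention with shifted-window attention and dynamically reweights the two streams. A minimal NumPy sketch of that idea is below; the function names, the scalar `gate` parameter, and the use of `np.roll` for the cyclic shift are illustrative assumptions, not the paper's implementation (which uses learned, entropy-guided reweighting inside Swin Transformer blocks).

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def window_attention(x, win):
    # x: (seq_len, dim); seq_len assumed divisible by win.
    # Self-attention applied independently within each
    # non-overlapping window of length `win`.
    n, d = x.shape
    out = np.zeros_like(x)
    for s in range(0, n, win):
        w = x[s:s + win]
        attn = softmax(w @ w.T / np.sqrt(d))
        out[s:s + win] = attn @ w
    return out

def adaptive_double_attention(x, win, shift, gate):
    # Hypothetical sketch: blend plain window attention (local)
    # with shifted-window attention (cross-window) via a scalar
    # gate in [0, 1] standing in for dynamic reweighting.
    local = window_attention(x, win)
    shifted = np.roll(window_attention(np.roll(x, -shift, axis=0), win),
                      shift, axis=0)
    return gate * local + (1.0 - gate) * shifted
```

With `gate = 1.0` the module reduces to plain window attention; intermediate values mix in cross-window context from the shifted partition, which is the mechanism the abstract credits with aligning long-range and local features.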

Results

| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Domain Adaptation | DomainNet | Accuracy | 53.7 | TransAdapter |
| Domain Adaptation | VisDA-2017 | Accuracy | 91.2 | TransAdapter |
| Domain Adaptation | VisDA2017 | Accuracy | 91.2 | TransAdapter |
| Domain Adaptation | Office-Home | Accuracy | 89.4 | TransAdapter-B |
| Unsupervised Domain Adaptation | DomainNet | Accuracy | 53.7 | TransAdapter |
| Unsupervised Domain Adaptation | VisDA-2017 | Accuracy | 91.2 | TransAdapter |
| Unsupervised Domain Adaptation | VisDA2017 | Accuracy | 91.2 | TransAdapter |
| Unsupervised Domain Adaptation | Office-Home | Accuracy | 89.4 | TransAdapter-B |

Related Papers

- A Privacy-Preserving Semantic-Segmentation Method Using Domain-Adaptation Technique (2025-07-17)
- Domain Borders Are There to Be Crossed With Federated Few-Shot Adaptation (2025-07-14)
- An Offline Mobile Conversational Agent for Mental Health Support: Learning from Emotional Dialogues and Psychological Texts with Student-Centered Evaluation (2025-07-11)
- The Bayesian Approach to Continual Learning: An Overview (2025-07-11)
- Doodle Your Keypoints: Sketch-Based Few-Shot Keypoint Detection (2025-07-10)
- YOLO-APD: Enhancing YOLOv8 for Robust Pedestrian Detection on Complex Road Geometries (2025-07-07)
- CORE-ReID V2: Advancing the Domain Adaptation for Object Re-Identification with Optimized Training and Ensemble Fusion (2025-07-04)
- Underwater Monocular Metric Depth Estimation: Real-World Benchmarks and Synthetic Fine-Tuning (2025-07-02)