
Domain Generalization using Pretrained Models without Fine-tuning

Ziyue Li, Kan Ren, Xinyang Jiang, Bo Li, Haipeng Zhang, Dongsheng Li

2022-03-09 · Ensemble Learning · Domain Generalization

Abstract

Fine-tuning pretrained models is common practice in domain generalization (DG) tasks. However, fine-tuning is usually computationally expensive due to the ever-growing size of pretrained models. More importantly, it may cause over-fitting on the source domains and compromise generalization ability, as shown in recent works. In general, pretrained models possess some level of generalization ability and can achieve decent performance on specific domains and samples. However, the generalization performance of pretrained models can vary significantly across different test domains and even across samples, which makes it challenging to best leverage pretrained models in DG tasks. In this paper, we propose a novel domain generalization paradigm to better leverage various pretrained models, named specialized ensemble learning for domain generalization (SEDGE). It first trains a linear label-space adapter upon fixed pretrained models, which transforms the outputs of the pretrained models to the label space of the target domain. Then, an ensemble network aware of model specialty is proposed to dynamically dispatch the proper pretrained models to predict each test sample. Experimental studies on several benchmarks show that SEDGE achieves significant performance improvements compared to strong baselines, including the state-of-the-art methods in DG tasks, while reducing the trainable parameters by ~99% and the training time by ~99.5%.

Results

Task                  | Dataset        | Metric           | Value | Model
----------------------|----------------|------------------|-------|-------
Domain Adaptation     | PACS           | Average Accuracy | 96.1  | SEDGE+
Domain Adaptation     | PACS           | Average Accuracy | 84.1  | SEDGE
Domain Adaptation     | Office-Home    | Average Accuracy | 80.7  | SEDGE+
Domain Adaptation     | Office-Home    | Average Accuracy | 79.9  | SEDGE
Domain Adaptation     | DomainNet      | Average Accuracy | 54.7  | SEDGE+
Domain Adaptation     | DomainNet      | Average Accuracy | 46.3  | SEDGE
Domain Adaptation     | VLCS           | Average Accuracy | 82.2  | SEDGE+
Domain Adaptation     | VLCS           | Average Accuracy | 79.8  | SEDGE
Domain Adaptation     | TerraIncognita | Average Accuracy | 56.8  | SEDGE+
Domain Adaptation     | TerraIncognita | Average Accuracy | 56.8  | SEDGE
Domain Generalization | PACS           | Average Accuracy | 96.1  | SEDGE+
Domain Generalization | PACS           | Average Accuracy | 84.1  | SEDGE
Domain Generalization | Office-Home    | Average Accuracy | 80.7  | SEDGE+
Domain Generalization | Office-Home    | Average Accuracy | 79.9  | SEDGE
Domain Generalization | DomainNet      | Average Accuracy | 54.7  | SEDGE+
Domain Generalization | DomainNet      | Average Accuracy | 46.3  | SEDGE
Domain Generalization | VLCS           | Average Accuracy | 82.2  | SEDGE+
Domain Generalization | VLCS           | Average Accuracy | 79.8  | SEDGE
Domain Generalization | TerraIncognita | Average Accuracy | 56.8  | SEDGE+
Domain Generalization | TerraIncognita | Average Accuracy | 56.8  | SEDGE

Related Papers

Simulate, Refocus and Ensemble: An Attention-Refocusing Scheme for Domain Generalization (2025-07-17)
GLAD: Generalizable Tuning for Vision-Language Models (2025-07-17)
MoTM: Towards a Foundation Model for Time Series Imputation based on Continuous Modeling (2025-07-17)
InstructFLIP: Exploring Unified Vision-Language Model for Face Anti-spoofing (2025-07-16)
From Physics to Foundation Models: A Review of AI-Driven Quantitative Remote Sensing Inversion (2025-07-11)
Feed-Forward SceneDINO for Unsupervised Semantic Scene Completion (2025-07-08)
Prompt-Free Conditional Diffusion for Multi-object Image Augmentation (2025-07-08)
Integrated Structural Prompt Learning for Vision-Language Models (2025-07-08)