Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Automated Domain Discovery from Multiple Sources to Improve Zero-Shot Generalization

Kowshik Thopalli, Sameeksha Katoch, Pavan Turaga, Jayaraman J. Thiagarajan

Published: 2021-12-17 · Tasks: Zero-Shot Generalization, Domain Generalization

Paper · PDF · Code (official)

Abstract

Domain generalization (DG) methods aim to develop models that generalize to settings where the test distribution is different from the training data. In this paper, we focus on the challenging problem of multi-source zero-shot DG (MDG), where labeled training data from multiple source domains is available but with no access to data from the target domain. A wide range of solutions have been proposed for this problem, including state-of-the-art multi-domain ensembling approaches. Despite these advances, the naïve ERM solution of pooling all source data together and training a single classifier is surprisingly effective on standard benchmarks. In this paper, we hypothesize that it is important to elucidate the link between pre-specified domain labels and MDG performance in order to explain this behavior. More specifically, we consider two popular classes of MDG algorithms -- distributionally robust optimization (DRO) and multi-domain ensembles -- in order to demonstrate how inferring custom domain groups can lead to consistent improvements over the original domain labels that come with the dataset. To this end, we propose (i) Group-DRO++, which incorporates an explicit clustering step to identify custom domains in an existing DRO technique; and (ii) DReaME, which produces effective multi-domain ensembles through implicit domain re-labeling with a novel meta-optimization algorithm. Using empirical studies on multiple standard benchmarks, we show that our variants consistently outperform ERM by significant margins (1.5% - 9%), and produce state-of-the-art MDG performance. Our code can be found at https://github.com/kowshikthopalli/DREAME
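The two ingredients the abstract names for Group-DRO++ -- clustering source samples into custom domain groups, then applying a group-DRO style reweighting -- can be sketched as follows. This is a minimal illustration, not the authors' implementation: the toy 1-D k-means over per-sample statistics and the `eta` step size are assumptions, and real group-DRO operates on per-group losses from a trained model.

```python
import math
import random

def kmeans_1d(values, k, iters=20, seed=0):
    """Toy 1-D k-means: infer 'custom domain' groups from a scalar
    per-sample statistic. Hypothetical stand-in for the explicit
    clustering step described for Group-DRO++."""
    rng = random.Random(seed)
    centers = rng.sample(values, k)
    for _ in range(iters):
        # assign each value to its nearest center
        assign = [min(range(k), key=lambda c: abs(v - centers[c]))
                  for v in values]
        # recompute each center as the mean of its cluster
        for c in range(k):
            members = [v for v, a in zip(values, assign) if a == c]
            if members:
                centers[c] = sum(members) / len(members)
    return assign

def group_dro_step(q, group_losses, eta=0.1):
    """One exponentiated-gradient update of group weights q:
    groups with higher loss are up-weighted, then q is renormalized.
    Returns the new weights and the resulting worst-case-oriented loss."""
    q = [qi * math.exp(eta * li) for qi, li in zip(q, group_losses)]
    z = sum(q)
    q = [qi / z for qi in q]
    robust_loss = sum(qi * li for qi, li in zip(q, group_losses))
    return q, robust_loss
```

With inferred groups in place of the dataset's domain labels, training would alternate between computing per-group losses and calling `group_dro_step`, so the optimizer focuses on whichever inferred domain is currently hardest.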

Results

Task                  | Dataset        | Metric           | Value | Model
----------------------|----------------|------------------|-------|-------
Domain Adaptation     | PACS           | Average Accuracy | 87.35 | DREAME
Domain Adaptation     | Office-Home    | Average Accuracy | 69.76 | DREAME
Domain Adaptation     | VLCS           | Average Accuracy | 79.02 | DREAME
Domain Adaptation     | TerraIncognita | Average Accuracy | 48.66 | DREAME
Domain Generalization | PACS           | Average Accuracy | 87.35 | DREAME
Domain Generalization | Office-Home    | Average Accuracy | 69.76 | DREAME
Domain Generalization | VLCS           | Average Accuracy | 79.02 | DREAME
Domain Generalization | TerraIncognita | Average Accuracy | 48.66 | DREAME

Related Papers

- Simulate, Refocus and Ensemble: An Attention-Refocusing Scheme for Domain Generalization (2025-07-17)
- GLAD: Generalizable Tuning for Vision-Language Models (2025-07-17)
- MoTM: Towards a Foundation Model for Time Series Imputation based on Continuous Modeling (2025-07-17)
- SAMST: A Transformer framework based on SAM pseudo label filtering for remote sensing semi-supervised semantic segmentation (2025-07-16)
- InstructFLIP: Exploring Unified Vision-Language Model for Face Anti-spoofing (2025-07-16)
- Towards Depth Foundation Model: Recent Trends in Vision-Based Depth Estimation (2025-07-15)
- PoseLLM: Enhancing Language-Guided Human Pose Estimation with MLP Alignment (2025-07-12)
- From Physics to Foundation Models: A Review of AI-Driven Quantitative Remote Sensing Inversion (2025-07-11)