Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Domain Generalization Using a Mixture of Multiple Latent Domains

Toshihiko Matsuura, Tatsuya Harada

2019-11-18 · Tasks: Domain Generalization, Clustering

Abstract

When domains, which represent underlying data distributions, vary between training and testing, deep neural networks suffer a drop in performance. Domain generalization improves performance on unseen target domains by training on multiple source domains. Conventional methods assume that the domain to which each training sample belongs is known. However, many datasets, such as those collected via web crawling, contain a mixture of multiple latent domains, in which the domain of each sample is unknown. This paper introduces domain generalization using a mixture of multiple latent domains as a novel and more realistic scenario, where we try to train a domain-generalized model without using domain labels. To address this scenario, we propose a method that iteratively divides samples into latent domains via clustering, and trains a domain-invariant feature extractor shared among the divided latent domains via adversarial learning. We assume that the latent domain of images is reflected in their style, and thus utilize style features for clustering. By using these features, our proposed method successfully discovers latent domains and achieves domain generalization even when domain labels are not given. Experiments show that our proposed method can train a domain-generalized model without using domain labels. Moreover, it outperforms conventional domain generalization methods, including those that utilize domain labels.
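The clustering step described in the abstract can be sketched as follows: extract per-sample style statistics (the paper's premise is that channel-wise statistics of shallow CNN feature maps reflect image style, and hence the latent domain) and cluster them into k pseudo-domains. This is a minimal NumPy sketch on synthetic data; the function names and the plain k-means are illustrative, and the paper's iterative reassignment and adversarial training of the shared feature extractor are omitted:

```python
import numpy as np

def style_features(feature_maps):
    """Per-channel mean and std of conv feature maps, shape (N, C, H, W).
    Style statistics like these are assumed to encode the latent domain."""
    mu = feature_maps.mean(axis=(2, 3))           # (N, C)
    sigma = feature_maps.std(axis=(2, 3))         # (N, C)
    return np.concatenate([mu, sigma], axis=1)    # (N, 2C)

def kmeans(X, k, iters=50, seed=0):
    """Plain k-means; returns a pseudo-domain label per sample."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# Synthetic stand-in for two latent domains with different styles:
# one "bright" group of feature maps and one "dark" group.
rng = np.random.default_rng(1)
bright = rng.normal(2.0, 0.5, size=(8, 4, 5, 5))
dark = rng.normal(-2.0, 0.5, size=(8, 4, 5, 5))
feats = style_features(np.concatenate([bright, dark]))
labels = kmeans(feats, k=2)
# Samples from the same synthetic "domain" end up with the same pseudo-label.
```

In the full method these pseudo-domain labels play the role that ground-truth domain labels play in conventional domain generalization: a domain discriminator is trained on them, and the shared feature extractor is trained adversarially (e.g. via gradient reversal) so that its features become invariant across the discovered domains.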

Results

Task | Dataset | Metric | Value | Model
Domain Adaptation | PACS | Average Accuracy | 81.83 | MMLD (ResNet-18, k=2)
Domain Adaptation | PACS | Average Accuracy | 74.38 | MMLD (AlexNet, k=3)
Domain Generalization | PACS | Average Accuracy | 81.83 | MMLD (ResNet-18, k=2)
Domain Generalization | PACS | Average Accuracy | 74.38 | MMLD (AlexNet, k=3)

Related Papers

Tri-Learn Graph Fusion Network for Attributed Graph Clustering (2025-07-18)
Simulate, Refocus and Ensemble: An Attention-Refocusing Scheme for Domain Generalization (2025-07-17)
GLAD: Generalizable Tuning for Vision-Language Models (2025-07-17)
MoTM: Towards a Foundation Model for Time Series Imputation based on Continuous Modeling (2025-07-17)
InstructFLIP: Exploring Unified Vision-Language Model for Face Anti-spoofing (2025-07-16)
Ranking Vectors Clustering: Theory and Applications (2025-07-16)
From Physics to Foundation Models: A Review of AI-Driven Quantitative Remote Sensing Inversion (2025-07-11)
Car Object Counting and Position Estimation via Extension of the CLIP-EBC Framework (2025-07-11)