Yu Ding, Lei Wang, Bin Liang, Shuming Liang, Yang Wang, Fang Chen
Deep Neural Networks (DNNs) suffer from domain shift when the test dataset follows a distribution different from the training dataset. Domain generalization aims to tackle this issue by learning a model that can generalize to unseen domains. In this paper, we propose a new approach that explicitly removes domain-specific features for domain generalization. Following this approach, we propose a novel framework called Learning and Removing Domain-specific features for Generalization (LRDG) that learns a domain-invariant model by tactically removing domain-specific features from the input images. Specifically, we design one classifier per source domain to effectively learn that domain's domain-specific features. We then develop an encoder-decoder network to map each input image into a new image space where the learned domain-specific features are removed. Taking the images output by the encoder-decoder network as input, another classifier is designed to learn domain-invariant features for image classification. Extensive experiments demonstrate that our framework achieves superior performance compared with state-of-the-art methods.
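A minimal PyTorch sketch of the pipeline described in the abstract: per-source-domain classifiers that capture domain-specific features, an encoder-decoder that maps images into a new image space, and a final classifier trained on the mapped images. All module architectures, shapes, and the particular removal loss shown here are illustrative assumptions, not the authors' reference implementation.

```python
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    """Toy convolutional classifier standing in for a ResNet backbone (assumption)."""
    def __init__(self, num_classes):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, num_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

class EncoderDecoder(nn.Module):
    """Image-to-image network mapping inputs into a new image space."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)

num_classes, num_source_domains = 7, 3  # e.g. PACS: 7 classes, 3 source domains

# Step 1: one classifier per source domain, trained to capture domain-specific features
# (training loop omitted); they are then frozen.
domain_specific = nn.ModuleList(SmallCNN(num_classes) for _ in range(num_source_domains))
for clf in domain_specific:
    clf.requires_grad_(False)

# Step 2: encoder-decoder that removes the learned domain-specific features,
# followed by a classifier that learns domain-invariant features.
enc_dec = EncoderDecoder()
invariant_clf = SmallCNN(num_classes)

x = torch.rand(8, 3, 224, 224)          # a batch of source-domain images
y = torch.randint(0, num_classes, (8,))

x_new = enc_dec(x)                       # images mapped into the new image space
logits = invariant_clf(x_new)            # classification on domain-invariant features
cls_loss = nn.functional.cross_entropy(logits, y)

# One plausible way to discourage domain-specific cues in x_new: push each frozen
# domain-specific classifier toward a uniform (uninformative) prediction on x_new.
# This particular loss is an assumption for illustration only.
uniform = torch.full((8, num_classes), 1.0 / num_classes)
removal_loss = sum(
    nn.functional.kl_div(clf(x_new).log_softmax(dim=1), uniform, reduction="batchmean")
    for clf in domain_specific
)
total_loss = cls_loss + removal_loss
total_loss.backward()
```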
| Task | Dataset | Metric | Value (%) | Model |
|---|---|---|---|---|
| Domain Generalization | PACS | Average Accuracy | 88.63 | LRDG (ResNet-50) |
| Domain Generalization | Office-Home | Average Accuracy | 65.75 | LRDG (ResNet-18) |