Chen-Yu Lee, Tanmay Batra, Mohammad Haris Baig, Daniel Ulbricht
In this work, we connect two distinct concepts for unsupervised domain adaptation: feature distribution alignment between domains by utilizing the task-specific decision boundary and the Wasserstein metric. Our proposed sliced Wasserstein discrepancy (SWD) is designed to capture the natural notion of dissimilarity between the outputs of task-specific classifiers. It provides geometrically meaningful guidance for detecting target samples that lie far from the support of the source and enables efficient distribution alignment in an end-to-end trainable fashion. In the experiments, we validate the effectiveness and generality of our method on digit and sign recognition, image classification, semantic segmentation, and object detection.
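The core idea behind the sliced Wasserstein distance is that the 1-D Wasserstein distance has a closed form: project both distributions onto random 1-D directions, sort the projections, and compare them element-wise. Below is a minimal NumPy sketch of this computation between two batches of classifier outputs; the function name and parameters are illustrative, and the paper's actual SWD is implemented with differentiable framework ops inside a trainable adversarial setup, not NumPy.

```python
import numpy as np

def sliced_wasserstein_discrepancy(p1, p2, num_projections=128, seed=0):
    """Approximate the sliced Wasserstein discrepancy between two
    equal-sized batches of classifier outputs, shape (batch, num_classes).

    Illustrative sketch: each batch is projected onto random unit
    directions; the 1-D Wasserstein distance along each direction
    reduces to comparing sorted projections.
    """
    rng = np.random.default_rng(seed)
    dim = p1.shape[1]
    # Random projection directions, normalized to unit length.
    theta = rng.normal(size=(dim, num_projections))
    theta /= np.linalg.norm(theta, axis=0, keepdims=True)
    # Project each batch onto every direction and sort along the batch axis;
    # sorting aligns the two empirical 1-D distributions.
    proj1 = np.sort(p1 @ theta, axis=0)
    proj2 = np.sort(p2 @ theta, axis=0)
    # Mean squared distance between the sorted projections.
    return float(np.mean((proj1 - proj2) ** 2))
```

For example, two identical batches yield a discrepancy of zero, while batches drawn from different class distributions yield a positive value, which is the signal used to flag target samples outside the source support.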
| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Semantic Segmentation | SYNTHIA-to-Cityscapes | mIoU (13 classes) | 48.1 | SWD |
| Semantic Segmentation | GTAV-to-Cityscapes Labels | mIoU | 44.5 | SWD |
| Domain Adaptation | VisDA2017 | Accuracy | 76.4 | SWD |