Fengyi Shen, Akhil Gurram, Ahmet Faruk Tuna, Onay Urfalioglu, Alois Knoll
Due to the difficulty of obtaining ground-truth labels, learning from virtual-world datasets is of great interest for real-world applications like semantic segmentation. From a domain adaptation perspective, the key challenge is to learn a domain-agnostic representation of the inputs in order to benefit from virtual data. In this paper, we propose a novel trident-like architecture that enforces a shared feature encoder to satisfy confrontational source and target constraints simultaneously, thus learning a domain-invariant feature space. Moreover, we introduce a novel training pipeline that enables self-induced cross-domain data augmentation during the forward pass. This contributes to a further reduction of the domain gap. Combined with a self-training process, we obtain state-of-the-art results on benchmark datasets (e.g., GTA5-to-Cityscapes and Synthia-to-Cityscapes adaptation). Code and pre-trained models are available at https://github.com/HMRC-AEL/TridentAdapt
| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Semantic Segmentation (Domain Adaptation) | GTAV-to-Cityscapes Labels | mIoU | 53.3 | TridentAdapt |
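To make the trident idea concrete, below is a minimal PyTorch sketch of a shared-encoder architecture whose features feed three branches at once: a source-domain decoder, a target-domain decoder, and a segmentation head. The layer sizes and module names are illustrative assumptions for this sketch, not the authors' actual implementation (see the linked repository for that).

```python
# Hedged sketch: a trident-like network. One shared encoder must produce
# features that simultaneously satisfy a source branch, a target branch,
# and the segmentation task branch. All shapes here are assumptions.
import torch
import torch.nn as nn

class TridentLike(nn.Module):
    def __init__(self, num_classes=19):
        super().__init__()
        # Shared feature encoder used by all three branches.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )

        def decoder(out_ch):
            # Upsample shared features back to input resolution.
            return nn.Sequential(
                nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(32, out_ch, 4, stride=2, padding=1),
            )

        self.source_decoder = decoder(3)      # source-styled image branch
        self.target_decoder = decoder(3)      # target-styled image branch
        self.seg_head = decoder(num_classes)  # per-pixel class logits

    def forward(self, x):
        feat = self.encoder(x)  # the domain-invariant feature space
        return (self.source_decoder(feat),
                self.target_decoder(feat),
                self.seg_head(feat))

model = TridentLike()
x = torch.randn(1, 3, 64, 128)  # e.g. a (downscaled) source image
src_rec, tgt_rec, logits = model(x)
print(src_rec.shape, tgt_rec.shape, logits.shape)
```

In this reading, decoding a source image's features with the target branch yields a target-styled version of that image inside a single forward pass, which is one plausible way to realize the self-induced cross-domain data augmentation the abstract describes.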