Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Domain-aware Triplet loss in Domain Generalization

Kaiyu Guo, Brian Lovell

2023-03-01 · Metric Learning · Domain Generalization · Object Recognition · Clustering
Paper · PDF · Code (official)

Abstract

Despite much progress in object recognition driven by advances in deep learning, several factors still degrade the performance of deep learning models. Domain shift is one such factor, caused by discrepancies between the distributions of the training and testing data. In this paper, we focus on the problem of compact feature clustering in domain generalization to help optimize the embedding space learned from multi-domain data. We design a domain-aware triplet loss for domain generalization that helps the model not only cluster semantically similar features, but also disperse features arising from the same domain. Unlike previous methods that focus on distribution alignment, our algorithm is designed to disperse domain information in the embedding space. The basic idea is motivated by the assumption that embedding features can be clustered based on domain information, which we support both mathematically and empirically in this paper. In addition, during our exploration of feature clustering in domain generalization, we find that factors affecting the convergence of the metric learning loss are more important than the pre-defined domains. To address this, we apply two methods to normalize the embedding space, reducing the internal covariate shift of the embedding features. An ablation study demonstrates the effectiveness of our algorithm. Moreover, experiments on the benchmark datasets, including PACS, VLCS, and Office-Home, show that our method outperforms related methods that focus on domain discrepancy. In particular, our results with RegNetY-16GF are significantly better than state-of-the-art methods on these benchmarks. Our code will be released at https://github.com/workerbcd/DCT
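The abstract's core idea, pulling same-class features together while also pushing apart features that merely share a domain, can be sketched as a per-triplet objective. The NumPy sketch below is illustrative only: the margin values and the choice of a same-domain, different-class "domain negative" are our assumptions for exposition, not the authors' exact formulation (see their released code for the real implementation).

```python
import numpy as np

def domain_aware_triplet_loss(anchor, positive, negative, domain_negative,
                              margin=0.3, domain_margin=0.3):
    """Illustrative sketch of a domain-aware triplet objective.

    anchor, positive:  embeddings of the same class (any domain)
    negative:          embedding of a different class
    domain_negative:   embedding from the SAME domain as the anchor but a
                       different class; it is pushed away from the anchor
                       to disperse domain information in the embedding space.
    All margins are hypothetical choices, not values from the paper.
    """
    dist = lambda a, b: np.linalg.norm(a - b)
    # Standard triplet term: same-class pairs closer than
    # different-class pairs by at least `margin`.
    semantic = max(0.0, dist(anchor, positive) - dist(anchor, negative) + margin)
    # Domain-dispersion term: penalize a same-domain feature that sits
    # within `domain_margin` of the anchor.
    domain = max(0.0, domain_margin - dist(anchor, domain_negative))
    return semantic + domain
```

For well-separated embeddings both hinge terms vanish and the loss is zero; a domain negative that collapses onto the anchor contributes a positive dispersion term, which is the behavior the abstract describes as dispersing features arising from the same domain.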

Results

Task                   Dataset      Metric            Value  Model
Domain Adaptation      PACS         Average Accuracy  97.6   D-Triplet (RegNetY-16GF)
Domain Adaptation      PACS         Average Accuracy  87.3   D-Triplet (ResNet-50)
Domain Adaptation      Office-Home  Average Accuracy  82.6   D-Triplet (RegNetY-16GF)
Domain Adaptation      Office-Home  Average Accuracy  70.3   D-Triplet (ResNet-50)
Domain Adaptation      VLCS         Average Accuracy  82.9   D-Triplet (RegNetY-16GF)
Domain Adaptation      VLCS         Average Accuracy  79.3   D-Triplet (ResNet-50)
Domain Generalization  PACS         Average Accuracy  97.6   D-Triplet (RegNetY-16GF)
Domain Generalization  PACS         Average Accuracy  87.3   D-Triplet (ResNet-50)
Domain Generalization  Office-Home  Average Accuracy  82.6   D-Triplet (RegNetY-16GF)
Domain Generalization  Office-Home  Average Accuracy  70.3   D-Triplet (ResNet-50)
Domain Generalization  VLCS         Average Accuracy  82.9   D-Triplet (RegNetY-16GF)
Domain Generalization  VLCS         Average Accuracy  79.3   D-Triplet (ResNet-50)

Related Papers

Tri-Learn Graph Fusion Network for Attributed Graph Clustering (2025-07-18)
Unsupervised Ground Metric Learning (2025-07-17)
Simulate, Refocus and Ensemble: An Attention-Refocusing Scheme for Domain Generalization (2025-07-17)
GLAD: Generalizable Tuning for Vision-Language Models (2025-07-17)
MoTM: Towards a Foundation Model for Time Series Imputation based on Continuous Modeling (2025-07-17)
Are encoders able to learn landmarkers for warm-starting of Hyperparameter Optimization? (2025-07-16)
InstructFLIP: Exploring Unified Vision-Language Model for Face Anti-spoofing (2025-07-16)
Ranking Vectors Clustering: Theory and Applications (2025-07-16)