



Global-Local Regularization Via Distributional Robustness

Hoang Phan, Trung Le, Trung Phung, Tuan Anh Bui, Nhat Ho, Dinh Phung

2022-03-01 · Adversarial Robustness · Domain Generalization · Semi-Supervised Image Classification · Domain Adaptation
Paper · PDF · Code (official)

Abstract

Despite superior performance in many situations, deep neural networks are often vulnerable to adversarial examples and distribution shifts, limiting model generalization ability in real-world applications. To alleviate these problems, recent approaches leverage distributional robustness optimization (DRO) to find the most challenging distribution and then minimize the loss function over this most challenging distribution. Despite achieving some improvements, these DRO approaches have some obvious limitations. First, they purely focus on local regularization to strengthen model robustness, missing a global regularization effect that is useful in many real-world applications (e.g., domain adaptation, domain generalization, and adversarial machine learning). Second, the loss functions in the existing DRO approaches operate only on the most challenging distribution and are hence decoupled from the original distribution, leading to restrictive modeling capability. In this paper, we propose a novel regularization technique that follows the vein of the Wasserstein-based DRO framework. Specifically, we define a particular joint distribution and a Wasserstein-based uncertainty set, allowing us to couple the original and most challenging distributions, thereby enhancing modeling capability and applying both local and global regularizations. Empirical studies on different learning problems demonstrate that our proposed approach significantly outperforms existing regularization approaches in various domains: semi-supervised learning, domain adaptation, domain generalization, and adversarial machine learning.
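For readers unfamiliar with the Wasserstein-based DRO setup the abstract refers to, the sketch below illustrates the general training pattern: an inner maximization finds a worst-case ("most challenging") perturbation of the training batch under a Wasserstein transport penalty, and the outer minimization trains on a combination of the original ("global") and worst-case ("local") losses. This is a minimal illustrative sketch in the spirit of standard Wasserstein DRO adversarial training, not the authors' GLOT-DR algorithm; the penalty weight `gamma`, mixing weight `alpha`, and the particular way the clean and worst-case losses are combined are assumptions made for illustration only.

```python
# Minimal Wasserstein-DRO-style training sketch (illustrative only;
# NOT the paper's GLOT-DR algorithm).
import torch
import torch.nn as nn
import torch.nn.functional as F


def worst_case_inputs(model, x, y, gamma=1.0, steps=5, lr=0.1):
    """Inner maximization: find perturbed inputs x_adv that maximize
    loss(x_adv) - gamma * ||x_adv - x||^2 (Wasserstein transport penalty)."""
    x_adv = x.clone().detach().requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x_adv), y)
        penalty = gamma * ((x_adv - x) ** 2).flatten(1).sum(dim=1).mean()
        obj = loss - penalty
        grad, = torch.autograd.grad(obj, x_adv)
        with torch.no_grad():
            x_adv += lr * grad  # gradient ascent on the penalized objective
        x_adv.requires_grad_(True)
    return x_adv.detach()


def dro_training_step(model, optimizer, x, y, gamma=1.0, alpha=0.5):
    """Outer minimization: combine the loss on the original batch ("global"
    term) with the loss on the worst-case batch ("local" robustness term).
    alpha is a hypothetical mixing knob used only in this sketch."""
    x_adv = worst_case_inputs(model, x, y, gamma=gamma)
    optimizer.zero_grad()
    clean_loss = F.cross_entropy(model(x), y)
    robust_loss = F.cross_entropy(model(x_adv), y)
    loss = alpha * clean_loss + (1 - alpha) * robust_loss
    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    # Toy usage on random data, just to show the call pattern.
    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
    opt = torch.optim.SGD(model.parameters(), lr=0.01)
    x = torch.randn(8, 3, 32, 32)
    y = torch.randint(0, 10, (8,))
    print(dro_training_step(model, opt, x, y))
```

In this reading, the worst-case loss plays the role of the local regularizer around each sample, while keeping the clean-batch loss in the objective preserves a global coupling to the original distribution; the paper's contribution is to formalize such a coupling via a joint distribution and a Wasserstein-based uncertainty set rather than a fixed mixing weight.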

Results

Task | Dataset | Metric | Value | Model
Domain Adaptation | Office-31 | Average Accuracy | 87.8 | GLOT-DR
Domain Adaptation | ImageCLEF-DA | Accuracy | 90.4 | GLOT-DR
Domain Adaptation | PACS | Average Accuracy | 73.5 | GLOT-DR
Domain Adaptation | CIFAR-100C | Accuracy | 58.4 | GLOT-DR
Domain Adaptation | CIFAR-10C | Accuracy | 84.5 | GLOT-DR
Image Classification | CIFAR-10, 4000 Labels | Percentage error | 10.6 | GLOT-DR
Adversarial Robustness | CIFAR-10 | Accuracy | 84.13 | GLOT-DR
Adversarial Robustness | CIFAR-10 | Attack: AutoAttack | 49.94 | GLOT-DR
Semi-Supervised Image Classification | CIFAR-10, 4000 Labels | Percentage error | 10.6 | GLOT-DR
Domain Generalization | PACS | Average Accuracy | 73.5 | GLOT-DR
Domain Generalization | CIFAR-100C | Accuracy | 58.4 | GLOT-DR
Domain Generalization | CIFAR-10C | Accuracy | 84.5 | GLOT-DR

Related Papers

Simulate, Refocus and Ensemble: An Attention-Refocusing Scheme for Domain Generalization (2025-07-17)
GLAD: Generalizable Tuning for Vision-Language Models (2025-07-17)
MoTM: Towards a Foundation Model for Time Series Imputation based on Continuous Modeling (2025-07-17)
A Privacy-Preserving Semantic-Segmentation Method Using Domain-Adaptation Technique (2025-07-17)
InstructFLIP: Exploring Unified Vision-Language Model for Face Anti-spoofing (2025-07-16)
Bridging Robustness and Generalization Against Word Substitution Attacks in NLP via the Growth Bound Matrix Approach (2025-07-14)
Domain Borders Are There to Be Crossed With Federated Few-Shot Adaptation (2025-07-14)
From Physics to Foundation Models: A Review of AI-Driven Quantitative Remote Sensing Inversion (2025-07-11)