Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Distributionally Robust Neural Networks for Group Shifts: On the Importance of Regularization for Worst-Case Generalization

Shiori Sagawa, Pang Wei Koh, Tatsunori B. Hashimoto, Percy Liang

2019-11-20 · Natural Language Inference · Domain Generalization · Stochastic Optimization · Out-of-Distribution Generalization
Paper · PDF · Code (2 official, 6 community implementations)

Abstract

Overparameterized neural networks can be highly accurate on average on an i.i.d. test set yet consistently fail on atypical groups of the data (e.g., by learning spurious correlations that hold on average but not in such groups). Distributionally robust optimization (DRO) allows us to learn models that instead minimize the worst-case training loss over a set of pre-defined groups. However, we find that naively applying group DRO to overparameterized neural networks fails: these models can perfectly fit the training data, and any model with vanishing average training loss also already has vanishing worst-case training loss. Instead, the poor worst-case performance arises from poor generalization on some groups. By coupling group DRO models with increased regularization---a stronger-than-typical L2 penalty or early stopping---we achieve substantially higher worst-group accuracies, with 10-40 percentage point improvements on a natural language inference task and two image tasks, while maintaining high average accuracies. Our results suggest that regularization is important for worst-group generalization in the overparameterized regime, even if it is not needed for average generalization. Finally, we introduce a stochastic optimization algorithm, with convergence guarantees, to efficiently train group DRO models.
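The abstract's core idea — maintain a distribution over pre-defined groups and upweight whichever groups currently incur high loss — can be sketched as an exponentiated-gradient update, which is the standard form of online group DRO. This is a minimal illustration, not the authors' released code; the function name `group_dro_step` and the step size `eta` are assumed names for this sketch.

```python
import numpy as np

def group_dro_step(group_losses, q, eta=0.01):
    """One exponentiated-gradient update of the group weights.

    group_losses: per-group average training losses at this step.
    q:            current distribution over groups (sums to 1).
    eta:          step size for the group-weight update (assumed name).
    Returns the robust (group-weighted) loss and the updated q.
    """
    q = q * np.exp(eta * group_losses)            # upweight high-loss groups
    q = q / q.sum()                               # renormalize to a distribution
    robust_loss = float(np.dot(q, group_losses))  # weighted loss to minimize
    return robust_loss, q
```

In a training loop, the model parameters would then take a gradient step on `robust_loss`; as the weights `q` concentrate on the worst-performing group, minimizing this weighted loss approaches minimizing the worst-case group loss, which is where the paper's stronger L2 penalty or early stopping matters for generalization.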

Results

Task                  | Dataset      | Metric           | Value | Model
----------------------|--------------|------------------|-------|--------------------------------
Domain Adaptation     | PACS         | Average Accuracy | 84.4  | GroupDRO (Resnet-50, DomainBed)
Domain Adaptation     | NICO Vehicle | Accuracy         | 77.61 | DRO (Resnet-18)
Domain Adaptation     | NICO Animal  | Accuracy         | 77.61 | DRO (Resnet-18)
Domain Generalization | PACS         | Average Accuracy | 84.4  | GroupDRO (Resnet-50, DomainBed)
Domain Generalization | NICO Vehicle | Accuracy         | 77.61 | DRO (Resnet-18)
Domain Generalization | NICO Animal  | Accuracy         | 77.61 | DRO (Resnet-18)

Related Papers

Simulate, Refocus and Ensemble: An Attention-Refocusing Scheme for Domain Generalization (2025-07-17)
GLAD: Generalizable Tuning for Vision-Language Models (2025-07-17)
MoTM: Towards a Foundation Model for Time Series Imputation based on Continuous Modeling (2025-07-17)
InstructFLIP: Exploring Unified Vision-Language Model for Face Anti-spoofing (2025-07-16)
LRCTI: A Large Language Model-Based Framework for Multi-Step Evidence Retrieval and Reasoning in Cyber Threat Intelligence Credibility Verification (2025-07-15)
From Physics to Foundation Models: A Review of AI-Driven Quantitative Remote Sensing Inversion (2025-07-11)
DS@GT at CheckThat! 2025: Evaluating Context and Tokenization Strategies for Numerical Fact Verification (2025-07-08)
Feed-Forward SceneDINO for Unsupervised Semantic Scene Completion (2025-07-08)