Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Improving out-of-distribution generalization via multi-task self-supervised pretraining

Isabela Albuquerque, Nikhil Naik, Junnan Li, Nitish Keskar, Richard Socher

2020-03-30

Tasks: Adversarial Robustness · Few-Shot Learning · Self-Supervised Learning · Domain Generalization · Multi-Task Learning · Out-of-Distribution Generalization

Paper · PDF

Abstract

Self-supervised feature representations have been shown to be useful for supervised classification, few-shot learning, and adversarial robustness. We show that features obtained via self-supervised learning are comparable to, or better than, those obtained via supervised learning for domain generalization in computer vision. We introduce a new self-supervised pretext task, predicting responses to a bank of Gabor filters, and demonstrate that multi-task learning of compatible pretext tasks improves domain generalization performance compared to training each task alone. Features learned through self-supervision generalize better to unseen domains than their supervised counterparts when the domain shift between training and test distributions is large, and they also show better localization of objects of interest. Self-supervised feature representations can further be combined with other domain generalization methods to boost performance.
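The pretext task named in the abstract — predicting responses to a Gabor filter bank — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the kernel parameters, the number of orientations, and the choice of mean-absolute-response pooling as the regression target are all assumptions made here for clarity.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def gabor_kernel(size=15, sigma=3.0, theta=0.0, lam=6.0, gamma=0.5, psi=0.0):
    """Real-valued Gabor kernel (standard formulation):
    a sinusoidal carrier modulated by a Gaussian envelope."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)       # rotate coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * xr / lam + psi)
    return envelope * carrier

def gabor_targets(image, n_orientations=4):
    """Hypothetical pretext-task targets: mean absolute filter response
    per orientation, which a network would learn to regress from pixels."""
    targets = []
    for k in range(n_orientations):
        kern = gabor_kernel(theta=k * np.pi / n_orientations)
        # Valid cross-correlation: slide the kernel over the image
        windows = sliding_window_view(image, kern.shape)
        resp = np.tensordot(windows, kern, axes=([2, 3], [0, 1]))
        targets.append(np.abs(resp).mean())
    return np.array(targets)

# Toy usage: 4-dimensional regression target for one grayscale image
img = np.random.default_rng(0).random((32, 32))
t = gabor_targets(img)          # shape (4,), one value per orientation
```

In a multi-task setup like the one the paper describes, a head predicting these targets would be trained alongside other pretext heads (e.g. rotation prediction, DeepCluster assignments) on a shared backbone.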

Results

Task                  | Dataset | Metric           | Value | Model
Domain Adaptation     | PACS    | Average Accuracy | 69.32 | Rotation+Gabor+DeepCluster (AlexNet)
Domain Generalization | PACS    | Average Accuracy | 69.32 | Rotation+Gabor+DeepCluster (AlexNet)

Related Papers

GLAD: Generalizable Tuning for Vision-Language Models (2025-07-17)
A Semi-Supervised Learning Method for the Identification of Bad Exposures in Large Imaging Surveys (2025-07-17)
Simulate, Refocus and Ensemble: An Attention-Refocusing Scheme for Domain Generalization (2025-07-17)
MoTM: Towards a Foundation Model for Time Series Imputation based on Continuous Modeling (2025-07-17)
SGCL: Unifying Self-Supervised and Supervised Learning for Graph Recommendation (2025-07-17)
InstructFLIP: Exploring Unified Vision-Language Model for Face Anti-spoofing (2025-07-16)
Robust-Multi-Task Gradient Boosting (2025-07-15)
Bridging Robustness and Generalization Against Word Substitution Attacks in NLP via the Growth Bound Matrix Approach (2025-07-14)