Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.

Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


SelfReg: Self-supervised Contrastive Regularization for Domain Generalization

Daehee Kim, Seunghyun Park, Jinkyu Kim, Jaekoo Lee

2021-04-20 · ICCV 2021
Tasks: Image to Sketch Recognition · Domain Generalization · Contrastive Learning · Single-Source Domain Generalization
Links: Paper · PDF · Code (official)

Abstract

In general, an experimental environment for deep learning assumes that the training and test datasets are sampled from the same distribution. However, in real-world situations, a difference between the two distributions, known as domain shift, may occur, and it becomes a major factor impeding the generalization performance of the model. The research field addressing this problem is called domain generalization; it alleviates the domain shift problem by extracting domain-invariant features explicitly or implicitly. In recent studies, contrastive learning-based domain generalization approaches have been proposed and achieved high performance. These approaches require sampling of negative data pairs, yet the performance of contrastive learning fundamentally depends on the quality and quantity of those negative pairs. To address this issue, we propose a new regularization method for domain generalization based on contrastive learning: self-supervised contrastive regularization (SelfReg). The proposed approach uses only positive data pairs, thus resolving the various problems caused by negative pair sampling. Moreover, we propose a class-specific domain perturbation layer (CDPL), which makes it possible to effectively apply mixup augmentation even when only positive data pairs are used. The experimental results show that the techniques incorporated in SelfReg contributed to the performance in a compatible manner. On the recent benchmark DomainBed, the proposed method shows performance comparable to conventional state-of-the-art alternatives. Code is available at https://github.com/dnap512/SelfReg.
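The core idea of the abstract, regularizing with positive pairs only, can be illustrated with a minimal NumPy sketch. This is a hypothetical helper (`selfreg_loss` is not from the authors' repository): for each sample it picks another sample of the same class as its positive pair and penalizes the feature distance, with a mixup-style interpolation standing in for the role that CDPL plays in the paper. The real method additionally applies losses at the logit level and learns the perturbation layer; none of that is reproduced here.

```python
import numpy as np

def selfreg_loss(feats, labels, rng=None):
    """Sketch of a positive-pair-only contrastive regularization.

    For each class, shuffle the samples of that class so every sample
    is paired with another same-class sample (a positive pair), then
    penalize the squared distance between each feature vector and a
    mixup of itself with its positive partner. No negative pairs are
    sampled at any point.
    """
    rng = rng or np.random.default_rng(0)
    feats = np.asarray(feats, dtype=float)
    labels = np.asarray(labels)
    total, count = 0.0, 0
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        if len(idx) < 2:
            continue  # a positive pair needs at least two samples
        perm = rng.permutation(idx)
        # Mixup between positive pairs (a stand-in for CDPL's role
        # of making mixup usable with positive pairs only).
        lam = rng.uniform(0.5, 1.0)
        mixed = lam * feats[idx] + (1.0 - lam) * feats[perm]
        total += np.mean((feats[idx] - mixed) ** 2)
        count += 1
    return total / max(count, 1)
```

As a sanity check of the design, when all same-class features are already identical the mixup reproduces each feature exactly and the loss is zero, so the penalty only activates when same-class representations disagree.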

Results

Task                                | Dataset | Metric           | Value | Model
Sketch                              | PACS    | Accuracy         | 33.71 | SelfReg (ResNet-18)
Domain Adaptation                   | PACS    | Average Accuracy | 83.62 | SelfReg (ResNet-50)
Domain Adaptation                   | PACS    | Accuracy         | 59.59 | SelfReg (ResNet-18)
Domain Generalization               | PACS    | Average Accuracy | 83.62 | SelfReg (ResNet-50)
Domain Generalization               | PACS    | Accuracy         | 59.59 | SelfReg (ResNet-18)
Sketch Recognition                  | PACS    | Accuracy         | 33.71 | SelfReg (ResNet-18)
Single-Source Domain Generalization | PACS    | Accuracy         | 59.59 | SelfReg (ResNet-18)

Related Papers

Simulate, Refocus and Ensemble: An Attention-Refocusing Scheme for Domain Generalization (2025-07-17)
GLAD: Generalizable Tuning for Vision-Language Models (2025-07-17)
MoTM: Towards a Foundation Model for Time Series Imputation based on Continuous Modeling (2025-07-17)
SemCSE: Semantic Contrastive Sentence Embeddings Using LLM-Generated Summaries For Scientific Abstracts (2025-07-17)
HapticCap: A Multimodal Dataset and Task for Understanding User Experience of Vibration Haptic Signals (2025-07-17)
Overview of the TalentCLEF 2025: Skill and Job Title Intelligence for Human Capital Management (2025-07-17)
SGCL: Unifying Self-Supervised and Supervised Learning for Graph Recommendation (2025-07-17)
InstructFLIP: Exploring Unified Vision-Language Model for Face Anti-spoofing (2025-07-16)