Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


QT-DoG: Quantization-aware Training for Domain Generalization

Saqib Javed, Hieu Le, Mathieu Salzmann

2024-10-08 · Quantization · Model Compression · Domain Generalization

Paper · PDF · Code (official)

Abstract

Domain Generalization (DG) aims to train models that perform well not only on the training (source) domains but also on novel, unseen target data distributions. A key challenge in DG is preventing overfitting to source domains, which can be mitigated by finding flatter minima in the loss landscape. In this work, we propose Quantization-aware Training for Domain Generalization (QT-DoG) and demonstrate that weight quantization effectively leads to flatter minima in the loss landscape, thereby enhancing domain generalization. Unlike traditional quantization methods focused on model compression, QT-DoG exploits quantization as an implicit regularizer: it induces noise in the model weights, guiding the optimization process toward flatter minima that are less sensitive to perturbations and overfitting. We provide both theoretical insights and empirical evidence that quantization inherently encourages flatter minima, leading to better generalization across domains. Moreover, because quantization reduces model size, an ensemble of multiple quantized models yields higher accuracy than state-of-the-art DG approaches with no computational or memory overhead. Our extensive experiments demonstrate that QT-DoG generalizes across various datasets, architectures, and quantization algorithms, and can be combined with other DG methods, establishing its versatility and robustness.
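The core mechanism the abstract describes — quantization acting as bounded noise on the weights — can be illustrated with a minimal sketch of uniform symmetric fake-quantization. This is an assumption-laden toy in numpy, not the authors' implementation: it shows only the forward rounding step (in full quantization-aware training, gradients typically flow through the rounding via a straight-through estimator).

```python
import numpy as np

def fake_quantize(w, bits=8):
    """Uniform symmetric fake-quantization of a weight tensor.

    Maps weights onto a grid of at most 2**bits - 1 levels and back
    to floats, so the rounding error acts like bounded noise on the
    weights -- the implicit perturbation QT-DoG attributes flatter
    minima to.
    """
    qmax = 2 ** (bits - 1) - 1          # e.g. 127 for 8 bits
    scale = np.max(np.abs(w)) / qmax    # step size of the uniform grid
    if scale == 0:
        return w.copy()
    return np.clip(np.round(w / scale), -qmax, qmax) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=1000).astype(np.float32)
wq = fake_quantize(w, bits=8)

# The induced "noise" is bounded by half a quantization step,
# so minima that survive it must be flat at that scale.
step = np.max(np.abs(w)) / 127
assert np.max(np.abs(w - wq)) <= step / 2 + 1e-6
```

During training, applying `fake_quantize` to the weights on each forward pass perturbs the loss evaluation within a half-step band, which is the sense in which quantization behaves as an implicit regularizer.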

Results

Task                  | Dataset        | Metric           | Value | Model
Domain Adaptation     | PACS           | Average Accuracy | 90.7  | EoQ (ResNet-50)
Domain Adaptation     | PACS           | Average Accuracy | 87.89 | QT-DoG (ResNet-50)
Domain Adaptation     | TerraIncognita | Average Accuracy | 53.2  | EoQ (ResNet-50)
Domain Adaptation     | TerraIncognita | Average Accuracy | 50.8  | QT-DoG (ResNet-50)
Domain Generalization | PACS           | Average Accuracy | 90.7  | EoQ (ResNet-50)
Domain Generalization | PACS           | Average Accuracy | 87.89 | QT-DoG (ResNet-50)
Domain Generalization | TerraIncognita | Average Accuracy | 53.2  | EoQ (ResNet-50)
Domain Generalization | TerraIncognita | Average Accuracy | 50.8  | QT-DoG (ResNet-50)

Related Papers

- Efficient Deployment of Spiking Neural Networks on SpiNNaker2 for DVS Gesture Recognition Using Neuromorphic Intermediate Representation (2025-09-04)
- LINR-PCGC: Lossless Implicit Neural Representations for Point Cloud Geometry Compression (2025-07-21)
- An End-to-End DNN Inference Framework for the SpiNNaker2 Neuromorphic MPSoC (2025-07-18)
- Task-Specific Audio Coding for Machines: Machine-Learned Latent Features Are Codes for That Machine (2025-07-17)
- Angle Estimation of a Single Source with Massive Uniform Circular Arrays (2025-07-17)
- Simulate, Refocus and Ensemble: An Attention-Refocusing Scheme for Domain Generalization (2025-07-17)
- GLAD: Generalizable Tuning for Vision-Language Models (2025-07-17)
- MoTM: Towards a Foundation Model for Time Series Imputation based on Continuous Modeling (2025-07-17)