
Distilling Out-of-Distribution Robustness from Vision-Language Foundation Models

Andy Zhou, Jindong Wang, Yu-Xiong Wang, Haohan Wang

2023-11-02 · NeurIPS 2023 · Image Classification · Data Augmentation · Domain Generalization · Knowledge Distillation

Paper · PDF · Code (official)

Abstract

We propose a conceptually simple and lightweight framework for improving the robustness of vision models through the combination of knowledge distillation and data augmentation. We address the conjecture that larger models do not make for better teachers by showing strong gains in out-of-distribution robustness when distilling from pretrained foundation models. Following this finding, we propose Discrete Adversarial Distillation (DAD), which leverages a robust teacher to generate adversarial examples and a VQGAN to discretize them, creating more informative samples than standard data augmentation techniques. We provide a theoretical framework for the use of a robust teacher in the knowledge distillation with data augmentation setting and demonstrate strong gains in out-of-distribution robustness and clean accuracy across different student architectures. Notably, our method adds minor computational overhead compared to similar techniques and can be easily combined with other data augmentations for further improvements.
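The training step the abstract describes is straightforward to sketch. Below is a minimal, hypothetical PyTorch rendering of one DAD update, assuming a frozen robust `teacher` (a pretrained foundation model), a `student` being trained, and a `vqgan` object exposing `encode`/`decode` methods that quantize an image to discrete codes and back. The PGD-style attack, the loss weighting, and all hyperparameter values are illustrative placeholders, not the paper's exact recipe.

```python
# Sketch of one Discrete Adversarial Distillation (DAD) training step.
# `teacher`, `student`, and `vqgan` are assumed interfaces, not the
# authors' actual code: the teacher is frozen and robust, and
# vqgan.encode / vqgan.decode stand in for any VQGAN round trip.
import torch
import torch.nn.functional as F

def dad_step(student, teacher, vqgan, images, labels,
             eps=4/255, step=1/255, n_steps=3, alpha=0.5, tau=2.0):
    teacher.eval()

    # 1) Craft adversarial examples against the *teacher* (PGD sketch).
    adv = images.clone().detach()
    for _ in range(n_steps):
        adv.requires_grad_(True)
        loss = F.cross_entropy(teacher(adv), labels)
        grad = torch.autograd.grad(loss, adv)[0]
        adv = (adv + step * grad.sign()).detach()
        adv = images + (adv - images).clamp(-eps, eps)  # project to L-inf ball
        adv = adv.clamp(0, 1)

    # 2) Discretize the adversarial images with a VQGAN round trip,
    #    yielding more natural augmented samples than raw perturbations.
    with torch.no_grad():
        codes = vqgan.encode(adv)    # assumed interface
        adv_q = vqgan.decode(codes)  # assumed interface

    # 3) Distill: hard-label loss on clean images plus a soft KD loss
    #    matching the teacher's predictions on the discretized examples.
    with torch.no_grad():
        t_logits = teacher(adv_q)
    kd = F.kl_div(F.log_softmax(student(adv_q) / tau, dim=1),
                  F.softmax(t_logits / tau, dim=1),
                  reduction="batchmean") * tau * tau
    return (1 - alpha) * F.cross_entropy(student(images), labels) + alpha * kd
```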

Results

Task | Dataset | Metric | Value | Model
Domain Adaptation | ImageNet-R | Top-1 Error Rate (%) | 34.9 | Discrete Adversarial Distillation (ViT-B, 224)
Domain Adaptation | ImageNet-A | Top-1 Accuracy (%) | 31.8 | Discrete Adversarial Distillation (ViT-B, 224)
Domain Adaptation | ImageNet-A | Top-1 Accuracy (%) | 7.7 | Discrete Adversarial Distillation (ResNet-50)
Domain Adaptation | ImageNet-Sketch | Top-1 Accuracy (%) | 46.1 | Discrete Adversarial Distillation (ViT-B, 224)
Image Classification | ImageNet V2 | Top-1 Accuracy (%) | 71.7 | Discrete Adversarial Distillation (ViT-B, 224)
Domain Generalization | ImageNet-R | Top-1 Error Rate (%) | 34.9 | Discrete Adversarial Distillation (ViT-B, 224)
Domain Generalization | ImageNet-A | Top-1 Accuracy (%) | 31.8 | Discrete Adversarial Distillation (ViT-B, 224)
Domain Generalization | ImageNet-A | Top-1 Accuracy (%) | 7.7 | Discrete Adversarial Distillation (ResNet-50)
Domain Generalization | ImageNet-Sketch | Top-1 Accuracy (%) | 46.1 | Discrete Adversarial Distillation (ViT-B, 224)

Related Papers

Visual-Language Model Knowledge Distillation Method for Image Quality Assessment (2025-07-21)
Automatic Classification and Segmentation of Tunnel Cracks Based on Deep Learning and Visual Explanations (2025-07-18)
Adversarial attacks to image classification systems using evolutionary algorithms (2025-07-17)
Efficient Adaptation of Pre-trained Vision Transformer underpinned by Approximately Orthogonal Fine-Tuning Strategy (2025-07-17)
Federated Learning for Commercial Image Sources (2025-07-17)
MUPAX: Multidimensional Problem Agnostic eXplainable AI (2025-07-17)
Overview of the TalentCLEF 2025: Skill and Job Title Intelligence for Human Capital Management (2025-07-17)
Pixel Perfect MegaMed: A Megapixel-Scale Vision-Language Foundation Model for Generating High Resolution Medical Images (2025-07-17)