Papers With Code

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Defense-GAN: Protecting Classifiers Against Adversarial Attacks Using Generative Models

Pouya Samangouei, Maya Kabkab, Rama Chellappa

Published: 2018-05-17 · ICLR 2018
Tasks: Adversarial Defense against FGSM Attack · Adversarial Defense · General Classification
Links: Paper · PDF · Code (official, plus community implementations)

Abstract

In recent years, deep neural network approaches have been widely adopted for machine learning tasks, including classification. However, they have been shown to be vulnerable to adversarial perturbations: carefully crafted small perturbations can cause misclassification of legitimate images. We propose Defense-GAN, a new framework leveraging the expressive capability of generative models to defend deep neural networks against such attacks. Defense-GAN is trained to model the distribution of unperturbed images. At inference time, it finds an output close to a given image that does not contain the adversarial changes. This output is then fed to the classifier. Our proposed method can be used with any classification model and does not modify the classifier structure or training procedure. It can also be used as a defense against any attack, as it does not assume knowledge of the process for generating the adversarial examples. We empirically show that Defense-GAN is consistently effective against different attack methods and improves on existing defense strategies. Our code has been made publicly available at https://github.com/kabkabm/defensegan
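The inference-time step described in the abstract (search the generator's latent space for an output close to the input, then classify that reconstruction) can be sketched as follows. This is a minimal illustration with a toy linear "generator" and plain gradient descent; the generator, step size, iteration count, and restart count are assumptions for the sketch, not the paper's trained WGAN or its hyperparameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained generator G(z) = W @ z.
# (Assumption: the real Defense-GAN uses a trained WGAN generator.)
latent_dim, image_dim = 8, 32
W = rng.standard_normal((image_dim, latent_dim))

def generate(z):
    return W @ z

def defense_gan_project(x, steps=300, lr=0.005, restarts=4):
    """Find z* minimizing ||G(z) - x||^2 by gradient descent with
    random restarts, and return the reconstruction G(z*)."""
    best_z, best_loss = None, np.inf
    for _ in range(restarts):
        z = rng.standard_normal(latent_dim)
        for _ in range(steps):
            residual = generate(z) - x      # G(z) - x
            grad = 2.0 * W.T @ residual     # gradient of ||G(z) - x||^2 w.r.t. z
            z -= lr * grad
        loss = float(np.sum((generate(z) - x) ** 2))
        if loss < best_loss:
            best_z, best_loss = z, loss
    return generate(best_z)

# A clean image on the generator's range, plus an additive perturbation.
x_clean = generate(rng.standard_normal(latent_dim))
x_adv = x_clean + 0.3 * rng.standard_normal(image_dim)

# The reconstruction (which would be fed to the classifier) discards the
# component of the perturbation that lies outside the generator's range.
x_rec = defense_gan_project(x_adv)
```

Because the search is restricted to the generator's output manifold, the reconstruction tends to land nearer the clean image than the perturbed input does, which is the intuition behind feeding `x_rec` rather than `x_adv` to the classifier.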

Results

Task                 Dataset  Metric           Value   Model
Adversarial Defense  MNIST    Accuracy         0.8529  Defense-GAN
Adversarial Defense  MNIST    Inference speed  14.8    Defense-GAN

Related Papers

Bridging Robustness and Generalization Against Word Substitution Attacks in NLP via the Growth Bound Matrix Approach (2025-07-14)
Active Adversarial Noise Suppression for Image Forgery Localization (2025-06-15)
Sylva: Tailoring Personalized Adversarial Defense in Pre-trained Models via Collaborative Fine-tuning (2025-06-04)
Towards Effective and Efficient Adversarial Defense with Diffusion Models for Robust Visual Tracking (2025-05-31)
Adversarially Robust AI-Generated Image Detection for Free: An Information Theoretic Perspective (2025-05-28)
Are classical deep neural networks weakly adversarially robust? (2025-05-28)
A Framework for Adversarial Analysis of Decision Support Systems Prior to Deployment (2025-05-27)
EdgeAgentX: A Novel Framework for Agentic AI at the Edge in Military Communication Networks (2025-05-24)