Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Adversarial Defense by Restricting the Hidden Space of Deep Neural Networks

Aamir Mustafa, Salman Khan, Munawar Hayat, Roland Goecke, Jianbing Shen, Ling Shao

Published: 2019-04-01 · ICCV 2019 · Task: Adversarial Defense
Paper · PDF · Code (official)

Abstract

Deep neural networks are vulnerable to adversarial attacks, which can fool them by adding minuscule perturbations to the input images. The robustness of existing defenses suffers greatly under white-box attack settings, where an adversary has full knowledge about the network and can iterate several times to find strong perturbations. We observe that the main reason for the existence of such perturbations is the close proximity of different class samples in the learned feature space. This allows model decisions to be totally changed by adding an imperceptible perturbation in the inputs. To counter this, we propose to disentangle the intermediate feature representations of deep networks class-wise. Specifically, we force the features for each class to lie inside a convex polytope that is maximally separated from the polytopes of other classes. In this manner, the network is forced to learn distinct and distant decision regions for each class. We observe that this simple constraint on the features greatly enhances the robustness of learned models, even against the strongest white-box attacks, without degrading the classification performance on clean images. We report extensive evaluations in both black-box and white-box attack scenarios and show significant gains in comparison to state-of-the-art defenses.
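The core idea in the abstract, pulling each class's features toward its own region while pushing the regions of different classes apart, can be sketched as a simple loss function. The sketch below is illustrative, not the paper's exact Prototype Conformity Loss: the function name, the use of per-class prototype vectors, and the hinge-style margin between prototypes are all assumptions for the example.

```python
import numpy as np

def prototype_conformity_loss(features, labels, prototypes, margin=1.0):
    """Illustrative class-disentangling loss (not the paper's exact formulation).

    features:   (n, d) array of intermediate feature vectors
    labels:     (n,) integer class labels
    prototypes: (k, d) array, one learned prototype per class (assumed)
    margin:     minimum desired distance between class prototypes (assumed)
    """
    # Attraction term: pull each sample toward its own class prototype.
    attract = np.mean(np.sum((features - prototypes[labels]) ** 2, axis=1))

    # Repulsion term: penalize pairs of class prototypes that are
    # closer together than `margin` (hinge-style penalty).
    k = prototypes.shape[0]
    repel, pairs = 0.0, 0
    for i in range(k):
        for j in range(i + 1, k):
            d = np.linalg.norm(prototypes[i] - prototypes[j])
            repel += max(0.0, margin - d) ** 2
            pairs += 1
    return attract + repel / pairs
```

When every feature sits exactly on its class prototype and all prototypes are farther apart than the margin, the loss is zero; moving prototypes closer together, or features away from their prototype, increases it. In training, such a term would be added to the usual cross-entropy objective and minimized jointly over network weights and prototypes.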

Results

Task                | Dataset  | Metric   | Value | Model
Adversarial Defense | CIFAR-10 | Accuracy | 46.7  | PCL (against PGD, white box)

Related Papers

Bridging Robustness and Generalization Against Word Substitution Attacks in NLP via the Growth Bound Matrix Approach (2025-07-14)
Active Adversarial Noise Suppression for Image Forgery Localization (2025-06-15)
Sylva: Tailoring Personalized Adversarial Defense in Pre-trained Models via Collaborative Fine-tuning (2025-06-04)
Towards Effective and Efficient Adversarial Defense with Diffusion Models for Robust Visual Tracking (2025-05-31)
Adversarially Robust AI-Generated Image Detection for Free: An Information Theoretic Perspective (2025-05-28)
Are classical deep neural networks weakly adversarially robust? (2025-05-28)
A Framework for Adversarial Analysis of Decision Support Systems Prior to Deployment (2025-05-27)
EdgeAgentX: A Novel Framework for Agentic AI at the Edge in Military Communication Networks (2025-05-24)