Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Improving the Accuracy-Robustness Trade-Off of Classifiers via Adaptive Smoothing

Yatong Bai, Brendon G. Anderson, Aerin Kim, Somayeh Sojoudi

2023-01-29 · Adversarial Robustness
Paper · PDF · Code · Code (official)

Abstract

While prior research has proposed a plethora of methods that build neural classifiers robust against adversarial attacks, practitioners remain reluctant to adopt them because of their unacceptably severe clean accuracy penalties. This paper significantly alleviates this accuracy-robustness trade-off by mixing the output probabilities of a standard classifier and a robust classifier, where the standard network is optimized for clean accuracy and is not robust in general. We show that the robust base classifier's confidence difference between correct and incorrect examples is the key to this improvement. In addition to providing intuitions and empirical evidence, we theoretically certify the robustness of the mixed classifier under realistic assumptions. Furthermore, we adapt an adversarial input detector into a mixing network that adaptively adjusts the mixture of the two base models, further reducing the accuracy penalty of achieving robustness. The proposed flexible method, termed "adaptive smoothing", can work in conjunction with existing or even future methods that improve clean accuracy, robustness, or adversary detection. Our empirical evaluation considers strong attack methods, including AutoAttack and adaptive attacks. On the CIFAR-100 dataset, our method achieves an 85.21% clean accuracy while maintaining a 38.72% $\ell_\infty$-AutoAttacked ($\epsilon = 8/255$) accuracy, becoming the second most robust method on the RobustBench CIFAR-100 benchmark as of submission, while improving the clean accuracy by ten percentage points compared with all listed models. The code that implements our method is available at https://github.com/Bai-YT/AdaptiveSmoothing.
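The core mixing idea from the abstract, combining the output probabilities of a standard and a robust base classifier, can be sketched as follows. This is a minimal illustration, not the authors' implementation (see their repository for that): the function names are hypothetical, and the mixing weight `alpha` is shown here as a fixed scalar, whereas in adaptive smoothing it is produced per-input by a learned mixing network.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def mixed_classifier(p_std, p_rob, alpha):
    """Convex combination of the two base models' output probabilities.

    p_std: probabilities from the standard (accurate) classifier
    p_rob: probabilities from the robust classifier
    alpha in [0, 1]: 0 -> standard model only, 1 -> robust model only.
    Hypothetical sketch: the paper's mixing network predicts this
    weight adaptively per input; here it is a constant.
    """
    return (1.0 - alpha) * p_std + alpha * p_rob

# Toy usage: two 3-class predictions from the two base models.
p_std = softmax(np.array([[3.0, 0.0, 0.0]]))  # confident standard model
p_rob = softmax(np.array([[0.0, 2.0, 0.0]]))  # robust model disagrees
p_mix = mixed_classifier(p_std, p_rob, alpha=0.5)
```

Because the mixture is convex, the output rows remain valid probability distributions for any `alpha` in [0, 1], and the two endpoints recover the base classifiers exactly.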

Results

Task | Dataset | Metric | Value | Model
Adversarial Robustness | CIFAR-100 | AutoAttacked Accuracy | 38.72 | Mixed Classifier
Adversarial Robustness | CIFAR-100 | Clean Accuracy | 85.21 | Mixed Classifier
Adversarial Robustness | CIFAR-10 | Accuracy | 95.23 | Mixed Classifier
Adversarial Robustness | CIFAR-10 | Attack: AutoAttack | 68.06 | Mixed Classifier
Adversarial Robustness | CIFAR-10 | Robust Accuracy | 68.06 | Mixed Classifier

Related Papers

Bridging Robustness and Generalization Against Word Substitution Attacks in NLP via the Growth Bound Matrix Approach (2025-07-14)
Tail-aware Adversarial Attacks: A Distributional Approach to Efficient LLM Jailbreaking (2025-07-06)
Evaluating the Evaluators: Trust in Adversarial Robustness Tests (2025-07-04)
Rectifying Adversarial Sample with Low Entropy Prior for Test-Time Defense (2025-07-04)
Is Reasoning All You Need? Probing Bias in the Age of Reasoning Language Models (2025-07-03)
NIC-RobustBench: A Comprehensive Open-Source Toolkit for Neural Image Compression and Robustness Analysis (2025-06-23)
PRISON: Unmasking the Criminal Potential of Large Language Models (2025-06-19)
Intriguing Frequency Interpretation of Adversarial Robustness for CNNs and ViTs (2025-06-15)