Variational Randomized Smoothing for Sample-Wise Adversarial Robustness
Ryo Hase, Ye Wang, Toshiaki Koike-Akino, Jing Liu, Kieran Parsons
2024-07-16 · Adversarial Robustness
Abstract
Randomized smoothing is a defensive technique that achieves enhanced robustness against adversarial examples, i.e., small input perturbations that degrade the performance of neural network models. Conventional randomized smoothing adds random noise with a fixed noise level to every input sample to smooth out adversarial perturbations. This paper proposes a new variational framework that uses a per-sample noise level suited to each input by introducing a noise level selector. Our experimental results demonstrate enhanced empirical robustness against adversarial attacks. We also provide and analyze certified robustness for our sample-wise smoothing method.
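To make the idea concrete, below is a minimal sketch (not the paper's implementation) of smoothed prediction with an input-dependent noise level. It assumes a trained base classifier `base_model` and a selector network `sigma_selector` that maps each input to its own noise level; both names, and the majority-vote prediction rule, are illustrative stand-ins for whatever the paper actually uses.

```python
# Minimal sketch of sample-wise randomized smoothing, assuming hypothetical
# `base_model` (classifier) and `sigma_selector` (per-input noise level) modules.
import torch
import torch.nn.functional as F

def smoothed_predict(base_model, sigma_selector, x, n_samples=100):
    """Predict via randomized smoothing with a per-sample noise level.

    Conventional smoothing uses one fixed sigma for all inputs; here the
    selector chooses sigma(x) individually for each input x in the batch.
    """
    with torch.no_grad():
        sigma = sigma_selector(x)  # shape: (batch, 1), one noise level per sample
        # Broadcast sigma over the remaining input dimensions (e.g., C, H, W).
        sigma = sigma.view(-1, *([1] * (x.dim() - 1)))
        counts = None
        for _ in range(n_samples):
            noise = torch.randn_like(x) * sigma   # Gaussian noise scaled per sample
            logits = base_model(x + noise)        # classify the noisy copy
            one_hot = F.one_hot(logits.argmax(dim=1), logits.shape[1])
            counts = one_hot if counts is None else counts + one_hot
        # Majority vote over the noisy copies yields the smoothed prediction.
        return counts.argmax(dim=1)
```

In conventional randomized smoothing, `sigma` would be a single scalar hyperparameter; the one-line change to a learned `sigma_selector(x)` is what makes the noise level sample-wise.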