Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


An integrated Auto Encoder-Block Switching defense approach to prevent adversarial attacks

Anirudh Yadav, Ashutosh Upadhyay, S. Sharanya

2022-03-11 · Adversarial Attack · BIG-bench Machine Learning · Self-Driving Cars
Paper · PDF · Code (official) · Code

Abstract

According to recent studies, the vulnerability of state-of-the-art neural networks to adversarial input samples has increased drastically. A neural network is a technique by which a computer learns to perform tasks from data using machine learning algorithms. Machine learning and artificial intelligence models have become a fundamental part of everyday life, from self-driving cars [1] to smart home devices, so any such vulnerability is a significant concern. The smallest input deviations can fool these extremely literal systems and deceive their users as well as administrators into precarious situations. This article proposes a defense algorithm that combines an auto-encoder [3] with a block-switching architecture. The auto-encoder is intended to remove any perturbations found in input images, whereas block switching makes the model more robust against white-box attacks. The attack is generated using the FGSM [9] method, and the subsequent counter-attack by the proposed architecture demonstrates the feasibility and security delivered by the algorithm.
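The two moving parts of the abstract, an FGSM attack and a randomized block-switching defense, can be sketched with a toy single-neuron "network" standing in for the paper's model. Everything here is an illustrative assumption: the names `fgsm_perturb` and `block_switch_predict` and the logistic-regression stand-in are not the authors' code, only a minimal demonstration of the two ideas.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """FGSM: x_adv = x + eps * sign(dL/dx), for a logistic unit with
    binary cross-entropy loss, where dL/dx = (sigmoid(w.x + b) - y) * w."""
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w              # analytic input gradient
    return x + eps * np.sign(grad_x)

def block_switch_predict(x, blocks, rng):
    """Block switching (simplified): several interchangeable sub-models
    ("blocks") are trained; one is chosen at random per query so a
    white-box attacker cannot rely on a single fixed gradient path."""
    w, b = blocks[rng.integers(len(blocks))]
    return sigmoid(w @ x + b)

# A clean input classified correctly, then flipped by a one-step FGSM attack:
w, b = np.array([2.0, -1.0]), 0.0
x, y = np.array([1.0, 0.5]), 1.0
x_adv = fgsm_perturb(x, y, w, b, eps=1.0)
```

With these values the clean prediction sigmoid(1.5) ≈ 0.82 exceeds 0.5, while the perturbed input drops below it, i.e. the attack flips the decision. The paper's auto-encoder stage (not shown) would sit in front of the classifier to strip such perturbations before prediction.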

Results

Task                | Dataset      | Metric   | Value | Model
Adversarial Defense | miniImageNet | Accuracy | 88.54 | Auto Encoder-Block Switching defense with GradCAM

Related Papers

- 3DGAA: Realistic and Robust 3D Gaussian-based Adversarial Attack for Autonomous Driving (2025-07-14)
- VIP: Visual Information Protection through Adversarial Attacks on Vision-Language Models (2025-07-11)
- Identifying the Smallest Adversarial Load Perturbations that Render DC-OPF Infeasible (2025-07-10)
- ScoreAdv: Score-based Targeted Generation of Natural Adversarial Examples via Diffusion Models (2025-07-08)
- 3D Gaussian Splatting Driven Multi-View Robust Physical Adversarial Camouflage Generation (2025-07-02)
- Robustness of Misinformation Classification Systems to Adversarial Examples Through BeamAttack (2025-06-30)
- Poster: Enhancing GNN Robustness for Network Intrusion Detection via Agent-based Analysis (2025-06-25)
- DRO-Augment Framework: Robustness by Synergizing Wasserstein Distributionally Robust Optimization and Data Augmentation (2025-06-22)