Adversarial Attack
6 benchmarks · 1808 papers
An Adversarial Attack is a technique for finding a perturbation of the input that changes the prediction of a machine learning model. The perturbation can be very small and imperceptible to human eyes.
<span class="description-source">Source: Recurrent Attention Model with Log-Polar Mapping is Robust against Adversarial Attacks </span>
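As a minimal illustration of the idea (not tied to the source paper above), the classic fast gradient sign method (FGSM) perturbs an input in the direction of the sign of the loss gradient. The sketch below applies one FGSM step to a toy logistic-regression model; the weights and input values are made up for illustration:

```python
import numpy as np

def fgsm_perturb(x, w, b, y, eps):
    """One-step FGSM: shift x by eps along the sign of the loss gradient,
    increasing the logistic (cross-entropy) loss for true label y."""
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))  # model's predicted probability
    grad_x = (p - y) * w                     # d(loss)/dx for logistic loss
    return x + eps * np.sign(grad_x)

# Toy model and a correctly classified input (illustrative values).
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([0.3, -0.2])                    # score w @ x + b = 0.8 > 0 → class 1
x_adv = fgsm_perturb(x, w, b, y=1, eps=0.5)  # score becomes negative → class 0
```

With a larger `eps` budget the attack flips the prediction more easily; with a small one, the change to `x` can be imperceptible while still crossing the decision boundary.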