A Quantitative Comparison of Image Classification Models under Adversarial Attacks and Defenses

EasyChair Preprint 5946 · 5 pages · Date: June 28, 2021

Abstract

In this paper, we present a comparison of the performance of two state-of-the-art model architectures under adversarial attacks, i.e., attacks designed to trick trained machine learning models. The models compared in this paper perform commendably on the popular image classification dataset CIFAR-10. To generate adversarial examples for the attacks, we use two strategies: the first is a widely used attack based on the L∞ metric, and the second is a relatively new technique that produces a fundamentally different class of adversarial examples using the Wasserstein distance. We also apply two adversarial defenses: preprocessing the input and adversarial training. The comparative results show that even these new state-of-the-art architectures are susceptible to adversarial attacks. We also conclude that more studies on adversarial defenses are required and that current defense techniques should be adopted in real-world applications.

Keyphrases: adversarial attacks, adversarial defenses, computer vision, feature squeezing, Lp-norm, median filter, Vision Transformer, Wasserstein, Wide ResNet
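For readers unfamiliar with the L∞-bounded attack family compared in the paper, the following is a minimal sketch of projected gradient descent (PGD), a standard instance of such attacks. The abstract does not name the exact attack or its hyperparameters, so the `model` argument and the `eps`, `alpha`, and `steps` values below are illustrative assumptions, not the paper's configuration.

```python
# A minimal PGD sketch under an L-infinity constraint (illustrative only).
import torch
import torch.nn as nn

def pgd_linf(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Iterated gradient-sign ascent on the loss, projecting the
    perturbation back into an L-infinity ball of radius eps around x.
    Assumes inputs x are normalized to the [0, 1] range."""
    # Random start inside the epsilon ball.
    x_adv = x.clone().detach()
    x_adv = x_adv + torch.empty_like(x_adv).uniform_(-eps, eps)
    x_adv = torch.clamp(x_adv, 0.0, 1.0)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        (grad,) = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            # Step along the gradient sign, then project onto the ball.
            x_adv = x_adv + alpha * grad.sign()
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)
            x_adv = torch.clamp(x_adv, 0.0, 1.0)
        x_adv = x_adv.detach()
    return x_adv
```

The preprocessing defense the keyphrases point to (feature squeezing with a median filter) would operate on `x_adv` before classification, smoothing out small high-frequency perturbations of exactly the kind this attack introduces.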