On adversarial training and the 1 Nearest Neighbor classifier
- URL: http://arxiv.org/abs/2404.06313v3
- Date: Sun, 28 Jul 2024 15:08:48 GMT
- Title: On adversarial training and the 1 Nearest Neighbor classifier
- Authors: Amir Hagai, Yair Weiss
- Abstract summary: We compare the performance of adversarial training to that of the simple 1 Nearest Neighbor (1NN) classifier.
In experiments with 135 different binary image classification problems taken from CIFAR10, MNIST and Fashion-MNIST, 1NN outperforms TRADES in terms of average adversarial accuracy.
We find that 1NN also outperforms almost all of 69 robust models taken from the current adversarial robustness leaderboard in terms of robustness to perturbations that are only slightly different from those used during training.
- Score: 8.248839892711478
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The ability to fool deep learning classifiers with tiny perturbations of the input has led to the development of adversarial training, in which the loss on adversarial examples is minimized in addition to the loss on the training examples. While adversarial training improves the robustness of the learned classifiers, the procedure is computationally expensive, sensitive to hyperparameters, and may still leave the classifier vulnerable to other types of small perturbations. In this paper we compare the performance of adversarial training to that of the simple 1 Nearest Neighbor (1NN) classifier. We prove that under reasonable assumptions, the 1NN classifier will be robust to *any* small image perturbation of the training images. In experiments with 135 different binary image classification problems taken from CIFAR10, MNIST and Fashion-MNIST we find that 1NN outperforms TRADES (a powerful adversarial training algorithm) in terms of average adversarial accuracy. In additional experiments with 69 robust models taken from the current adversarial robustness leaderboard, we find that 1NN outperforms almost all of them in terms of robustness to perturbations that are only slightly different from those used during training. Taken together, our results suggest that modern adversarial training methods still fall short of the robustness of the simple 1NN classifier. Our code can be found at https://github.com/amirhagai/On-Adversarial-Training-And-The-1-Nearest-Neighbor-Classifier
- Keywords: Adversarial training
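To make the robustness claim concrete, here is a minimal NumPy sketch of a 1NN classifier together with the standard margin argument for a certified L2 radius. This is an illustration, not the authors' released code (see their repository above), and the function names are ours.

```python
import numpy as np

def predict_1nn(X_train, y_train, x):
    """Label of the training point nearest to x under the L2 distance."""
    dists = np.linalg.norm(X_train - x, axis=1)  # X_train: (n, d) flattened images
    return y_train[np.argmin(dists)]

def certified_l2_radius(X_train, y_train, x):
    """Radius r such that no perturbation of x with L2 norm < r flips the 1NN label.

    Perturbing x by delta changes every distance by at most ||delta||, so the
    prediction can only flip once the nearest differently-labeled training point
    becomes closer than the nearest same-labeled one, i.e. for perturbations
    larger than half the gap between those two distances.
    """
    dists = np.linalg.norm(X_train - x, axis=1)
    pred = y_train[np.argmin(dists)]
    d_same = dists[y_train == pred].min()
    d_other = dists[y_train != pred].min()  # assumes both labels are present
    return max(0.0, (d_other - d_same) / 2.0)
```

The paper's theorem concerns small perturbations of the training images themselves; the margin bound above is the analogous, well-known statement for a test point.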
Related papers
- Purify Unlearnable Examples via Rate-Constrained Variational Autoencoders [101.42201747763178]
Unlearnable examples (UEs) seek to maximize testing error by making subtle modifications to training examples that are correctly labeled.
Our work provides a novel disentanglement mechanism to build an efficient pre-training purification method.
arXiv Detail & Related papers (2024-05-02T16:49:25Z)
- Post-Training Overfitting Mitigation in DNN Classifiers [31.513866929577336]
We show that post-training MM-based regularization substantially mitigates non-malicious overfitting due to class imbalances and overtraining.
Unlike adversarial training, which provides some resilience against attacks but harms clean (attack-free) generalization, we demonstrate an approach originating from adversarial learning.
arXiv Detail & Related papers (2023-09-28T20:16:24Z)
- Two Heads are Better than One: Robust Learning Meets Multi-branch Models [14.72099568017039]
We propose Branch Orthogonality adveRsarial Training (BORT) to obtain state-of-the-art performance with solely the original dataset for adversarial training.
We evaluate our approach on CIFAR-10, CIFAR-100, and SVHN against ℓ∞ norm-bounded perturbations of size ε = 8/255 (a sketch of this standard evaluation protocol follows this entry).
arXiv Detail & Related papers (2022-08-17T05:42:59Z)
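For readers unfamiliar with that protocol, the following is a minimal PyTorch sketch of an ℓ∞ PGD evaluation at ε = 8/255. The step size and iteration count are common defaults chosen for illustration, not values taken from the BORT paper.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Gradient-ascent steps on the loss, projected into the L-inf eps-ball around x."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)  # random start
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()             # ascend the loss
        x_adv = (x + (x_adv - x).clamp(-eps, eps)).clamp(0, 1)   # project back
    return x_adv.detach()

def adversarial_accuracy(model, x, y):
    """Fraction of inputs still classified correctly after the attack."""
    x_adv = pgd_attack(model, x, y)
    return (model(x_adv).argmax(dim=1) == y).float().mean().item()
```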
- Distributed Adversarial Training to Robustify Deep Neural Networks at Scale [100.19539096465101]
Current deep neural networks (DNNs) are vulnerable to adversarial attacks, where adversarial perturbations to the inputs can change or manipulate classification.
To defend against such attacks, an effective approach known as adversarial training (AT) has been shown to improve model robustness.
We propose a large-batch adversarial training framework implemented over multiple machines.
arXiv Detail & Related papers (2022-06-13T15:39:43Z)
- Rethinking Nearest Neighbors for Visual Classification [56.00783095670361]
k-NN is a lazy learning method that classifies a test image by aggregating the labels of its top-k nearest neighbors in a training set.
We adopt k-NN with pre-trained visual representations produced by either supervised or self-supervised methods in two steps (a sketch of this pipeline follows this entry).
Via extensive experiments on a wide range of classification tasks, our study reveals the generality and flexibility of k-NN integration.
arXiv Detail & Related papers (2021-12-15T20:15:01Z)
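A minimal PyTorch sketch of that two-step pipeline, assuming a torchvision ResNet-50 backbone and cosine similarity; the backbone, k, and the majority-vote rule are illustrative choices, not the paper's exact configuration.

```python
import torch
import torchvision.models as models

# Step 1: embed images with a frozen pre-trained backbone.
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()  # keep penultimate features as embeddings
backbone.eval()

@torch.no_grad()
def embed(images):  # images: (n, 3, 224, 224), normalized as the backbone expects
    feats = backbone(images)
    return torch.nn.functional.normalize(feats, dim=1)  # unit norm -> cosine sim

# Step 2: classify a query by a majority vote over its k nearest training embeddings.
@torch.no_grad()
def knn_predict(train_feats, train_labels, query_feats, k=20):
    sims = query_feats @ train_feats.T   # cosine similarities
    topk = sims.topk(k, dim=1).indices   # indices of the k nearest neighbors
    votes = train_labels[topk]           # (n_query, k) neighbor labels
    return votes.mode(dim=1).values      # majority label per query
```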
- KNN-BERT: Fine-Tuning Pre-Trained Models with KNN Classifier [61.063988689601416]
Pre-trained models are widely used in fine-tuning downstream tasks with linear classifiers optimized by the cross-entropy loss.
These issues can be mitigated by learning representations that emphasize similarities within the same class and contrasts between classes when making predictions.
In this paper we introduce the K-Nearest Neighbors (KNN) classifier into pre-trained model fine-tuning tasks.
arXiv Detail & Related papers (2021-10-06T06:17:05Z)
- KATANA: Simple Post-Training Robustness Using Test Time Augmentations [49.28906786793494]
A leading defense against adversarial attacks is adversarial training, a technique in which a DNN is trained to be robust to adversarially perturbed inputs.
We propose a new simple and easy-to-use technique, KATANA, for robustifying an existing pretrained DNN without modifying its weights (a generic test-time-augmentation sketch follows this entry).
Our strategy achieves state-of-the-art adversarial robustness on diverse attacks with minimal compromise on the natural images' classification.
arXiv Detail & Related papers (2021-09-16T19:16:00Z)
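A generic test-time-augmentation defense in the spirit of KATANA, sketched in PyTorch; the specific augmentations, their count, and the soft-voting rule here are our assumptions, and the paper's exact recipe may differ.

```python
import torch
import torchvision.transforms as T

# Random augmentations applied at inference time (illustrative choices).
augment = T.Compose([
    T.RandomResizedCrop(32, scale=(0.8, 1.0)),
    T.RandomHorizontalFlip(),
])

@torch.no_grad()
def tta_predict(model, x, n_aug=32):
    """Soft-vote: average the model's softmax outputs over random augmentations of x."""
    probs = torch.stack([
        model(augment(x)).softmax(dim=1) for _ in range(n_aug)
    ]).mean(dim=0)               # x: (n, 3, 32, 32) image batch in [0, 1]
    return probs.argmax(dim=1)
```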
- REGroup: Rank-aggregating Ensemble of Generative Classifiers for Robust Predictions [6.0162772063289784]
Defense strategies that adopt adversarial training or random input transformations typically require retraining or fine-tuning the model to achieve reasonable performance.
We find that we can learn a generative classifier by statistically characterizing the neural response of an intermediate layer to clean training samples.
Our proposed approach uses a subset of the clean training data and a pre-trained model, and yet is agnostic to network architectures or the adversarial attack generation method (a simplified sketch follows this entry).
arXiv Detail & Related papers (2020-06-18T17:07:19Z)
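To illustrate the idea of statistically characterizing a layer's response to clean samples, here is a heavily simplified PyTorch sketch that fits one diagonal-covariance Gaussian per class on a single layer's features; REGroup itself builds per-layer generative classifiers and rank-aggregates them, which this sketch does not do.

```python
import torch

@torch.no_grad()
def fit_class_gaussians(feats, labels, n_classes):
    """Per-class mean and variance of intermediate-layer features on clean data."""
    return [
        (feats[labels == c].mean(0), feats[labels == c].var(0) + 1e-5)
        for c in range(n_classes)
    ]

@torch.no_grad()
def generative_predict(class_stats, query_feats):
    """Classify by the highest diagonal-Gaussian log-likelihood."""
    scores = torch.stack([
        -0.5 * (((query_feats - mu) ** 2) / var + var.log()).sum(dim=1)
        for mu, var in class_stats
    ], dim=1)
    return scores.argmax(dim=1)
```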
- Overfitting in adversarially robust deep learning [86.11788847990783]
We show that overfitting to the training set does in fact harm robust performance to a very large degree in adversarially robust training.
We also show that effects such as the double descent curve do still occur in adversarially trained models, yet fail to explain the observed overfitting.
arXiv Detail & Related papers (2020-02-26T15:40:50Z)
This list is automatically generated from the titles and abstracts of the papers on this site.