Deep Adversarially-Enhanced k-Nearest Neighbors
- URL: http://arxiv.org/abs/2108.06797v1
- Date: Sun, 15 Aug 2021 19:18:53 GMT
- Title: Deep Adversarially-Enhanced k-Nearest Neighbors
- Authors: Ren Wang, Tianqi Chen
- Abstract summary: We propose a Deep Adversarially-Enhanced k-Nearest Neighbors (DAEkNN) method which achieves higher robustness than DkNN.
We find that DAEkNN improves both the robustness and the robustness-accuracy trade-off on MNIST and CIFAR-10 datasets.
- Score: 16.68075044326343
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent works have theoretically and empirically shown that deep neural
networks (DNNs) have an inherent vulnerability to small perturbations. Applying
the Deep k-Nearest Neighbors (DkNN) classifier, we observe a robustness-accuracy
trade-off that worsens dramatically as layers go deeper. In this
work, we propose a Deep Adversarially-Enhanced k-Nearest Neighbors (DAEkNN)
method which achieves higher robustness than DkNN and mitigates the
robustness-accuracy trade-off in deep layers through two key elements. First,
DAEkNN is based on an adversarially trained model. Second, DAEkNN makes
predictions by leveraging a weighted combination of benign and adversarial
training data. Empirically, we find that DAEkNN improves both the robustness
and the robustness-accuracy trade-off on MNIST and CIFAR-10 datasets.
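As a rough illustration of the prediction rule the abstract describes, the sketch below combines k-nearest-neighbor votes from a benign datastore and an adversarial datastore with a weight alpha. The feature extractor is abstracted away, and the datastore layout, k, and alpha are illustrative assumptions rather than the authors' implementation; in the paper, the features would come from deep layers of an adversarially trained model.

```python
# Hypothetical sketch of weighted benign/adversarial kNN voting (not the
# authors' code). `feat` is a deep-layer feature of the test input; the two
# datastores hold features of benign and adversarial training examples.
import numpy as np

def daeknn_predict(feat, benign_feats, benign_labels,
                   adv_feats, adv_labels, n_classes, k=5, alpha=0.5):
    """Classify one feature vector by weighted kNN votes from two datastores."""
    def knn_votes(store_feats, store_labels):
        # Distances to every stored example; keep the k closest.
        dists = np.linalg.norm(store_feats - feat, axis=1)
        nearest = np.argsort(dists)[:k]
        votes = np.zeros(n_classes)
        for label in store_labels[nearest]:
            votes[label] += 1.0
        return votes / k

    # Weighted combination of benign and adversarial neighbor votes.
    scores = (1 - alpha) * knn_votes(benign_feats, benign_labels) \
             + alpha * knn_votes(adv_feats, adv_labels)
    return int(np.argmax(scores))
```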
Related papers
- Relationship between Uncertainty in DNNs and Adversarial Attacks [0.0]
Deep Neural Networks (DNNs) have achieved state-of-the-art results and even outperformed human accuracy in many challenging tasks.
However, DNNs are accompanied by uncertainty in their results, which can cause them to predict outcomes that are incorrect or fall outside a given confidence level.
arXiv Detail & Related papers (2024-09-20T05:38:38Z)
- RSC-SNN: Exploring the Trade-off Between Adversarial Robustness and Accuracy in Spiking Neural Networks via Randomized Smoothing Coding [17.342181435229573]
Spiking Neural Networks (SNNs) have received widespread attention due to their unique neuronal dynamics and low-power nature.
Previous research empirically shows that SNNs with Poisson coding are more robust than Artificial Neural Networks (ANNs) on small-scale datasets.
This work theoretically demonstrates that SNN's inherent adversarial robustness stems from its Poisson coding.
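For context, here is a minimal sketch of Poisson (rate) coding, the input encoding this paper links to SNN robustness: each input intensity in [0, 1] becomes a Bernoulli spike train whose mean firing rate matches the intensity. The time-step count is an illustrative assumption.

```python
# Illustrative Poisson rate coding: a pixel of value p spikes with
# probability p at each time step, so its mean firing rate approximates p.
import numpy as np

def poisson_encode(image, timesteps=100, seed=0):
    """Convert an intensity image in [0, 1] to a binary spike train."""
    rng = np.random.default_rng(seed)
    return (rng.random((timesteps,) + image.shape) < image).astype(np.float32)

spikes = poisson_encode(np.full((28, 28), 0.3))
print(spikes.mean())  # ~0.3: the firing rate tracks the pixel intensity
```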
arXiv Detail & Related papers (2024-07-29T15:26:15Z)
- Towards Robust k-Nearest-Neighbor Machine Translation [72.9252395037097]
k-Nearest-Neighbor Machine Translation (kNN-MT) has become an important research direction in NMT in recent years.
Its main idea is to retrieve useful key-value pairs from an additional datastore to modify translations without updating the NMT model.
However, noisy retrieved pairs can dramatically degrade model performance.
We propose a confidence-enhanced kNN-MT model with robust training to alleviate the impact of noise.
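A minimal sketch of the retrieve-and-interpolate mechanism the summary describes, assuming a flat datastore of (hidden-state, token) pairs; the temperature and interpolation weight are illustrative assumptions, and the paper's confidence-enhanced robust training is not reproduced here.

```python
# Hypothetical sketch of kNN-MT interpolation (not this paper's exact model).
# `keys` are stored decoder hidden states, `values` their target tokens.
import numpy as np

def knn_mt_probs(hidden, keys, values, model_probs, vocab_size,
                 k=8, temperature=10.0, lam=0.5):
    """Blend the NMT model's distribution with a kNN retrieval distribution."""
    dists = np.linalg.norm(keys - hidden, axis=1)
    nearest = np.argsort(dists)[:k]
    # Softmax over negative distances of the retrieved neighbors.
    logits = -dists[nearest] / temperature
    weights = np.exp(logits - logits.max())
    weights /= weights.sum()
    knn_probs = np.zeros(vocab_size)
    for w, token in zip(weights, values[nearest]):
        knn_probs[token] += w
    # Noisy retrieved pairs corrupt `knn_probs`; robust variants aim to
    # down-weight them (e.g., with a learned confidence instead of fixed lam).
    return lam * knn_probs + (1 - lam) * model_probs
```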
arXiv Detail & Related papers (2022-10-17T07:43:39Z)
- Hardening DNNs against Transfer Attacks during Network Compression using Greedy Adversarial Pruning [0.1529342790344802]
We investigate the adversarial robustness of models produced by several irregular pruning schemes and by 8-bit quantization.
We find that this pruning method results in models that are resistant to transfer attacks from their uncompressed counterparts.
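As a point of reference, here is a sketch of one irregular (unstructured) pruning scheme of the kind evaluated here: magnitude pruning, which zeroes the smallest weights. The sparsity level is an illustrative assumption, and the paper's greedy adversarial selection criterion is not reproduced.

```python
# Illustrative magnitude pruning; the paper's greedy adversarial criterion
# would replace this selection rule.
import numpy as np

def magnitude_prune(weights, sparsity=0.5):
    """Zero out the smallest-magnitude fraction `sparsity` of the weights."""
    threshold = np.quantile(np.abs(weights), sparsity)
    return weights * (np.abs(weights) >= threshold)
```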
arXiv Detail & Related papers (2022-06-15T09:13:35Z)
- Latent Boundary-guided Adversarial Training [61.43040235982727]
Adversarial training, which injects adversarial examples into model training, has proven to be the most effective defense strategy.
We propose a novel adversarial training framework called LAtent bounDary-guided aDvErsarial tRaining (LADDER).
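A sketch of the generic adversarial-training recipe the first sentence refers to, using standard PGD rather than the paper's latent boundary-guided variant; the perturbation budget, step size, and iteration count are illustrative assumptions.

```python
# Illustrative PGD-based adversarial training (not the LADDER method itself).
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, step=2/255, iters=10):
    """Craft L-infinity-bounded adversarial examples with projected gradient descent."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(iters):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv + step * grad.sign()        # ascend the loss
        x_adv = x + (x_adv - x).clamp(-eps, eps)  # project into the eps-ball
        x_adv = x_adv.clamp(0, 1)                 # keep a valid pixel range
    return x_adv.detach()

def adversarial_training_step(model, optimizer, x, y):
    """One optimizer step on adversarial examples instead of clean inputs."""
    x_adv = pgd_attack(model, x, y)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```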
arXiv Detail & Related papers (2022-06-08T07:40:55Z)
- Comparative Analysis of Interval Reachability for Robust Implicit and Feedforward Neural Networks [64.23331120621118]
We use interval reachability analysis to obtain robustness guarantees for implicit neural networks (INNs), a class of implicit learning models that use implicit equations as layers.
We show that our approach performs at least as well as, and generally better than, applying state-of-the-art interval bound propagation methods to INNs.
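For comparison, here is a minimal sketch of interval bound propagation, the baseline family this paper measures against: elementwise lower/upper bounds are pushed through an affine layer followed by a ReLU.

```python
# Illustrative interval bound propagation through x -> relu(W @ x + b):
# positive weights map lower bounds to lower bounds; negative weights swap them.
import numpy as np

def ibp_affine_relu(lo, hi, W, b):
    """Propagate the box [lo, hi] through an affine layer and a ReLU."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    new_lo = W_pos @ lo + W_neg @ hi + b
    new_hi = W_pos @ hi + W_neg @ lo + b
    return np.maximum(new_lo, 0.0), np.maximum(new_hi, 0.0)
```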
arXiv Detail & Related papers (2022-04-01T03:31:27Z)
- Spatial-Temporal-Fusion BNN: Variational Bayesian Feature Layer [77.78479877473899]
We design a spatial-temporal-fusion BNN for efficiently scaling BNNs to large models.
Compared to vanilla BNNs, our approach can greatly reduce the training time and the number of parameters, which helps scale BNNs efficiently.
arXiv Detail & Related papers (2021-12-12T17:13:14Z)
- Robustness of Bayesian Neural Networks to White-Box Adversarial Attacks [55.531896312724555]
Bayesian Neural Networks (BNNs) are robust and adept at handling adversarial attacks by incorporating randomness.
We create our BNN model, called BNN-DenseNet, by fusing Bayesian inference (i.e., variational Bayes) into the DenseNet architecture.
An adversarially-trained BNN outperforms its non-Bayesian, adversarially-trained counterpart in most experiments.
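A minimal sketch of the variational-Bayes building block the summary refers to: a linear layer whose weights are sampled from a learned Gaussian via the reparameterization trick. Fusing such layers into DenseNet, as BNN-DenseNet does, is not shown; the initializations are illustrative assumptions.

```python
# Illustrative variational Bayesian linear layer (not the BNN-DenseNet code).
import torch
import torch.nn as nn

class BayesianLinear(nn.Module):
    def __init__(self, in_features, out_features):
        super().__init__()
        # Learned Gaussian posterior over the weights.
        self.w_mu = nn.Parameter(torch.zeros(out_features, in_features))
        self.w_logvar = nn.Parameter(torch.full((out_features, in_features), -5.0))

    def forward(self, x):
        # Reparameterization: w = mu + sigma * eps keeps gradients flowing
        # through mu and sigma; the sampling injects the randomness that
        # the summary credits for robustness.
        std = torch.exp(0.5 * self.w_logvar)
        w = self.w_mu + std * torch.randn_like(std)
        return x @ w.t()
```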
arXiv Detail & Related papers (2021-11-16T16:14:44Z)
- Exploring Architectural Ingredients of Adversarially Robust Deep Neural Networks [98.21130211336964]
Deep neural networks (DNNs) are known to be vulnerable to adversarial attacks.
In this paper, we investigate the impact of network width and depth on the robustness of adversarially trained DNNs.
arXiv Detail & Related papers (2021-10-07T23:13:33Z)
- An Integrated Approach to Produce Robust Models with High Efficiency [9.476463361600828]
Quantization and structure simplification are promising ways to adapt Deep Neural Networks (DNNs) to mobile devices.
In this work, we try to obtain both features by applying a convergent relaxation quantization algorithm, Binary-Relax (BR), to a robust adversarially trained model, ResNets Ensemble.
We design a trade-off loss function that helps DNNs preserve their natural accuracy and improve the channel sparsity.
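A minimal sketch of a trade-off loss in the spirit the summary describes: a cross-entropy term for natural accuracy plus an L1 sparsity penalty on per-channel scaling factors. The penalty form and weight are illustrative assumptions, not the paper's exact formulation.

```python
# Hypothetical trade-off loss: accuracy term plus channel-sparsity term.
import torch.nn.functional as F

def tradeoff_loss(logits, targets, channel_scales, sparsity_weight=1e-4):
    """Cross-entropy for natural accuracy plus L1 on channel scales for sparsity."""
    ce = F.cross_entropy(logits, targets)
    l1 = sum(s.abs().sum() for s in channel_scales)  # drives channels toward zero
    return ce + sparsity_weight * l1
```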
arXiv Detail & Related papers (2020-08-31T00:44:59Z)
- Inherent Adversarial Robustness of Deep Spiking Neural Networks: Effects of Discrete Input Encoding and Non-Linear Activations [9.092733355328251]
Spiking Neural Networks (SNNs) are a potential candidate for inherent robustness against adversarial attacks.
In this work, we demonstrate that the adversarial accuracy of SNNs under gradient-based attacks is higher than that of their non-spiking counterparts.
arXiv Detail & Related papers (2020-03-23T17:20:24Z)