Minimum-Norm Adversarial Examples on KNN and KNN-Based Models
- URL: http://arxiv.org/abs/2003.06559v1
- Date: Sat, 14 Mar 2020 05:36:33 GMT
- Title: Minimum-Norm Adversarial Examples on KNN and KNN-Based Models
- Authors: Chawin Sitawarin, David Wagner
- Abstract summary: We propose a gradient-based attack on kNN and kNN-based defenses.
We demonstrate that our attack outperforms the earlier attack of Sitawarin & Wagner on all of the models we tested.
We hope that this attack can be used as a new baseline for evaluating the robustness of kNN and its variants.
- Score: 7.4297019016687535
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We study the robustness against adversarial examples of kNN classifiers and
classifiers that combine kNN with neural networks. The main difficulty lies in
the fact that finding an optimal attack on kNN is intractable for typical
datasets. In this work, we propose a gradient-based attack on kNN and kNN-based
defenses, inspired by the previous work by Sitawarin & Wagner [1]. We
demonstrate that our attack outperforms their method on all of the models we
tested with only a minimal increase in the computation time. The attack also
beats the state-of-the-art attack [2] on kNN when k > 1 using less than 1% of
its running time. We hope that this attack can be used as a new baseline for
evaluating the robustness of kNN and its variants.
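Since the attack is gradient-based, the hard kNN decision rule has to be replaced by a differentiable surrogate. The sketch below is a minimal illustration of that general idea, not the authors' algorithm: the surrogate margin, the penalty formulation, and all names and hyperparameters (knn_margin, min_norm_attack, c, k, lr) are assumptions made for the example.

```python
# Illustrative sketch only; NOT the attack from the paper.
import torch

def knn_margin(x, train_X, train_y, y_true, k=5):
    """Differentiable surrogate margin for a kNN classifier: mean distance to
    the k nearest other-class points minus mean distance to the k nearest
    same-class points (assumes each group has at least k points)."""
    d = ((train_X - x) ** 2).sum(dim=1).sqrt()   # distances to all training points
    d_same = torch.topk(d[train_y == y_true], k, largest=False).values.mean()
    d_other = torch.topk(d[train_y != y_true], k, largest=False).values.mean()
    return d_other - d_same                      # > 0 roughly means still classified correctly

def min_norm_attack(x0, y_true, train_X, train_y, c=5.0, steps=500, lr=0.01, k=5):
    """Penalty-style minimum-norm attack sketch: minimize ||delta||^2 plus a
    hinge penalty on the surrogate kNN margin."""
    delta = torch.zeros_like(x0, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        margin = knn_margin(x0 + delta, train_X, train_y, y_true, k=k)
        loss = (delta ** 2).sum() + c * torch.clamp(margin, min=0.0)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (x0 + delta).detach()
```

In practice, the penalty weight c would be tuned (for example, by binary search) to keep the smallest perturbation that still changes the prediction, and the result should be re-checked against the exact kNN rule, since the soft margin is only an approximation.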
Related papers
- Not So Robust After All: Evaluating the Robustness of Deep Neural Networks to Unseen Adversarial Attacks [5.024667090792856]
Deep neural networks (DNNs) have gained prominence in various applications, such as classification, recognition, and prediction.
A fundamental attribute of traditional DNNs is their vulnerability to modifications in input data, which has resulted in the investigation of adversarial attacks.
This study aims to challenge the efficacy and generalization of contemporary defense mechanisms against adversarial attacks.
arXiv Detail & Related papers (2023-08-12T05:21:34Z)
- Security-Aware Approximate Spiking Neural Networks [0.0]
We analyze the robustness of AxSNNs with different structural parameters and approximation levels under two gradient-based and two neuromorphic attacks.
We propose two novel defense methods, i.e., precision scaling and approximate quantization-aware filtering (AQF), for securing AxSNNs.
Our results demonstrate that AxSNNs are more prone to adversarial attacks than accurate SNNs (AccSNNs), but precision scaling and AQF significantly improve the robustness of AxSNNs.
arXiv Detail & Related papers (2023-01-12T19:23:15Z)
- Attacking the Spike: On the Transferability and Security of Spiking Neural Networks to Adversarial Examples [19.227133993690504]
Spiking neural networks (SNNs) have attracted much attention for their high energy efficiency and for recent advances in their classification performance.
In contrast to the situation for traditional deep learning approaches, the study of the robustness of SNNs to adversarial examples remains relatively underdeveloped.
We show that successful white-box adversarial attacks on SNNs are highly dependent on the underlying surrogate gradient technique.
arXiv Detail & Related papers (2022-09-07T17:05:48Z)
- Robustness of Bayesian Neural Networks to White-Box Adversarial Attacks [55.531896312724555]
Bayesian Neural Networks (BNNs) are robust and adept at handling adversarial attacks by incorporating randomness.
We create our BNN model, called BNN-DenseNet, by fusing Bayesian inference (i.e., variational Bayes) into the DenseNet architecture.
An adversarially-trained BNN outperforms its non-Bayesian, adversarially-trained counterpart in most experiments.
arXiv Detail & Related papers (2021-11-16T16:14:44Z)
- ZeBRA: Precisely Destroying Neural Networks with Zero-Data Based Repeated Bit Flip Attack [10.31732879936362]
We present the Zero-data Based Repeated bit flip Attack (ZeBRA), which precisely destroys deep neural networks (DNNs).
Our approach makes the adversarial weight attack more damaging to the security of DNNs.
arXiv Detail & Related papers (2021-11-01T16:44:20Z)
- Robustness of Graph Neural Networks at Scale [63.45769413975601]
We study how to attack and defend Graph Neural Networks (GNNs) at scale.
We propose two sparsity-aware first-order optimization attacks that maintain an efficient representation.
We show that common surrogate losses are not well-suited for global attacks on GNNs.
arXiv Detail & Related papers (2021-10-26T21:31:17Z)
- KNN-BERT: Fine-Tuning Pre-Trained Models with KNN Classifier [61.063988689601416]
Pre-trained models are widely used for fine-tuning on downstream tasks with linear classifiers optimized by the cross-entropy loss.
Such classifiers can be improved by learning representations that emphasize similarities within the same class and contrasts between different classes when making predictions.
In this paper, we introduce a K-Nearest Neighbors classifier into pre-trained model fine-tuning.
arXiv Detail & Related papers (2021-10-06T06:17:05Z)
- KATANA: Simple Post-Training Robustness Using Test Time Augmentations [49.28906786793494]
A leading defense against adversarial attacks is adversarial training, a technique in which a DNN is trained to be robust to adversarial inputs.
We propose a new simple and easy-to-use technique, KATANA, for robustifying an existing pretrained DNN without modifying its weights.
Our strategy achieves state-of-the-art adversarial robustness on diverse attacks with minimal compromise on the natural images' classification.
arXiv Detail & Related papers (2021-09-16T19:16:00Z)
- ASK: Adversarial Soft k-Nearest Neighbor Attack and Defense [25.066976298046043]
K-Nearest Neighbor (kNN)-based deep learning methods have been applied to many applications due to their simplicity and geometric interpretability.
We propose an Adversarial Soft kNN (ASK) loss both to design more effective kNN attack strategies and to develop better defenses against them (a rough sketch of such a soft kNN loss appears after this list).
arXiv Detail & Related papers (2021-06-27T17:58:59Z)
- BreakingBED -- Breaking Binary and Efficient Deep Neural Networks by Adversarial Attacks [65.2021953284622]
We study the robustness of CNNs against white-box and black-box adversarial attacks.
Results are shown for distilled CNNs, agent-based state-of-the-art pruned models, and binarized neural networks.
arXiv Detail & Related papers (2021-03-14T20:43:19Z)
- Targeted Attack against Deep Neural Networks via Flipping Limited Weight Bits [55.740716446995805]
We study a novel attack paradigm, which modifies model parameters in the deployment stage for malicious purposes.
Our goal is to misclassify a specific sample into a target class without any sample modification.
We formulate this attack as a binary integer programming (BIP) problem and, by utilizing the latest technique in integer programming, equivalently reformulate it as a continuous optimization problem.
arXiv Detail & Related papers (2021-02-21T03:13:27Z)
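Relating to the ASK entry above, the sketch below shows one common way a "soft" kNN loss can be written: neighbor votes are weighted by a softmax over negative distances and scored with cross-entropy. It is an illustration under those assumptions, not the loss from the cited paper; the function name, temperature, and other parameters are hypothetical.

```python
# Illustrative soft-kNN loss sketch; NOT the ASK loss from the cited paper.
import torch

def soft_knn_loss(x, y, ref_X, ref_y, num_classes, temp=0.1, eps=1e-12):
    """Cross-entropy on softmax-weighted neighbor votes.

    x:     (B, D) query points (e.g., feature embeddings)
    y:     (B,)   integer labels of the queries
    ref_X: (N, D) reference (training) points
    ref_y: (N,)   integer labels of the reference points
    """
    d = torch.cdist(x, ref_X)                # (B, N) pairwise L2 distances
    w = torch.softmax(-d / temp, dim=1)      # (B, N) soft neighbor weights
    votes = torch.nn.functional.one_hot(ref_y, num_classes).float()  # (N, C)
    probs = w @ votes                        # (B, C) soft class probabilities
    return torch.nn.functional.nll_loss(torch.log(probs + eps), y)
```

Because this loss is differentiable, an attacker can ascend it with respect to the inputs, and a defender can fold it into a training objective, neither of which the hard kNN rule allows directly.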
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.