ASK: Adversarial Soft k-Nearest Neighbor Attack and Defense
- URL: http://arxiv.org/abs/2106.14300v1
- Date: Sun, 27 Jun 2021 17:58:59 GMT
- Title: ASK: Adversarial Soft k-Nearest Neighbor Attack and Defense
- Authors: Ren Wang, Tianqi Chen, Philip Yao, Sijia Liu, Indika Rajapakse, Alfred
Hero
- Abstract summary: K-Nearest Neighbor (kNN)-based deep learning methods have been applied to many applications due to their simplicity and geometric interpretability.
We propose an Adversarial Soft kNN (ASK) loss both to design more effective kNN attack strategies and to develop better defenses against them.
- Score: 25.066976298046043
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: K-Nearest Neighbor (kNN)-based deep learning methods have been applied to
many applications due to their simplicity and geometric interpretability.
However, the robustness of kNN-based classification models has not been
thoroughly explored and kNN attack strategies are underdeveloped. In this
paper, we propose an Adversarial Soft kNN (ASK) loss both to design more
effective kNN attack strategies and to develop better defenses against them.
Our ASK loss approach has two advantages. First, ASK loss can better
approximate the kNN's probability of classification error than objectives
proposed in previous works. Second, the ASK loss is interpretable: it preserves
the mutual information between the perturbed input and the kNN of the
unperturbed input. We use the ASK loss to generate a novel attack method called
the ASK-Attack (ASK-Atk), which shows superior attack efficiency and accuracy
degradation relative to previous kNN attacks. Based on the ASK-Atk, we then
derive an ASK-Defense (ASK-Def) method that optimizes the worst-case training
loss induced by ASK-Atk.
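The ASK loss can be read as a temperature-scaled softmax over distances from a (possibly perturbed) input's features to each class's k nearest reference features: attacking means ascending this loss, and defending means training against that ascent. The sketch below is a minimal illustration of this idea under stated assumptions, not the authors' released implementation; the feature extractor f, the reference features ref_feats/ref_labels, and the hyperparameters k, tau, eps, alpha, and steps are all hypothetical choices made for the example.

```python
# Minimal sketch of a soft kNN loss, an ASK-Atk style attack, and an
# ASK-Def style training step. Illustrative only: `f` (feature extractor),
# the reference set, and all hyperparameters are assumptions, not the
# paper's exact formulation or code.
import torch
import torch.nn.functional as F

def soft_knn_loss(feat, ref_feats, ref_labels, y, k=5, tau=1.0):
    """Negative log of a soft kNN probability of the true class y.

    feat:       (B, D) features of the (possibly perturbed) batch
    ref_feats:  (N, D) features of reference (training) points
    ref_labels: (N,)   integer labels of the reference points
    Assumes every class has at least one reference point.
    """
    sims = -torch.cdist(feat, ref_feats) / tau        # scaled negative distances
    num_classes = int(ref_labels.max()) + 1
    class_scores = []
    for c in range(num_classes):
        s_c = sims[:, ref_labels == c]                # similarities to class-c refs
        topk = s_c.topk(min(k, s_c.shape[1]), dim=1).values
        class_scores.append(torch.logsumexp(topk, dim=1))
    logits = torch.stack(class_scores, dim=1)         # (B, C) soft kNN class scores
    return F.cross_entropy(logits, y)                 # -log p_soft-kNN(y | x)

def ask_style_attack(f, x, y, ref_feats, ref_labels,
                     eps=8 / 255, alpha=2 / 255, steps=10):
    """l_inf PGD that ascends the soft kNN loss (ASK-Atk style)."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = soft_knn_loss(f(x + delta), ref_feats, ref_labels, y)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta += alpha * grad.sign()              # gradient ascent step
            delta.clamp_(-eps, eps)                   # project to the l_inf ball
            delta.copy_((x + delta).clamp(0, 1) - x)  # keep x + delta a valid image
    return (x + delta).detach()

def ask_def_step(f, opt, x, y, ref_feats, ref_labels):
    """One min-max step (ASK-Def style): minimize the worst-case training
    loss induced by the attack. For simplicity the reference features are
    held fixed here; in practice they change as f is updated."""
    x_adv = ask_style_attack(f, x, y, ref_feats, ref_labels)
    opt.zero_grad()
    loss = soft_knn_loss(f(x_adv), ref_feats, ref_labels, y)
    loss.backward()
    opt.step()
    return loss.item()
```

As tau shrinks, the logsumexp scores collapse toward the hard nearest-neighbor distances, which is one intuition for why a loss of this shape can track the kNN classifier's error probability more closely than surrogate objectives that ignore the neighbor structure.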
Related papers
- Not So Robust After All: Evaluating the Robustness of Deep Neural
Networks to Unseen Adversarial Attacks [5.024667090792856]
Deep neural networks (DNNs) have gained prominence in various applications, such as classification, recognition, and prediction.
A fundamental attribute of traditional DNNs is their vulnerability to modifications in input data, which has resulted in the investigation of adversarial attacks.
This study aims to challenge the efficacy and generalization of contemporary defense mechanisms against adversarial attacks.
arXiv Detail & Related papers (2023-08-12T05:21:34Z)
- Dynamics-Aware Loss for Learning with Label Noise [73.75129479936302]
Label noise poses a serious threat to deep neural networks (DNNs).
We propose a dynamics-aware loss (DAL) to solve this problem.
Both the detailed theoretical analyses and extensive experimental results demonstrate the superiority of our method.
arXiv Detail & Related papers (2023-03-21T03:05:21Z)
- Security-Aware Approximate Spiking Neural Networks [0.0]
We analyze the robustness of approximate spiking neural networks (AxSNNs) with different structural parameters and approximation levels under two gradient-based and two neuromorphic attacks.
We propose two novel defense methods, i.e., precision scaling and approximate quantization-aware filtering (AQF), for securing AxSNNs.
Our results demonstrate that AxSNNs are more prone to adversarial attacks than accurate SNNs (AccSNNs), but precision scaling and AQF significantly improve the robustness of AxSNNs.
arXiv Detail & Related papers (2023-01-12T19:23:15Z)
- Versatile Weight Attack via Flipping Limited Bits [68.45224286690932]
We study a novel attack paradigm, which modifies model parameters in the deployment stage.
Considering the effectiveness and stealthiness goals, we provide a general formulation to perform the bit-flip based weight attack.
We present two cases of the general formulation with different malicious purposes, i.e., single sample attack (SSA) and triggered samples attack (TSA).
arXiv Detail & Related papers (2022-07-25T03:24:58Z)
- Distributed Adversarial Training to Robustify Deep Neural Networks at Scale [100.19539096465101]
Current deep neural networks (DNNs) are vulnerable to adversarial attacks, where adversarial perturbations to the inputs can change or manipulate classification.
To defend against such attacks, an effective approach known as adversarial training (AT) has been shown to improve model robustness.
We propose a large-batch adversarial training framework implemented over multiple machines.
arXiv Detail & Related papers (2022-06-13T15:39:43Z)
- Is Approximation Universally Defensive Against Adversarial Attacks in Deep Neural Networks? [0.0]
We present an adversarial analysis of different approximate DNN accelerators (AxDNNs) using state-of-the-art approximate multipliers.
Our results demonstrate that adversarial attacks on AxDNNs can cause a 53% accuracy loss, whereas the same attack may lead to almost no accuracy loss in the corresponding accurate DNN.
arXiv Detail & Related papers (2021-12-02T19:01:36Z)
- Deep Adversarially-Enhanced k-Nearest Neighbors [16.68075044326343]
We propose a Deep Adversarially-Enhanced k-Nearest Neighbors (DAEkNN) method which achieves higher robustness than Deep kNN (DkNN).
We find that DAEkNN improves both the robustness and the robustness-accuracy trade-off on MNIST and CIFAR-10 datasets.
arXiv Detail & Related papers (2021-08-15T19:18:53Z)
- Sparse and Imperceptible Adversarial Attack via a Homotopy Algorithm [93.80082636284922]
Sparse adversarial attacks can fool deep neural networks (DNNs) by perturbing only a few pixels.
Recent efforts combine this sparsity constraint with an additional l_infty bound on the perturbation magnitudes.
We propose a homotopy algorithm that jointly tackles the sparsity constraint and the perturbation bound in a unified framework.
arXiv Detail & Related papers (2021-06-10T20:11:36Z)
- BreakingBED -- Breaking Binary and Efficient Deep Neural Networks by Adversarial Attacks [65.2021953284622]
We study robustness of CNNs against white-box and black-box adversarial attacks.
Results are shown for distilled CNNs, agent-based state-of-the-art pruned models, and binarized neural networks.
arXiv Detail & Related papers (2021-03-14T20:43:19Z)
- Targeted Attack against Deep Neural Networks via Flipping Limited Weight Bits [55.740716446995805]
We study a novel attack paradigm, which modifies model parameters in the deployment stage for malicious purposes.
Our goal is to misclassify a specific sample into a target class without any sample modification.
We formulate the attack as a binary integer programming (BIP) problem and, by utilizing the latest technique in integer programming, equivalently reformulate it as a continuous optimization problem.
arXiv Detail & Related papers (2021-02-21T03:13:27Z)
- Minimum-Norm Adversarial Examples on KNN and KNN-Based Models [7.4297019016687535]
We propose a gradient-based attack on kNN and kNN-based defenses.
We demonstrate that our attack outperforms previous methods on all of the models we tested.
We hope that this attack can be used as a new baseline for evaluating the robustness of kNN and its variants.
arXiv Detail & Related papers (2020-03-14T05:36:33Z)
This list is automatically generated from the titles and abstracts of the papers on this site.