Imperceptible Adversarial Attack on Deep Neural Networks from Image
Boundary
- URL: http://arxiv.org/abs/2308.15344v1
- Date: Tue, 29 Aug 2023 14:41:05 GMT
- Title: Imperceptible Adversarial Attack on Deep Neural Networks from Image
Boundary
- Authors: Fahad Alrasheedi, Xin Zhong
- Abstract summary: Adversarial Examples (AEs) can easily fool Deep Neural Networks (DNNs).
This study proposes an imperceptible adversarial attack that systematically perturbs the input image boundary to find AEs.
- Score: 1.6589012298747952
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Although Deep Neural Networks (DNNs), such as Convolutional Neural
Networks (CNNs) and Vision Transformers (ViTs), have been successfully applied
in computer vision, they have been shown to be vulnerable to carefully sought
Adversarial Examples (AEs) that can easily fool them. Research on AEs has been
active, and many adversarial attacks and explanations have been proposed since
AEs were discovered in 2014. Why AEs exist is still an open question, and many
studies suggest that DNN training algorithms have blind spots. Salient objects
usually do not overlap with image boundaries; hence, the boundaries attract
little of a DNN model's attention. Nevertheless, recent studies show that the
boundaries can dominate the behavior of DNN models. This study therefore looks
at AEs from a different perspective and proposes an imperceptible adversarial
attack that systematically perturbs the input image boundary to find AEs.
Experimental results show that the proposed boundary attack effectively fools
six CNN models and a ViT using only 32% of the input image content (from the
boundaries), with an average success rate (SR) of 95.2% and an average peak
signal-to-noise ratio (PSNR) of 41.37 dB. Correlation analyses are conducted,
including the relation between the adversarial boundary's width and the SR,
and how the adversarial boundary changes the DNN model's attention. These
findings can potentially advance the understanding of AEs and provide a
different perspective on how AEs can be constructed.
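To make the boundary-only perturbation idea concrete, below is a minimal sketch, not the authors' implementation: a generic PGD-style attack whose update is confined to a border mask. The torchvision ResNet-18 stand-in model, the PGD optimizer, the epsilon/step-size values, and the 20-pixel border width (which covers roughly 1 - (184/224)^2 ≈ 32% of a 224x224 image) are all assumptions; the psnr helper only shows how the reported dB figure would be measured.

```python
# Hedged sketch of a boundary-restricted attack; hyperparameters are assumptions.
import torch
import torch.nn.functional as F
from torchvision import models


def boundary_mask(h, w, width, device="cpu"):
    """1 on a border of `width` pixels, 0 in the interior."""
    m = torch.ones(1, 1, h, w, device=device)
    m[..., width:h - width, width:w - width] = 0.0
    return m


def psnr(x, x_adv, max_val=1.0):
    """Peak signal-to-noise ratio (dB) between the clean and adversarial images."""
    mse = F.mse_loss(x_adv, x)
    return 10.0 * torch.log10(max_val ** 2 / mse)


def boundary_pgd(model, x, y, width=20, eps=8 / 255, alpha=2 / 255, steps=40):
    """Untargeted PGD whose perturbation is confined to the image boundary."""
    mask = boundary_mask(x.shape[-2], x.shape[-1], width, x.device)
    x_adv = x.clone()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign() * mask       # step on the boundary only
            x_adv = x + (x_adv - x).clamp(-eps, eps) * mask  # project into the eps-ball
            x_adv = x_adv.clamp(0.0, 1.0)                    # keep a valid image
    return x_adv.detach()


if __name__ == "__main__":
    model = models.resnet18(weights=None).eval()  # a pretrained model would be used in practice
    x = torch.rand(1, 3, 224, 224)                # stand-in for a preprocessed [0, 1] image
    y = model(x).argmax(dim=1)                    # attack the model's own prediction
    x_adv = boundary_pgd(model, x, y, width=20)   # width=20 touches ~32% of a 224x224 image
    print("prediction changed:", bool((model(x_adv).argmax(dim=1) != y).item()))
    print("PSNR (dB):", round(psnr(x, x_adv).item(), 2))
```

Run over a labeled test set, a loop of this kind would yield success-rate and average-PSNR figures comparable to those reported above.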
Related papers
- A Survey on Transferability of Adversarial Examples across Deep Neural Networks [53.04734042366312]
Adversarial examples can manipulate machine learning models into making erroneous predictions.
The transferability of adversarial examples enables black-box attacks which circumvent the need for detailed knowledge of the target model.
This survey explores the landscape of the transferability of adversarial examples.
arXiv Detail & Related papers (2023-10-26T17:45:26Z) - Not So Robust After All: Evaluating the Robustness of Deep Neural
Networks to Unseen Adversarial Attacks [5.024667090792856]
Deep neural networks (DNNs) have gained prominence in various applications, such as classification, recognition, and prediction.
A fundamental attribute of traditional DNNs is their vulnerability to modifications in input data, which has resulted in the investigation of adversarial attacks.
This study aims to challenge the efficacy and generalization of contemporary defense mechanisms against adversarial attacks.
arXiv Detail & Related papers (2023-08-12T05:21:34Z) - Adversarial Detection by Approximation of Ensemble Boundary [0.0]
Adversarial attacks lead to defences that are themselves subject to attack.
In this paper, a novel method of detecting adversarial attacks is proposed for an ensemble of Deep Neural Networks (DNNs) solving two-class pattern recognition problems.
arXiv Detail & Related papers (2022-11-18T13:26:57Z) - Be Your Own Neighborhood: Detecting Adversarial Example by the
Neighborhood Relations Built on Self-Supervised Learning [64.78972193105443]
This paper presents a novel AE detection framework for trustworthy predictions.
It performs the detection by distinguishing an AE's abnormal relation with its augmented versions; an illustrative sketch of this idea is given after this list.
An off-the-shelf Self-Supervised Learning (SSL) model is used to extract the representation and predict the label.
arXiv Detail & Related papers (2022-08-31T08:18:44Z) - Detecting and Recovering Adversarial Examples from Extracting Non-robust
and Highly Predictive Adversarial Perturbations [15.669678743693947]
Deep neural networks (DNNs) have been shown to be vulnerable to adversarial examples (AEs), which are maliciously designed to fool target models.
We propose a model-free AE detection method whose whole process is free from querying the victim model.
arXiv Detail & Related papers (2022-06-30T08:48:28Z) - On the Minimal Adversarial Perturbation for Deep Neural Networks with
Provable Estimation Error [65.51757376525798]
The existence of adversarial perturbations has opened an interesting research line on provable robustness.
No provable results have been presented to estimate and bound the error committed.
This paper proposes two lightweight strategies to find the minimal adversarial perturbation.
The obtained results show that the proposed strategies approximate the theoretical distance and robustness for samples close to the classification boundary, leading to provable guarantees against any adversarial attack.
arXiv Detail & Related papers (2022-01-04T16:40:03Z) - Detect and Defense Against Adversarial Examples in Deep Learning using
Natural Scene Statistics and Adaptive Denoising [12.378017309516965]
We propose a framework for defending DNNs against adversarial samples.
The detector aims to detect AEs by characterizing them through the use of natural scene statistics.
The proposed method outperforms the state-of-the-art defense techniques.
arXiv Detail & Related papers (2021-07-12T23:45:44Z) - Boosting Deep Neural Networks with Geometrical Prior Knowledge: A Survey [77.99182201815763]
Deep Neural Networks (DNNs) achieve state-of-the-art results in many different problem settings.
DNNs are often treated as black box systems, which complicates their evaluation and validation.
One promising field, inspired by the success of convolutional neural networks (CNNs) in computer vision tasks, is to incorporate knowledge about symmetric geometrical transformations.
arXiv Detail & Related papers (2020-06-30T14:56:05Z) - Adversarial Attacks and Defenses on Graphs: A Review, A Tool and
Empirical Studies [73.39668293190019]
Deep neural networks (DNNs) can be easily fooled by small perturbations on the input.
Graph Neural Networks (GNNs) have been demonstrated to inherit this vulnerability.
In this survey, we categorize existing attacks and defenses, and review the corresponding state-of-the-art methods.
arXiv Detail & Related papers (2020-03-02T04:32:38Z) - Hold me tight! Influence of discriminative features on deep network
boundaries [63.627760598441796]
We propose a new perspective that relates dataset features to the distance of samples to the decision boundary.
This enables us to carefully tweak the position of the training samples and measure the induced changes on the boundaries of CNNs trained on large-scale vision datasets.
arXiv Detail & Related papers (2020-02-15T09:29:36Z) - Robustness of Bayesian Neural Networks to Gradient-Based Attacks [9.966113038850946]
Vulnerability to adversarial attacks is one of the principal hurdles to the adoption of deep learning in safety-critical applications.
We show that vulnerability to gradient-based attacks arises as a result of degeneracy in the data distribution.
We demonstrate that in the limit BNN posteriors are robust to gradient-based adversarial attacks.
arXiv Detail & Related papers (2020-02-11T13:03:57Z)
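As referenced in the "Be Your Own Neighborhood" entry above, the following is an illustrative sketch of the described idea, not that paper's implementation: embed the input and k randomly augmented copies with a feature extractor (a torchvision ResNet-18 with its classifier head removed stands in for the off-the-shelf SSL model), and flag the input as adversarial when the mean cosine similarity to its augmented neighbors is low. The augmentation policy, k, and the threshold are hypothetical.

```python
# Illustrative sketch only: AE detection via similarity to augmented neighbors.
import torch
import torch.nn.functional as F
from torchvision import models, transforms

encoder = models.resnet18(weights=None)   # stand-in for an off-the-shelf SSL encoder
encoder.fc = torch.nn.Identity()          # use penultimate features as the representation
encoder.eval()

augment = transforms.Compose([            # hypothetical augmentation policy
    transforms.RandomResizedCrop(224, scale=(0.7, 1.0)),
    transforms.ColorJitter(0.2, 0.2, 0.2),
])


@torch.no_grad()
def neighborhood_score(x, k=8):
    """Mean cosine similarity between x and k augmented copies in feature space."""
    z = F.normalize(encoder(x), dim=1)                      # (1, d)
    neighbors = torch.cat([augment(x) for _ in range(k)])   # (k, C, H, W)
    zn = F.normalize(encoder(neighbors), dim=1)             # (k, d)
    return (zn @ z.t()).mean().item()


if __name__ == "__main__":
    x = torch.rand(1, 3, 224, 224)   # stand-in for a preprocessed [0, 1] image
    threshold = 0.8                  # hypothetical; tuned on clean data in practice
    score = neighborhood_score(x)
    print("neighborhood similarity:", round(score, 3), "flagged as AE:", score < threshold)
```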
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.