Architectural Resilience to Foreground-and-Background Adversarial Noise
- URL: http://arxiv.org/abs/2003.10045v2
- Date: Sun, 7 Jun 2020 05:28:09 GMT
- Title: Architectural Resilience to Foreground-and-Background Adversarial Noise
- Authors: Carl Cheng, Evan Hu
- Abstract summary: Adversarial attacks in the form of imperceptible perturbations of normal images have been extensively studied.
We propose distinct model-agnostic benchmark perturbations of images to investigate the resilience and robustness of different network architectures.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Adversarial attacks in the form of imperceptible perturbations of normal
images have been extensively studied, and for every new defense methodology
created, multiple adversarial attacks are found to counteract it. In
particular, a popular style of attack, exemplified in recent years by DeepFool
and Carlini-Wagner, relies solely on white-box scenarios in which full access
to the predictive model and its weights is required. In this work, we instead
propose distinct model-agnostic benchmark perturbations of images in order to
investigate the resilience and robustness of different network architectures.
Results empirically determine that increasing depth within most types of
Convolutional Neural Networks typically improves model resilience towards
general attacks, with improvement steadily decreasing as the model becomes
deeper. Additionally, we find that a notable difference in adversarial
robustness exists between residual architectures with skip connections and
non-residual architectures of similar complexity. Our findings provide
direction for future understanding of the effects of residual connections and depth on network
robustness.
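The specific benchmark perturbations are not spelled out in this summary. As a minimal sketch of the general recipe, assuming a binary foreground mask is available from some prior segmentation step (function and parameter names below are illustrative, not the authors'), one can add independently controlled noise to the foreground and background regions:

```python
import numpy as np

def foreground_background_noise(image, mask, fg_std=0.1, bg_std=0.1, seed=0):
    """Add independent Gaussian noise to foreground and background pixels.

    image: float array in [0, 1], shape (H, W, C)
    mask:  binary array, shape (H, W), with 1 marking foreground
    fg_std, bg_std: per-region noise strength (illustrative knobs)
    """
    rng = np.random.default_rng(seed)
    fg_noise = rng.normal(0.0, fg_std, image.shape)
    bg_noise = rng.normal(0.0, bg_std, image.shape)
    m = mask[..., None].astype(image.dtype)      # broadcast mask over channels
    noisy = image + m * fg_noise + (1.0 - m) * bg_noise
    return np.clip(noisy, 0.0, 1.0)              # keep pixels in valid range

# Perturb only the background of a toy image.
img = np.random.default_rng(1).uniform(size=(32, 32, 3))
msk = np.zeros((32, 32))
msk[8:24, 8:24] = 1                              # square "object" region
out = foreground_background_noise(img, msk, fg_std=0.0, bg_std=0.2)
```

Because the perturbation depends only on the image and its mask, never on model weights or gradients, it can be applied identically to every architecture under test, which is what makes such a benchmark model-agnostic.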
Related papers
- Towards Evaluating the Robustness of Visual State Space Models [63.14954591606638]
Vision State Space Models (VSSMs) have demonstrated remarkable performance in visual perception tasks.
However, their robustness under natural and adversarial perturbations remains a critical concern.
We present a comprehensive evaluation of VSSMs' robustness under various perturbation scenarios.
arXiv Detail & Related papers (2024-06-13T17:59:44Z)
- Meta Adversarial Perturbations [66.43754467275967]
We show the existence of a meta adversarial perturbation (MAP).
A MAP causes natural images to be misclassified with high probability after only a one-step gradient ascent update.
We show that these perturbations are not only image-agnostic, but also model-agnostic, as a single perturbation generalizes well across unseen data points and different neural network architectures.
arXiv Detail & Related papers (2021-11-19T16:01:45Z)
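The defining step is that one shared perturbation is refined by a single step of gradient ascent on the classification loss. A minimal PyTorch sketch of that step, with illustrative names and with the meta-training that produces a good initial perturbation omitted:

```python
import torch
import torch.nn.functional as F

def one_step_update(model, images, labels, delta, alpha=0.01, eps=8 / 255):
    """One gradient-ascent step on a shared perturbation `delta`.

    images: (N, C, H, W) batch; delta: (C, H, W), broadcast over the batch.
    alpha: step size; eps: L-infinity budget on the perturbation.
    """
    delta = delta.detach().clone().requires_grad_(True)
    loss = F.cross_entropy(model(images + delta), labels)
    loss.backward()                                # gradient w.r.t. delta
    with torch.no_grad():
        delta = delta + alpha * delta.grad.sign()  # ascend the loss
        delta = delta.clamp(-eps, eps)             # project onto the budget
    return delta.detach()
```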
- Defensive Tensorization [113.96183766922393]
We propose defensive tensorization, an adversarial defence technique that leverages a latent high-order factorization of the network.
We empirically demonstrate the effectiveness of our approach on standard image classification benchmarks.
We validate the versatility of our approach across domains and low-precision architectures by considering an audio task and binary networks.
arXiv Detail & Related papers (2021-10-26T17:00:16Z)
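Only the core ingredient, a randomized latent factorization of the weights, is named in this summary. As a loose low-order analogue (a matrix factorization standing in for the paper's higher-order tensor factorization, with class and parameter names invented here), one can parameterize a layer by latent factors and randomize them with dropout:

```python
import torch
import torch.nn as nn

class FactorizedLinear(nn.Module):
    """Linear layer whose weight lives in a low-rank latent factorization.

    W ~ U @ V, with dropout applied to the latent factors so each forward
    pass uses a slightly randomized reconstruction of the weight.
    """
    def __init__(self, d_in, d_out, rank=8, p_drop=0.1):
        super().__init__()
        self.U = nn.Parameter(torch.randn(d_out, rank) / rank ** 0.5)
        self.V = nn.Parameter(torch.randn(rank, d_in) / d_in ** 0.5)
        self.drop = nn.Dropout(p_drop)   # randomizes the latent factors
        self.bias = nn.Parameter(torch.zeros(d_out))

    def forward(self, x):
        W = self.drop(self.U) @ self.V   # stochastic weight reconstruction
        return x @ W.t() + self.bias
```

A defence of this style would typically keep the randomization active at test time, so that an attacker never sees a fixed set of effective weights.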
- Pruning in the Face of Adversaries [0.0]
We evaluate the impact of neural network pruning on the adversarial robustness against L-0, L-2 and L-infinity attacks.
Our results confirm that neural network pruning and adversarial robustness are not mutually exclusive.
We extend our analysis to situations that incorporate additional assumptions on the adversarial scenario and show that depending on the situation, different strategies are optimal.
arXiv Detail & Related papers (2021-08-19T09:06:16Z)
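The summary does not list the pruning methods covered; a minimal sketch of the most common baseline, global magnitude pruning in PyTorch (names illustrative), shows the operation whose interaction with robustness the paper studies:

```python
import torch

def magnitude_prune(weight, sparsity=0.5):
    """Zero out the smallest-magnitude entries of a weight tensor.

    sparsity: fraction of entries to remove (illustrative global threshold).
    """
    k = int(weight.numel() * sparsity)
    if k == 0:
        return weight.clone()
    thresh = weight.abs().flatten().kthvalue(k).values
    return weight * (weight.abs() > thresh)

# The pruned model is then re-evaluated under L-0, L-2 and L-infinity
# attacks and compared against the dense baseline.
w = torch.randn(64, 128)
w_pruned = magnitude_prune(w, sparsity=0.9)
print((w_pruned == 0).float().mean())  # roughly 0.9
```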
- Explainable Adversarial Attacks in Deep Neural Networks Using Activation Profiles [69.9674326582747]
This paper presents a visual framework to investigate neural network models subjected to adversarial examples.
We show how observing these activation profiles can quickly pinpoint exploited areas in a model.
arXiv Detail & Related papers (2021-03-18T13:04:21Z)
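The visual framework itself is not described in this summary. One simple building block consistent with the idea, with helper names invented here, is a per-layer activation summary that can be diffed between a clean input and its adversarial counterpart:

```python
import torch
import torch.nn as nn

def activation_profile(model, x):
    """Mean absolute activation per conv/linear/ReLU module for input x."""
    profile, hooks = {}, []

    def make_hook(name):
        def hook(module, inputs, output):
            profile[name] = output.detach().abs().mean().item()
        return hook

    for name, module in model.named_modules():
        if isinstance(module, (nn.Conv2d, nn.Linear, nn.ReLU)):
            hooks.append(module.register_forward_hook(make_hook(name)))
    with torch.no_grad():
        model(x)
    for h in hooks:
        h.remove()
    return profile

# Layers where |profile(clean) - profile(adversarial)| is largest are
# candidates for where the attack is exploiting the model.
```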
- Multi-objective Search of Robust Neural Architectures against Multiple Types of Adversarial Attacks [18.681859032630374]
Deep learning models are vulnerable to adversarial examples that are imperceptible to humans.
It is practically impossible to predict beforehand which type of attack a machine learning model may suffer from.
We propose to search for deep neural architectures that are robust to five types of well-known adversarial attacks using a multi-objective evolutionary algorithm.
arXiv Detail & Related papers (2021-01-16T19:38:16Z)
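The search machinery is beyond an abstract-level sketch, but the selection criterion at its heart is Pareto dominance across per-attack objectives. A toy illustration with invented architecture names and numbers:

```python
def pareto_front(candidates):
    """Keep candidates that no other candidate dominates on all objectives.

    candidates: list of (name, scores), where scores holds robust
    accuracies under different attacks (higher is better).
    """
    front = []
    for name, s in candidates:
        dominated = any(
            all(o >= v for o, v in zip(other, s)) and other != s
            for _, other in candidates)
        if not dominated:
            front.append((name, s))
    return front

# Toy robust accuracies under three hypothetical attacks.
archs = [("net-A", (0.61, 0.40, 0.55)),
         ("net-B", (0.58, 0.45, 0.50)),
         ("net-C", (0.55, 0.38, 0.48))]  # dominated by net-A
print(pareto_front(archs))               # net-A and net-B survive
```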
- Firearm Detection via Convolutional Neural Networks: Comparing a Semantic Segmentation Model Against End-to-End Solutions [68.8204255655161]
Threat detection of weapons and aggressive behavior from live video can be used for rapid detection and prevention of potentially deadly incidents.
One way for achieving this is through the use of artificial intelligence and, in particular, machine learning for image analysis.
We compare a traditional monolithic end-to-end deep learning model with a previously proposed model based on an ensemble of simpler neural networks that detect firearms via semantic segmentation.
arXiv Detail & Related papers (2020-12-17T15:19:29Z)
- Interpolation between Residual and Non-Residual Networks [24.690238357686134]
We present a novel ODE model by adding a damping term.
It can be shown that the proposed model can recover both a ResNet and a CNN by adjusting the damping coefficient.
Experiments on a number of image classification benchmarks show that the proposed model substantially improves the accuracy of ResNet and ResNeXt.
arXiv Detail & Related papers (2020-06-10T09:36:38Z)
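Read in discrete form, the damping term scales down the identity path of a residual block. A minimal PyTorch sketch (the coefficient name `lam` is chosen here for illustration, not taken from the paper):

```python
import torch.nn as nn

class DampedBlock(nn.Module):
    """Residual block with a damped skip connection.

    out = (1 - lam) * x + f(x): lam = 0 recovers a standard residual
    block, lam = 1 a plain non-residual block; values in between
    interpolate continuously between the two regimes.
    """
    def __init__(self, channels, lam=0.0):
        super().__init__()
        self.lam = lam
        self.f = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels))

    def forward(self, x):
        return (1.0 - self.lam) * x + self.f(x)
```

Sweeping `lam` between 0 and 1 traverses a family of models between the residual and non-residual endpoints, which is what allows a controlled comparison of the two regimes.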
- Bridging Mode Connectivity in Loss Landscapes and Adversarial Robustness [97.67477497115163]
We use mode connectivity to study the adversarial robustness of deep neural networks.
Our experiments cover various types of adversarial attacks applied to different network architectures and datasets.
Our results suggest that mode connectivity offers a holistic tool and practical means for evaluating and improving adversarial robustness.
arXiv Detail & Related papers (2020-04-30T19:12:50Z)
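Mode connectivity is commonly probed by evaluating the loss along a path between two independently trained solutions. A minimal sketch of the simplest (linear) path in PyTorch, with the paper's curve-finding and robustness evaluation omitted and `loss_fn` assumed to evaluate the model on some dataset:

```python
import torch

def interpolate_state(state_a, state_b, t):
    """Linearly interpolate two state dicts: (1 - t) * A + t * B.

    Non-float entries (e.g. BatchNorm step counters) are taken from A.
    """
    return {k: (1 - t) * v + t * state_b[k] if v.is_floating_point() else v
            for k, v in state_a.items()}

def loss_along_path(model, state_a, state_b, loss_fn, steps=11):
    """Evaluate loss at evenly spaced points on the segment from A to B."""
    losses = []
    for i in range(steps):
        t = i / (steps - 1)
        model.load_state_dict(interpolate_state(state_a, state_b, t))
        with torch.no_grad():
            losses.append(loss_fn(model))
    return losses
```

A high loss barrier along every such path between two robust models, or its absence, is the kind of landscape signal the paper uses to reason about robustness.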