Adversarial Robustness in Deep Learning: Attacks on Fragile Neurons
- URL: http://arxiv.org/abs/2201.12347v1
- Date: Mon, 31 Jan 2022 14:34:07 GMT
- Title: Adversarial Robustness in Deep Learning: Attacks on Fragile Neurons
- Authors: Chandresh Pravin, Ivan Martino, Giuseppe Nicosia, Varun Ojha
- Abstract summary: We identify fragile and robust neurons of deep learning architectures using nodal dropouts of the first convolutional layer.
We correlate these neurons with the distribution of adversarial attacks on the network.
- Score: 0.6899744489931016
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We identify fragile and robust neurons of deep learning architectures using nodal dropouts of the first convolutional layer. Using an adversarial targeting algorithm, we correlate these neurons with the distribution of adversarial attacks on the network. Adversarial robustness of neural networks has recently gained significant attention, as it highlights intrinsic weaknesses of deep learning networks against carefully constructed distortions applied to input images. In this paper, we evaluate the robustness of state-of-the-art image classification models trained on the MNIST and CIFAR10 datasets against the fast gradient sign method (FGSM) attack, a simple yet effective method of deceiving neural networks. Our method identifies the specific neurons of a network that are most affected by the adversarial attack. We therefore propose to make fragile neurons more robust against these attacks by compressing features within the robust neurons and amplifying the fragile neurons proportionally.
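As a rough illustration of the pipeline in the abstract, the sketch below crafts FGSM adversarial examples and then zeroes one output channel ("neuron") of the first convolutional layer at a time to see how each channel affects accuracy under attack. It assumes a PyTorch classifier that exposes its first convolutional layer as model.conv1; the helper names, the epsilon value, and the ranking criterion are illustrative assumptions, not the paper's exact adversarial targeting algorithm.

```python
import torch
import torch.nn.functional as F


def fgsm(model, x, y, eps):
    """Fast gradient sign method: one step of size eps along the sign of the input gradient."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()


@torch.no_grad()
def accuracy(model, x, y):
    return (model(x).argmax(dim=1) == y).float().mean().item()


def rank_first_layer_channels(model, x, y, eps=0.1):
    """Zero one output channel of the first conv layer at a time and record how
    accuracy under an FGSM attack changes; channels whose removal changes
    accuracy the most are candidates for the fragile set (illustrative criterion)."""
    model.eval()
    x_adv = fgsm(model, x, y, eps)
    baseline = accuracy(model, x_adv, y)
    conv1 = model.conv1  # assumed attribute name of the first convolutional layer
    scores = {}
    for ch in range(conv1.out_channels):
        def zero_channel(module, inputs, output, ch=ch):
            output = output.clone()
            output[:, ch] = 0.0  # nodal dropout of a single channel
            return output
        handle = conv1.register_forward_hook(zero_channel)
        scores[ch] = accuracy(model, x_adv, y) - baseline
        handle.remove()
    return scores  # channel index -> change in adversarial accuracy when dropped
```

Channels whose removal shifts adversarial accuracy the most are natural candidates for the fragile set that the abstract proposes to amplify, while the remaining robust channels would have their features compressed proportionally.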
Related papers
- Seeking Next Layer Neurons' Attention for Error-Backpropagation-Like Training in a Multi-Agent Network Framework [6.446189857311325]
We propose a local objective for neurons that aligns them to exhibit similarities to error-backpropagation.
We examine a neural network comprising decentralized, self-interested neurons seeking to maximize their local objective.
We demonstrate the learning capacity of these multi-agent neural networks through experiments on three datasets.
arXiv Detail & Related papers (2023-10-15T21:07:09Z)
- Investigating Human-Identifiable Features Hidden in Adversarial Perturbations [54.39726653562144]
Our study explores up to five attack algorithms across three datasets.
We identify human-identifiable features in adversarial perturbations.
Using pixel-level annotations, we extract such features and demonstrate their ability to compromise target models.
arXiv Detail & Related papers (2023-09-28T22:31:29Z)
- Addressing caveats of neural persistence with deep graph persistence [54.424983583720675]
We find that the variance of network weights and spatial concentration of large weights are the main factors that impact neural persistence.
We propose an extension of the filtration underlying neural persistence to the whole neural network instead of single layers.
This yields our deep graph persistence measure, which implicitly incorporates persistent paths through the network and alleviates variance-related issues.
arXiv Detail & Related papers (2023-07-20T13:34:11Z)
- Spike timing reshapes robustness against attacks in spiking neural networks [21.983346771962566]
Spiking neural networks (SNNs) are emerging as a new type of neural network model.
We explore the role of spike timing in SNNs, focusing on the robustness of the system against various types of attacks.
Our results suggest that spike timing coding in SNNs could improve robustness against attacks.
arXiv Detail & Related papers (2023-06-09T03:48:57Z)
- Visual Analytics of Neuron Vulnerability to Adversarial Attacks on Convolutional Neural Networks [28.081328051535618]
Adversarial attacks can fool a high-performance convolutional neural network (CNN) into making incorrect predictions.
Our work introduces a visual analytics approach to understanding adversarial attacks.
A visual analytics system is designed to incorporate visual reasoning for interpreting adversarial attacks.
arXiv Detail & Related papers (2023-03-06T01:01:56Z)
- Spiking neural network for nonlinear regression [68.8204255655161]
Spiking neural networks carry the potential for a massive reduction in memory and energy consumption.
They introduce temporal and neuronal sparsity, which can be exploited by next-generation neuromorphic hardware.
A framework for regression using spiking neural networks is proposed.
arXiv Detail & Related papers (2022-10-06T13:04:45Z)
- Searching for the Essence of Adversarial Perturbations [73.96215665913797]
We show that adversarial perturbations contain human-recognizable information, which is the key conspirator responsible for a neural network's erroneous prediction.
This concept of human-recognizable information allows us to explain key features related to adversarial perturbations.
arXiv Detail & Related papers (2022-05-30T18:04:57Z)
- Improving Adversarial Transferability via Neuron Attribution-Based Attacks [35.02147088207232]
We propose the Neuron Attribution-based Attack (NAA), which conducts feature-level attacks with more accurate neuron importance estimations.
We derive an approximation scheme of neuron attribution that greatly reduces the computational overhead.
Experiments confirm the superiority of our approach over state-of-the-art benchmarks (a simplified feature-level attack sketch appears after this list).
arXiv Detail & Related papers (2022-03-31T13:47:30Z)
- Neural Architecture Dilation for Adversarial Robustness [56.18555072877193]
A shortcoming of convolutional neural networks is that they are vulnerable to adversarial attacks.
This paper aims to improve the adversarial robustness of backbone CNNs that already achieve satisfactory accuracy.
With minimal computational overhead, the dilation architecture is expected to preserve the standard performance of the backbone CNN.
arXiv Detail & Related papers (2021-08-16T03:58:00Z)
- And/or trade-off in artificial neurons: impact on adversarial robustness [91.3755431537592]
The presence of a sufficient number of OR-like neurons in a network can lead to classification brittleness and increased vulnerability to adversarial attacks.
We define AND-like neurons and propose measures to increase their proportion in the network.
Experimental results on the MNIST dataset suggest that our approach holds promise as a direction for further exploration.
arXiv Detail & Related papers (2021-02-15T08:19:05Z)
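For the neuron-attribution entry above, here is a minimal, hedged sketch of a generic feature-level attack that weights intermediate activations by gradient-based neuron importance. It is not the NAA algorithm itself; the chosen layer, step sizes, and loss are assumptions made for illustration.

```python
import torch


def feature_level_attack(model, layer, x, y, eps=16 / 255, steps=10):
    """Simplified, NAA-inspired sketch: estimate per-neuron importance on an
    intermediate layer via gradients of the true-class logit, then perturb the
    input so that importance-weighted activations are suppressed."""
    feats = {}
    # 'layer' is any intermediate module of the model (assumed choice).
    handle = layer.register_forward_hook(lambda module, inputs, output: feats.update(a=output))

    # 1) Neuron importance: gradient of the true-class logit w.r.t. the layer activations.
    x0 = x.clone().detach().requires_grad_(True)
    logits = model(x0)
    score = logits.gather(1, y.unsqueeze(1)).sum()
    weights = torch.autograd.grad(score, feats["a"])[0].detach()

    # 2) Iterative attack: push importance-weighted activations down within an L_inf ball.
    delta = torch.zeros_like(x, requires_grad=True)
    alpha = eps / steps
    for _ in range(steps):
        model(x + delta)
        loss = (weights * feats["a"]).sum()   # attribution-weighted feature loss
        grad = torch.autograd.grad(loss, delta)[0]
        with torch.no_grad():
            delta -= alpha * grad.sign()      # descend: suppress important activations
            delta.clamp_(-eps, eps)
    handle.remove()
    return (x + delta).clamp(0, 1).detach()
```

A call such as feature_level_attack(model, model.layer3, images, labels) (the layer name is an assumption) would return inputs perturbed within an L_inf ball of radius eps.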
This list is automatically generated from the titles and abstracts of the papers in this site.