Enhance DNN Adversarial Robustness and Efficiency via Injecting Noise to
Non-Essential Neurons
- URL: http://arxiv.org/abs/2402.04325v1
- Date: Tue, 6 Feb 2024 19:09:32 GMT
- Title: Enhance DNN Adversarial Robustness and Efficiency via Injecting Noise to
Non-Essential Neurons
- Authors: Zhenyu Liu, Garrett Gagnon, Swagath Venkataramani, Liu Liu
- Abstract summary: We introduce an effective method designed to simultaneously enhance adversarial robustness and execution efficiency.
Unlike prior studies that enhance robustness via uniformly injecting noise, we introduce a non-uniform noise injection algorithm.
By employing approximation techniques, our approach identifies and protects essential neurons while strategically introducing noise into non-essential neurons.
- Score: 9.404025805661947
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep Neural Networks (DNNs) have revolutionized a wide range of industries,
from healthcare and finance to automotive, by offering unparalleled
capabilities in data analysis and decision-making. Despite their transformative
impact, DNNs face two critical challenges: vulnerability to adversarial
attacks and the increasing computational costs associated with more complex and
larger models. In this paper, we introduce an effective method designed to
simultaneously enhance adversarial robustness and execution efficiency. Unlike
prior studies that enhance robustness via uniformly injecting noise, we
introduce a non-uniform noise injection algorithm, strategically applied at
each DNN layer to disrupt adversarial perturbations introduced in attacks. By
employing approximation techniques, our approach identifies and protects
essential neurons while strategically introducing noise into non-essential
neurons. Our experimental results demonstrate that our method successfully
enhances both robustness and efficiency across several attack scenarios, model
architectures, and datasets.
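Illustrative sketch: the abstract describes the method only at a high level, so the following is a minimal, hypothetical PyTorch sketch of non-uniform noise injection, not the authors' implementation. It assumes a per-channel mean-absolute-activation proxy for neuron importance and additive Gaussian noise on the non-essential channels; the paper's actual approximation technique and noise model may differ.

    # Minimal sketch of per-layer non-uniform noise injection (assumptions noted above).
    import torch
    import torch.nn as nn

    class NonUniformNoise(nn.Module):
        """Adds noise to the least important ("non-essential") channels of a layer's
        activations while leaving the most important ("essential") channels untouched."""

        def __init__(self, essential_frac: float = 0.5, noise_std: float = 0.1):
            super().__init__()
            self.essential_frac = essential_frac  # fraction of channels protected from noise
            self.noise_std = noise_std            # std of the Gaussian noise (assumed model)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x: (batch, channels, ...) activations from the preceding layer.
            # Importance proxy: mean absolute activation per channel (an assumption,
            # standing in for the paper's approximation technique).
            dims = [0] + list(range(2, x.dim()))
            importance = x.abs().mean(dim=dims)                      # shape: (channels,)
            k = max(1, int(self.essential_frac * importance.numel()))
            essential = torch.zeros_like(importance, dtype=torch.bool)
            essential[importance.topk(k).indices] = True

            # Inject Gaussian noise only into the non-essential channels.
            noise = torch.randn_like(x) * self.noise_std
            mask_shape = [1, -1] + [1] * (x.dim() - 2)
            mask = (~essential).float().view(*mask_shape)
            return x + noise * mask

    # Usage: interleave the module after each layer whose activations should be perturbed.
    model = nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), NonUniformNoise(0.5, 0.1),
        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), NonUniformNoise(0.5, 0.1),
    )

Placing one such module after each layer mirrors the abstract's per-layer injection; the essential_frac knob controls how many neurons are protected, trading off robustness against clean accuracy.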
Related papers
- Adversarially Robust Spiking Neural Networks Through Conversion [16.2319630026996]
Spiking neural networks (SNNs) provide an energy-efficient alternative for a variety of artificial neural network (ANN) based AI applications.
As the progress in neuromorphic computing with SNNs expands their use in applications, the problem of adversarial robustness of SNNs becomes more pronounced.
arXiv Detail & Related papers (2023-11-15T08:33:46Z) - On the Intrinsic Structures of Spiking Neural Networks [66.57589494713515]
Recent years have seen a surge of interest in SNNs owing to their remarkable potential to handle time-dependent and event-driven data.
There has been a dearth of comprehensive studies examining the impact of intrinsic structures within spiking computations.
This work delves deep into the intrinsic structures of SNNs, by elucidating their influence on the expressivity of SNNs.
arXiv Detail & Related papers (2022-06-21T09:42:30Z) - Improving robustness of jet tagging algorithms with adversarial training [56.79800815519762]
We investigate the vulnerability of flavor tagging algorithms via application of adversarial attacks.
We present an adversarial training strategy that mitigates the impact of such simulated attacks.
arXiv Detail & Related papers (2022-03-25T19:57:19Z) - Model-Agnostic Meta-Attack: Towards Reliable Evaluation of Adversarial
Robustness [53.094682754683255]
We propose a Model-Agnostic Meta-Attack (MAMA) approach to discover stronger attack algorithms automatically.
Our method learns the optimizer in adversarial attacks parameterized by a recurrent neural network.
We develop a model-agnostic training algorithm to improve the generalization ability of the learned optimizer when attacking unseen defenses.
arXiv Detail & Related papers (2021-10-13T13:54:24Z) - R-SNN: An Analysis and Design Methodology for Robustifying Spiking
Neural Networks against Adversarial Attacks through Noise Filters for Dynamic
Vision Sensors [15.093607722961407]
Spiking Neural Networks (SNNs) aim at providing energy-efficient learning capabilities when implemented on neuromorphic chips with event-based Dynamic Vision Sensors (DVS).
This paper studies the robustness of SNNs against adversarial attacks on such DVS-based systems, and proposes R-SNN, a novel methodology for robustifying SNNs through efficient noise filtering.
arXiv Detail & Related papers (2021-09-01T14:40:04Z) - Robust Learning of Recurrent Neural Networks in Presence of Exogenous
Noise [22.690064709532873]
We propose a tractable robustness analysis for RNN models subject to input noise.
The robustness measure can be estimated efficiently using linearization techniques.
Our proposed methodology significantly improves robustness of recurrent neural networks.
arXiv Detail & Related papers (2021-05-03T16:45:05Z) - Towards Robust Neural Networks via Orthogonal Diversity [30.77473391842894]
A series of methods, represented by adversarial training and its variants, has proven to be among the most effective techniques for enhancing the robustness of Deep Neural Networks.
This paper proposes a novel defense that aims at augmenting the model in order to learn features that are adaptive to diverse inputs, including adversarial examples.
In this way, the proposed DIO augments the model and enhances the robustness of the DNN itself, as the learned features can be corrected by these mutually-orthogonal paths.
arXiv Detail & Related papers (2020-10-23T06:40:56Z) - Evaluation of Adversarial Training on Different Types of Neural Networks
in Deep Learning-based IDSs [3.8073142980733]
We focus on investigating the effectiveness of different evasion attacks and how to train a resilient deep learning-based IDS.
We use the min-max approach to formulate the problem of training robust IDS against adversarial examples.
Our experiments on different deep learning algorithms and different benchmark datasets demonstrate that defense using an adversarial training-based min-max approach improves the robustness against the five well-known adversarial attack methods.
arXiv Detail & Related papers (2020-07-08T23:33:30Z) - Network Diffusions via Neural Mean-Field Dynamics [52.091487866968286]
We propose a novel learning framework for inference and estimation problems of diffusion on networks.
Our framework is derived from the Mori-Zwanzig formalism to obtain an exact evolution of the node infection probabilities.
Our approach is versatile and robust to variations of the underlying diffusion network models.
arXiv Detail & Related papers (2020-06-16T18:45:20Z) - Rectified Linear Postsynaptic Potential Function for Backpropagation in
Deep Spiking Neural Networks [55.0627904986664]
Spiking Neural Networks (SNNs) use temporal spike patterns to represent and transmit information, which is not only biologically realistic but also suitable for ultra-low-power event-driven neuromorphic implementation.
This paper investigates the contribution of spike timing dynamics to information encoding, synaptic plasticity and decision making, providing a new perspective to design of future DeepSNNs and neuromorphic hardware systems.
arXiv Detail & Related papers (2020-03-26T11:13:07Z) - Learn2Perturb: an End-to-end Feature Perturbation Learning to Improve
Adversarial Robustness [79.47619798416194]
Learn2Perturb is an end-to-end feature perturbation learning approach for improving the adversarial robustness of deep neural networks.
Inspired by the Expectation-Maximization algorithm, an alternating back-propagation training procedure is introduced to train the network and noise parameters consecutively.
arXiv Detail & Related papers (2020-03-02T18:27:35Z)