Pruning in the Face of Adversaries
- URL: http://arxiv.org/abs/2108.08560v1
- Date: Thu, 19 Aug 2021 09:06:16 GMT
- Title: Pruning in the Face of Adversaries
- Authors: Florian Merkle, Maximilian Samsinger, Pascal Schöttle
- Abstract summary: We evaluate the impact of neural network pruning on the adversarial robustness against L-0, L-2 and L-infinity attacks.
Our results confirm that neural network pruning and adversarial robustness are not mutually exclusive.
We extend our analysis to situations that incorporate additional assumptions on the adversarial scenario and show that depending on the situation, different strategies are optimal.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The vulnerability of deep neural networks against adversarial examples -
inputs with small imperceptible perturbations - has gained a lot of attention
in the research community recently. Simultaneously, the number of parameters of
state-of-the-art deep learning models has been growing massively, with
implications for the memory and computational resources required to train and
deploy such models. One approach to controlling the size of neural networks is
retrospectively reducing the number of parameters, so-called neural network
pruning. Available research on the impact of neural network pruning on
adversarial robustness is fragmentary and often does not adhere to established
principles of robustness evaluation. We close this gap by evaluating the
robustness of pruned models against L-0, L-2 and L-infinity attacks for a wide
range of attack strengths, several architectures, data sets, pruning methods,
and compression rates. Our results confirm that neural network pruning and
adversarial robustness are not mutually exclusive. Instead, sweet spots can be
found that are favorable in terms of model size and adversarial robustness.
Furthermore, we extend our analysis to situations that incorporate additional
assumptions on the adversarial scenario and show that depending on the
situation, different strategies are optimal.
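As a rough illustration of the evaluation described in the abstract, the sketch below combines global magnitude pruning at a chosen compression rate with an L-infinity PGD robustness check in PyTorch. The framework, toy architecture, attack budget, and function names are illustrative assumptions, not the authors' exact setup.

```python
# Hedged sketch: prune a network by global weight magnitude, then measure
# accuracy under an L-infinity PGD attack. Hyperparameters are placeholders.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

def build_model():
    # Toy CNN standing in for the paper's architectures (assumption).
    return nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(4), nn.Flatten(),
        nn.Linear(16 * 4 * 4, 10),
    )

def prune_global_magnitude(model, compression_rate):
    # Zero out the smallest-magnitude weights across all conv/linear layers;
    # compression_rate is the fraction of weights removed (e.g. 0.9).
    params = [(m, "weight") for m in model.modules()
              if isinstance(m, (nn.Conv2d, nn.Linear))]
    prune.global_unstructured(params, pruning_method=prune.L1Unstructured,
                              amount=compression_rate)
    for module, name in params:          # make the pruning permanent
        prune.remove(module, name)
    return model

def pgd_linf(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    # Standard L-infinity projected gradient descent attack.
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = nn.functional.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)   # project into the eps-ball
        x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()

if __name__ == "__main__":
    model = prune_global_magnitude(build_model(), compression_rate=0.9).eval()
    x = torch.rand(8, 3, 32, 32)           # stand-in batch (CIFAR-10 sized)
    y = torch.randint(0, 10, (8,))
    x_adv = pgd_linf(model, x, y)
    robust_acc = (model(x_adv).argmax(1) == y).float().mean().item()
    print(f"robust accuracy under PGD-Linf: {robust_acc:.2%}")
```

In the paper's setting, this kind of check would be repeated over attack strengths, L-0/L-2/L-infinity attack families, pruning methods, and compression rates to locate the "sweet spots" mentioned above.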
Related papers
- Over-parameterization and Adversarial Robustness in Neural Networks: An Overview and Empirical Analysis [25.993502776271022]
Having a large parameter space is considered one of the main suspects behind neural networks' vulnerability to adversarial examples.
Previous research has demonstrated that, depending on the model considered, the algorithm employed to generate adversarial examples may not function properly.
arXiv Detail & Related papers (2024-06-14T14:47:06Z) - Chaos Theory and Adversarial Robustness [0.0]
This paper uses ideas from Chaos Theory to explain, analyze, and quantify the degree to which neural networks are susceptible to or robust against adversarial attacks.
We present a new metric, the "susceptibility ratio," given by $\hat{\Psi}(h, \theta)$, which captures how greatly a model's output will be changed by perturbations to a given input.
arXiv Detail & Related papers (2022-10-20T03:39:44Z) - Membership Inference Attacks and Defenses in Neural Network Pruning [5.856147967309101]
We conduct the first analysis of privacy risks in neural network pruning.
Specifically, we investigate the impacts of neural network pruning on training data privacy.
We propose a new defense mechanism to protect the pruning process by mitigating the prediction divergence.
arXiv Detail & Related papers (2022-02-07T16:31:53Z) - Residual Error: a New Performance Measure for Adversarial Robustness [85.0371352689919]
A major challenge limiting the widespread adoption of deep learning models has been their fragility to adversarial attacks.
This study presents the concept of residual error, a new performance measure for assessing the adversarial robustness of a deep neural network.
Experimental results on image classification demonstrate the effectiveness of the proposed residual error metric.
arXiv Detail & Related papers (2021-06-18T16:34:23Z) - Explainable Adversarial Attacks in Deep Neural Networks Using Activation
Profiles [69.9674326582747]
This paper presents a visual framework to investigate neural network models subjected to adversarial examples.
We show how observing such activation profiles can help quickly pinpoint exploited areas in a model.
arXiv Detail & Related papers (2021-03-18T13:04:21Z) - Non-Singular Adversarial Robustness of Neural Networks [58.731070632586594]
Adversarial robustness has become an emerging challenge for neural networks owing to their over-sensitivity to small input perturbations.
We formalize the notion of non-singular adversarial robustness for neural networks through the lens of joint perturbations to data inputs as well as model weights.
arXiv Detail & Related papers (2021-02-23T20:59:30Z) - Attribute-Guided Adversarial Training for Robustness to Natural
Perturbations [64.35805267250682]
We propose an adversarial training approach which learns to generate new samples so as to maximize exposure of the classifier to the attributes-space.
Our approach enables deep neural networks to be robust against a wide range of naturally occurring perturbations.
arXiv Detail & Related papers (2020-12-03T10:17:30Z) - Improving Adversarial Robustness by Enforcing Local and Global
Compactness [19.8818435601131]
Adversarial training is the most successful method for consistently resisting a wide range of attacks.
We propose the Adversary Divergence Reduction Network which enforces local/global compactness and the clustering assumption.
The experimental results demonstrate that augmenting adversarial training with our proposed components can further improve the robustness of the network.
arXiv Detail & Related papers (2020-07-10T00:43:06Z) - Bridging Mode Connectivity in Loss Landscapes and Adversarial Robustness [97.67477497115163]
We use mode connectivity to study the adversarial robustness of deep neural networks.
Our experiments cover various types of adversarial attacks applied to different network architectures and datasets.
Our results suggest that mode connectivity offers a holistic tool and practical means for evaluating and improving adversarial robustness.
arXiv Detail & Related papers (2020-04-30T19:12:50Z) - Learn2Perturb: an End-to-end Feature Perturbation Learning to Improve
Adversarial Robustness [79.47619798416194]
Learn2Perturb is an end-to-end feature perturbation learning approach for improving the adversarial robustness of deep neural networks.
Inspired by Expectation-Maximization, an alternating back-propagation training algorithm is introduced to train the network and noise parameters consecutively (an illustrative sketch of such an alternating update follows this list).
arXiv Detail & Related papers (2020-03-02T18:27:35Z)
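The Learn2Perturb entry above describes an EM-inspired alternation between network weights and learned noise parameters. The sketch below shows one plausible form of such an alternating update in PyTorch; the noise-injection layer, optimizers, and schedule are assumptions for illustration, not the authors' exact algorithm.

```python
# Hedged sketch: inject learnable Gaussian noise into features and alternate
# updates between network weights and noise scales (Learn2Perturb-inspired).
import torch
import torch.nn as nn

class NoisyLinear(nn.Module):
    # Linear layer with a learnable per-feature perturbation scale (assumed form).
    def __init__(self, d_in, d_out):
        super().__init__()
        self.linear = nn.Linear(d_in, d_out)
        self.log_sigma = nn.Parameter(torch.full((d_out,), -2.0))

    def forward(self, x):
        h = self.linear(x)
        if self.training:                     # perturb features only in training
            h = h + torch.exp(self.log_sigma) * torch.randn_like(h)
        return h

model = nn.Sequential(NoisyLinear(784, 256), nn.ReLU(), nn.Linear(256, 10))
net_params = [p for n, p in model.named_parameters() if "log_sigma" not in n]
noise_params = [p for n, p in model.named_parameters() if "log_sigma" in n]
opt_net = torch.optim.SGD(net_params, lr=1e-2)
opt_noise = torch.optim.SGD(noise_params, lr=1e-3)

x = torch.rand(64, 784)                       # stand-in batch (MNIST sized)
y = torch.randint(0, 10, (64,))
for step in range(100):
    loss = nn.functional.cross_entropy(model(x), y)
    # Alternate which parameter group is updated, in the spirit of EM.
    opt = opt_net if step % 2 == 0 else opt_noise
    model.zero_grad()
    loss.backward()
    opt.step()
```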
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.