Exploring the Vulnerability of Deep Neural Networks: A Study of
Parameter Corruption
- URL: http://arxiv.org/abs/2006.05620v2
- Date: Thu, 10 Dec 2020 06:02:51 GMT
- Title: Exploring the Vulnerability of Deep Neural Networks: A Study of
Parameter Corruption
- Authors: Xu Sun, Zhiyuan Zhang, Xuancheng Ren, Ruixuan Luo, Liangyou Li
- Abstract summary: We propose an indicator to measure the robustness of neural network parameters by exploiting their vulnerability via parameter corruption.
For practical purposes, we give a gradient-based estimation, which is far more effective than random corruption trials.
- Score: 40.76024057426747
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We argue that the vulnerability of model parameters is of crucial value to
the study of model robustness and generalization, yet little research has been
devoted to understanding it. In this work, we propose an indicator to
measure the robustness of neural network parameters by exploiting their
vulnerability via parameter corruption. The proposed indicator describes the
maximum loss variation in the non-trivial worst-case scenario under parameter
corruption. For practical purposes, we give a gradient-based estimation, which
is far more effective than random corruption trials, as such trials can hardly
induce the worst-case accuracy degradation. Equipped with theoretical support
and empirical validation, we are able to systematically investigate the
robustness of different model parameters and reveal vulnerabilities of deep
neural networks that have rarely received attention before. Moreover, we can
enhance the models accordingly with the proposed adversarial
corruption-resistant training, which not only improves parameter robustness but
also yields gains in accuracy.
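To make the gradient-based estimation concrete, below is a minimal PyTorch-style sketch. The toy model, random data, sign-gradient ascent step, and L-infinity corruption budget `eps` are all illustrative assumptions, not the authors' setup. It contrasts random corruption trials with a single gradient-guided corruption step:

```python
import torch
import torch.nn as nn

# Toy setup: a small MLP and random data stand in for a real model/dataset.
torch.manual_seed(0)
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
x, y = torch.randn(64, 10), torch.randint(0, 2, (64,))
loss_fn = nn.CrossEntropyLoss()

def loss_at(offsets=None):
    """Loss with an optional per-parameter corruption temporarily applied."""
    if offsets is not None:
        with torch.no_grad():
            for p, d in zip(model.parameters(), offsets):
                p.add_(d)
    loss = loss_fn(model(x), y).item()
    if offsets is not None:  # undo the corruption
        with torch.no_grad():
            for p, d in zip(model.parameters(), offsets):
                p.sub_(d)
    return loss

clean = loss_at()
eps = 0.05  # assumed L-infinity corruption budget, for illustration only

# Random corruption trials: uniform noise within the budget.
worst_random = max(
    loss_at([eps * (2 * torch.rand_like(p) - 1) for p in model.parameters()])
    for _ in range(100)
)

# Gradient-based estimate: one ascent step along the sign of the loss
# gradient, i.e. the first-order worst case under the same budget.
model.zero_grad()
loss_fn(model(x), y).backward()
worst_grad = loss_at([eps * p.grad.sign() for p in model.parameters()])

print(f"clean loss          : {clean:.4f}")
print(f"worst of 100 random : {worst_random:.4f}")
print(f"gradient-based      : {worst_grad:.4f}")
```

On toy runs like this, the single gradient-guided step typically uncovers a much larger loss increase than the best of many random trials, which matches the paper's motivation for preferring the gradient-based estimation.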
Related papers
- Over-parameterization and Adversarial Robustness in Neural Networks: An Overview and Empirical Analysis [25.993502776271022]
Having a large parameter space is considered one of the main suspects behind neural networks' vulnerability to adversarial examples.
Previous research has demonstrated that, depending on the model under consideration, the algorithm employed to generate adversarial examples may fail to function properly.
arXiv Detail & Related papers (2024-06-14T14:47:06Z)
- Improving robustness of jet tagging algorithms with adversarial training [56.79800815519762]
We investigate the vulnerability of flavor tagging algorithms via the application of adversarial attacks.
We present an adversarial training strategy that mitigates the impact of such simulated attacks.
arXiv Detail & Related papers (2022-03-25T19:57:19Z)
- Robustness and Accuracy Could Be Reconcilable by (Proper) Definition [109.62614226793833]
The trade-off between robustness and accuracy has been widely studied in the adversarial literature.
We find that it may stem from the improperly defined robust error, which imposes an inductive bias of local invariance.
By definition, the proposed self-consistent robust error (SCORE) facilitates the reconciliation between robustness and accuracy, while still handling the worst-case uncertainty.
arXiv Detail & Related papers (2022-02-21T10:36:09Z)
- Adversarial Parameter Defense by Multi-Step Risk Minimization [22.25435138723355]
We introduce the concept of parameter corruption and propose a multi-step adversarial corruption algorithm.
We show that the proposed algorithm can improve both the parameter robustness and the accuracy of neural networks (a sketch of this style of training loop appears after this list).
arXiv Detail & Related papers (2021-09-07T06:13:32Z)
- Pruning in the Face of Adversaries [0.0]
We evaluate the impact of neural network pruning on adversarial robustness against L-0, L-2, and L-infinity attacks.
Our results confirm that neural network pruning and adversarial robustness are not mutually exclusive.
We extend our analysis to settings that incorporate additional assumptions about the adversarial scenario and show that, depending on the situation, different strategies are optimal.
arXiv Detail & Related papers (2021-08-19T09:06:16Z)
- Residual Error: a New Performance Measure for Adversarial Robustness [85.0371352689919]
A major challenge that limits the widespread adoption of deep neural networks has been their fragility to adversarial attacks.
This study presents the concept of residual error, a new performance measure for assessing the adversarial robustness of a deep neural network.
Experimental results on image classification demonstrate the effectiveness of the proposed residual error metric.
arXiv Detail & Related papers (2021-06-18T16:34:23Z)
- High-Robustness, Low-Transferability Fingerprinting of Neural Networks [78.2527498858308]
This paper proposes Characteristic Examples for effectively fingerprinting deep neural networks.
The fingerprints are highly robust to pruning of the base model while transferring poorly to unassociated models.
arXiv Detail & Related papers (2021-05-14T21:48:23Z)
- Non-Singular Adversarial Robustness of Neural Networks [58.731070632586594]
Adversarial robustness has become an emerging challenge for neural networks owing to their over-sensitivity to small input perturbations.
We formalize the notion of non-singular adversarial robustness for neural networks through the lens of joint perturbations to data inputs as well as model weights.
arXiv Detail & Related papers (2021-02-23T20:59:30Z)
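The adversarial corruption-resistant training from the main paper and the multi-step algorithm of "Adversarial Parameter Defense by Multi-Step Risk Minimization" both train against corruption in parameter space. The following is a minimal sketch of one such training loop, assuming sign-gradient ascent on the weights, an L-infinity budget `eps`, and a fixed number of inner steps; these are illustrative assumptions, not the exact procedures from either paper:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()
x, y = torch.randn(64, 10), torch.randint(0, 2, (64,))

eps, steps = 0.01, 3  # assumed corruption budget and inner ascent steps

for _ in range(10):  # a few outer training iterations
    # Inner loop: build a multi-step corruption by repeated sign-gradient
    # ascent on the parameters, clamped to the L-infinity budget.
    corruption = [torch.zeros_like(p) for p in model.parameters()]
    for _ in range(steps):
        model.zero_grad()
        loss_fn(model(x), y).backward()
        with torch.no_grad():
            for p, c in zip(model.parameters(), corruption):
                step = (eps / steps) * p.grad.sign()
                new_c = (c + step).clamp(-eps, eps)
                p.add_(new_c - c)   # move params to the corrupted point
                c.copy_(new_c)
    # Outer step: take the gradient at the corrupted parameters,
    # then restore the clean parameters before updating them.
    model.zero_grad()
    loss_fn(model(x), y).backward()
    with torch.no_grad():
        for p, c in zip(model.parameters(), corruption):
            p.sub_(c)               # back to the clean parameters
    opt.step()                      # update using the corrupted-point gradient
```

Structurally this mirrors min-max adversarial training, except that the inner maximization perturbs the weights rather than the inputs.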
This list is automatically generated from the titles and abstracts of the papers in this site.