Non-Singular Adversarial Robustness of Neural Networks
- URL: http://arxiv.org/abs/2102.11935v1
- Date: Tue, 23 Feb 2021 20:59:30 GMT
- Title: Non-Singular Adversarial Robustness of Neural Networks
- Authors: Yu-Lin Tsai, Chia-Yi Hsu, Chia-Mu Yu, Pin-Yu Chen
- Abstract summary: Adversarial robustness has become an emerging challenge for neural networks owing to their over-sensitivity to small input perturbations.
We formalize the notion of non-singular adversarial robustness for neural networks through the lens of joint perturbations to data inputs as well as model weights.
- Score: 58.731070632586594
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Adversarial robustness has become an emerging challenge for neural
networks owing to their over-sensitivity to small input perturbations. While being
critical, we argue that solving this singular issue alone fails to provide a
comprehensive robustness assessment. Even worse, the conclusions drawn from
singular robustness may give a false sense of overall model robustness.
Specifically, our findings show that adversarially trained models that are
robust to input perturbations are still (or even more) vulnerable to weight
perturbations when compared to standard models. In this paper, we formalize the
notion of non-singular adversarial robustness for neural networks through the
lens of joint perturbations to data inputs as well as model weights. To the
best of our knowledge, this study is the first work considering simultaneous
input-weight adversarial perturbations. Based on a multi-layer feed-forward
neural network model with ReLU activation functions and standard classification
loss, we establish error analysis for quantifying the loss sensitivity subject
to $\ell_\infty$-norm bounded perturbations on data inputs and model weights.
Based on the error analysis, we propose novel regularization functions for
robust training and demonstrate improved non-singular robustness against joint
input-weight adversarial perturbations.
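As a rough illustration of this setting (a sketch, not the authors' exact procedure), the PyTorch snippet below probes loss sensitivity under joint $\ell_\infty$-bounded perturbations: one signed-gradient step on the inputs and one on the weights of a small ReLU feed-forward classifier. The budgets `eps_x` and `eps_w` are illustrative placeholders.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def joint_loss_sensitivity(model, x, y, eps_x=0.03, eps_w=0.01):
    """Clean loss vs. loss after one joint l_inf step on inputs and weights."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    # Gradients w.r.t. the weights (graph kept for the input gradient below).
    grads_w = torch.autograd.grad(loss, list(model.parameters()), retain_graph=True)
    grad_x, = torch.autograd.grad(loss, x)

    # One-step l_inf ascent on the input (FGSM-style).
    x_adv = (x + eps_x * grad_x.sign()).detach()

    # One-step l_inf ascent on every weight tensor, then evaluate and restore.
    originals = [p.detach().clone() for p in model.parameters()]
    with torch.no_grad():
        for p, g in zip(model.parameters(), grads_w):
            p.add_(eps_w * g.sign())
        perturbed = F.cross_entropy(model(x_adv), y).item()
        for p, p0 in zip(model.parameters(), originals):
            p.copy_(p0)
    return loss.item(), perturbed

# Example: a small ReLU feed-forward classifier, as in the paper's setting.
model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))
clean, perturbed = joint_loss_sensitivity(model, x, y)
print(f"clean loss {clean:.3f} -> jointly perturbed loss {perturbed:.3f}")
```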
Related papers
- Cycle Consistency-based Uncertainty Quantification of Neural Networks in Inverse Imaging Problems [10.992084413881592]
Uncertainty estimation is critical for numerous applications of deep neural networks.
We show an uncertainty quantification approach for deep neural networks used in inverse problems based on cycle consistency.
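A minimal sketch of the cycle-consistency idea under assumed names (`A` for the known forward operator, `f` for the learned inverse); the paper's estimator is more involved:
```python
import torch

def cycle_uncertainty(f, A, y):
    """Cycle discrepancy ||A(f(y)) - y||, a per-measurement uncertainty proxy."""
    with torch.no_grad():
        return torch.norm(A(f(y)) - y).item()

# Toy usage: a random linear forward model and its pseudo-inverse as a
# stand-in for the trained reconstruction network.
M = torch.randn(5, 8)
A = lambda x: x @ M.T                      # forward operator
f = lambda y: y @ torch.linalg.pinv(M).T   # "reconstruction" stand-in
y = torch.randn(1, 5)
print(cycle_uncertainty(f, A, y))  # near zero: the cycle is consistent
```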
arXiv Detail & Related papers (2023-05-22T09:23:18Z)
- Chaos Theory and Adversarial Robustness [0.0]
This paper uses ideas from Chaos Theory to explain, analyze, and quantify the degree to which neural networks are susceptible to or robust against adversarial attacks.
We present a new metric, the "susceptibility ratio," given by $\hat{\Psi}(h, \theta)$, which captures how greatly a model's output will be changed by perturbations to a given input.
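One plausible reading of such a ratio (the paper's precise definition of $\hat{\Psi}(h, \theta)$ may differ) is the output displacement relative to the size of the input perturbation:
```python
import torch
import torch.nn as nn

def susceptibility_ratio(model, x, delta):
    """Output displacement relative to input perturbation size (illustrative)."""
    with torch.no_grad():
        return (torch.norm(model(x + delta) - model(x)) / torch.norm(delta)).item()

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
x = torch.randn(1, 10)
delta = 1e-3 * torch.randn(1, 10)
print(susceptibility_ratio(model, x, delta))  # > 1 indicates amplification
```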
arXiv Detail & Related papers (2022-10-20T03:39:44Z)
- RoMA: a Method for Neural Network Robustness Measurement and Assessment [0.0]
We present a new statistical method called Robustness Measurement and Assessment (RoMA).
RoMA determines the probability that a random input perturbation might cause misclassification.
One interesting insight obtained through this work is that, in a classification network, different output labels can exhibit very different robustness levels.
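A hedged Monte Carlo sketch of that question (the authors' statistical method is more refined than plain sampling): draw random $\ell_\infty$-bounded perturbations and count label flips per example; grouping the results by predicted label exposes the label-wise robustness differences noted above:
```python
import torch
import torch.nn as nn

def flip_probability(model, x, eps=0.05, n_samples=500):
    """Per-example probability that a random l_inf perturbation flips the label."""
    with torch.no_grad():
        base = model(x).argmax(dim=-1)
        flips = torch.zeros(x.shape[0])
        for _ in range(n_samples):
            noise = torch.empty_like(x).uniform_(-eps, eps)
            flips += (model(x + noise).argmax(dim=-1) != base).float()
        return flips / n_samples

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 5))
x = torch.randn(8, 20)
print(flip_probability(model, x))  # group by predicted label for label-wise stats
```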
arXiv Detail & Related papers (2021-10-21T12:01:54Z)
- Generalization of Neural Combinatorial Solvers Through the Lens of Adversarial Robustness [68.97830259849086]
Most datasets only capture a simpler subproblem and likely suffer from spurious features.
We study adversarial robustness - a local generalization property - to reveal hard, model-specific instances and spurious features.
Unlike in other applications, where perturbation models are designed around subjective notions of imperceptibility, our perturbation models are efficient and sound.
Surprisingly, with such perturbations, a sufficiently expressive neural solver does not suffer from the limitations of the accuracy-robustness trade-off common in supervised learning.
arXiv Detail & Related papers (2021-10-21T07:28:11Z)
- Pruning in the Face of Adversaries [0.0]
We evaluate the impact of neural network pruning on the adversarial robustness against $\ell_0$, $\ell_2$, and $\ell_\infty$ attacks.
Our results confirm that neural network pruning and adversarial robustness are not mutually exclusive.
We extend our analysis to situations that incorporate additional assumptions on the adversarial scenario and show that depending on the situation, different strategies are optimal.
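As a rough illustration (not the paper's evaluation protocol), one can magnitude-prune a toy network with PyTorch's pruning utilities and compare accuracy under a one-step $\ell_\infty$ attack before and after pruning:
```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.nn.utils.prune as prune

def fgsm_accuracy(model, x, y, eps=0.03):
    """Accuracy under a one-step l_inf (FGSM-style) input attack."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad, = torch.autograd.grad(loss, x)
    x_adv = (x + eps * grad.sign()).detach()
    with torch.no_grad():
        return (model(x_adv).argmax(dim=-1) == y).float().mean().item()

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
x, y = torch.randn(64, 784), torch.randint(0, 10, (64,))
before = fgsm_accuracy(model, x, y)
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.5)
after = fgsm_accuracy(model, x, y)
print(f"robust accuracy: {before:.2f} -> {after:.2f} after 50% pruning")
```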
arXiv Detail & Related papers (2021-08-19T09:06:16Z)
- Residual Error: a New Performance Measure for Adversarial Robustness [85.0371352689919]
A major challenge that limits the widespread adoption of deep learning has been its fragility to adversarial attacks.
This study presents the concept of residual error, a new performance measure for assessing the adversarial robustness of a deep neural network.
Experimental results using the case of image classification demonstrate the effectiveness and efficacy of the proposed residual error metric.
arXiv Detail & Related papers (2021-06-18T16:34:23Z)
- Formalizing Generalization and Robustness of Neural Networks to Weight Perturbations [58.731070632586594]
We provide the first formal analysis for feed-forward neural networks with non-negative monotone activation functions against weight perturbations.
We also design a new theory-driven loss function for training generalizable and robust neural networks against weight perturbations.
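A hedged sketch of a norm-based penalty in this spirit (the paper's theory-driven loss differs in detail), using the fact that the $\ell_\infty$ operator norm of a linear layer is its maximum absolute row sum:
```python
import torch
import torch.nn as nn

def weight_norm_penalty(model: nn.Module) -> torch.Tensor:
    # l_inf operator norm of a linear layer = max absolute row sum of its weight.
    norms = [layer.weight.abs().sum(dim=1).max()
             for layer in model.modules() if isinstance(layer, nn.Linear)]
    return torch.stack(norms).sum()  # illustrative additive penalty

# Usage inside a training step (loss is the ordinary task loss):
# total_loss = loss + 1e-3 * weight_norm_penalty(model)
```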
arXiv Detail & Related papers (2021-03-03T06:17:03Z)
- Do Wider Neural Networks Really Help Adversarial Robustness? [92.8311752980399]
We show that the model robustness is closely related to the tradeoff between natural accuracy and perturbation stability.
We propose a new Width Adjusted Regularization (WAR) method that adaptively enlarges $\lambda$ on wide models.
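A hedged sketch of the width-adjusted idea, written against a TRADES-style objective with an assumed linear-in-width scaling rule; the paper's actual adjustment may differ:
```python
import torch.nn.functional as F

def war_loss(model, x, y, x_adv, base_lambda=6.0, width=256, base_width=64):
    lam = base_lambda * (width / base_width)   # assumed monotone scaling in width
    natural = F.cross_entropy(model(x), y)     # natural-accuracy term
    stability = F.kl_div(F.log_softmax(model(x_adv), dim=1),
                         F.softmax(model(x), dim=1),
                         reduction="batchmean")  # perturbation-stability term
    return natural + lam * stability
```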
arXiv Detail & Related papers (2020-10-03T04:46:17Z)
- Bridging Mode Connectivity in Loss Landscapes and Adversarial Robustness [97.67477497115163]
We use mode connectivity to study the adversarial robustness of deep neural networks.
Our experiments cover various types of adversarial attacks applied to different network architectures and datasets.
Our results suggest that mode connectivity offers a holistic tool and practical means for evaluating and improving adversarial robustness.
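An illustrative probe of such weight-space paths (the paper learns curved connecting paths; plain linear interpolation is shown here for brevity):
```python
import copy
import torch
import torch.nn.functional as F

def loss_along_path(model_a, model_b, x, y, steps=11):
    """Cross-entropy loss at weights interpolated between two trained models."""
    probe = copy.deepcopy(model_a)
    losses = []
    with torch.no_grad():
        for t in torch.linspace(0, 1, steps):
            for p, pa, pb in zip(probe.parameters(),
                                 model_a.parameters(), model_b.parameters()):
                p.copy_((1 - t) * pa + t * pb)
            losses.append(F.cross_entropy(probe(x), y).item())
    return losses  # a high barrier along the path suggests poorly connected modes
```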
arXiv Detail & Related papers (2020-04-30T19:12:50Z)