Adversarial Robustness Guarantees for Random Deep Neural Networks
- URL: http://arxiv.org/abs/2004.05923v2
- Date: Thu, 22 Jul 2021 13:53:02 GMT
- Title: Adversarial Robustness Guarantees for Random Deep Neural Networks
- Authors: Giacomo De Palma, Bobak T. Kiani and Seth Lloyd
- Abstract summary: Adversarial examples are incorrectly classified inputs that are extremely close to a correctly classified input.
We prove that for any $p\ge1$, the $\ell^p$ distance of any given input from the classification boundary scales as one over the square root of the dimension of the input times the $\ell^p$ norm of the input.
The results constitute a fundamental advance in the theoretical understanding of adversarial examples, and open the way to a thorough theoretical characterization of the relation between network architecture and robustness to adversarial perturbations.
- Score: 15.68430580530443
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The reliability of deep learning algorithms is fundamentally challenged by
the existence of adversarial examples, which are incorrectly classified inputs
that are extremely close to a correctly classified input. We explore the
properties of adversarial examples for deep neural networks with random weights
and biases, and prove that for any $p\ge1$, the $\ell^p$ distance of any given
input from the classification boundary scales as one over the square root of
the dimension of the input times the $\ell^p$ norm of the input. The results
are based on the recently proved equivalence between Gaussian processes and
deep neural networks in the limit of infinite width of the hidden layers, and
are validated with experiments on both random deep neural networks and deep
neural networks trained on the MNIST and CIFAR10 datasets. The results
constitute a fundamental advance in the theoretical understanding of
adversarial examples, and open the way to a thorough theoretical
characterization of the relation between network architecture and robustness to
adversarial perturbations.
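The scaling law above is easy to probe numerically. Below is a minimal NumPy sketch, not the authors' experimental setup: it uses the standard first-order proxy $|f(x)|/\|\nabla f(x)\|_2$ for the $\ell^2$ distance of $x$ to the decision boundary $\{f=0\}$ of a random ReLU network, and checks that the median distance shrinks roughly like $\|x\|_2/\sqrt{n}$ as the input dimension $n$ grows. The width, depth, and He-scaled Gaussian initialization are illustrative choices.

```python
# Minimal numerical sketch (illustrative, not the paper's experiment):
# estimate the l2 distance from a random input to the decision boundary
# of a random ReLU network via the proxy |f(x)| / ||grad f(x)||_2, and
# check that it shrinks like ||x||_2 / sqrt(n) with input dimension n.
import numpy as np

rng = np.random.default_rng(0)

def random_net(n, width=512, depth=3):
    """He-scaled Gaussian weights for a scalar-output ReLU network."""
    dims = [n] + [width] * depth + [1]
    return [rng.normal(0.0, np.sqrt(2.0 / d_in), size=(d_out, d_in))
            for d_in, d_out in zip(dims[:-1], dims[1:])]

def value_and_grad(weights, x):
    """Return f(x) and grad_x f(x) via hand-rolled backprop through ReLU."""
    h, cache = x, []
    for W in weights[:-1]:
        pre = W @ h
        cache.append((W, pre > 0))        # ReLU masks for the backward pass
        h = np.maximum(pre, 0.0)
    f = (weights[-1] @ h).item()
    g = weights[-1].ravel()
    for W, mask in reversed(cache):
        g = (g * mask) @ W                # chain rule through ReLU and linear
    return f, g

for n in [64, 256, 1024, 4096]:
    dists = []
    for _ in range(30):
        weights = random_net(n)
        x = rng.normal(size=n)
        x /= np.linalg.norm(x)            # fix ||x||_2 = 1
        f, g = value_and_grad(weights, x)
        dists.append(abs(f) / np.linalg.norm(g))
    d = float(np.median(dists))
    # If the 1/sqrt(n) scaling holds, d * sqrt(n) stays roughly constant.
    print(f"n={n:5d}  median distance={d:.5f}  distance*sqrt(n)={d * np.sqrt(n):.3f}")
```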
Related papers
- Towards unlocking the mystery of adversarial fragility of neural networks [6.589200529058999]
We look at the smallest magnitude of possible additive perturbations that can change the output of a classification algorithm.
We provide a matrix-theoretic explanation of the adversarial fragility of deep neural networks for classification (a toy linear-classifier sketch follows below).
arXiv Detail & Related papers (2024-06-23T19:37:13Z)
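For intuition about the smallest class-flipping perturbation discussed in the entry above, here is a textbook linear-classifier special case, not the paper's matrix-theoretic analysis: for $f(x)=Wx+b$ with predicted class $i$, the nearest decision boundary lies at $\ell^2$ distance $\min_{j\ne i}(f_i-f_j)/\|w_i-w_j\|_2$, and the minimizing perturbation has a closed form. The sizes below are arbitrary.

```python
# Textbook special case for intuition (not this paper's analysis): the
# smallest l2 perturbation that changes a linear classifier's decision.
import numpy as np

def min_flip_perturbation(W, b, x):
    """Smallest-norm delta with argmax(W(x+delta)+b) != argmax(Wx+b)."""
    f = W @ x + b
    i = int(np.argmax(f))
    best_dist, best_delta = np.inf, None
    for j in range(len(f)):
        if j == i:
            continue
        w = W[i] - W[j]                          # boundary normal for (i, j)
        dist = (f[i] - f[j]) / np.linalg.norm(w)
        if dist < best_dist:
            # Step exactly onto the i/j decision boundary.
            best_dist = dist
            best_delta = -(f[i] - f[j]) * w / np.linalg.norm(w) ** 2
    return best_delta

rng = np.random.default_rng(0)
W, b = rng.normal(size=(10, 784)), rng.normal(size=10)
x = rng.normal(size=784)
delta = min_flip_perturbation(W, b, x) * 1.001   # tiny overshoot to cross
print(np.argmax(W @ x + b), "->", np.argmax(W @ (x + delta) + b))
print("perturbation / input norm:", np.linalg.norm(delta) / np.linalg.norm(x))
```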
- Compositional Curvature Bounds for Deep Neural Networks [7.373617024876726]
A key challenge that threatens the widespread use of neural networks in safety-critical applications is their vulnerability to adversarial attacks.
We study the second-order behavior of continuously differentiable deep neural networks, focusing on robustness against adversarial perturbations.
We introduce a novel algorithm to analytically compute provable upper bounds on the second derivative of neural networks.
arXiv Detail & Related papers (2024-06-07T17:50:15Z)
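As a hedged one-dimensional illustration of how curvature bounds compose (the generic chain-rule bound, not the algorithm of the entry above): for $f = g\circ h$ with Lipschitz bounds $L$ and second-derivative bounds $M$, the identity $f'' = g''(h)\,h'^2 + g'(h)\,h''$ gives $M_f \le M_g L_h^2 + L_g M_h$ and $L_f \le L_g L_h$. The sketch propagates $(L, M)$ pairs through scalar tanh layers; the scale values are arbitrary.

```python
# Hedged 1-D illustration of compositional curvature bounds (not the
# paper's algorithm): propagate (Lipschitz, curvature) bound pairs.
import math

TANH_L = 1.0                              # max |tanh'(x)|
TANH_M = 4.0 / (3.0 * math.sqrt(3.0))     # max |tanh''(x)|, about 0.770

def compose(outer, inner):
    """(L, M) bound for outer o inner from the bounds of each piece."""
    L_g, M_g = outer
    L_h, M_h = inner
    return (L_g * L_h, M_g * L_h ** 2 + L_g * M_h)

def chain_bounds(scales):
    """Provable (L, M) bounds for x -> tanh(a_k * ... * tanh(a_1 * x))."""
    bound = (1.0, 0.0)                            # identity map
    for a in scales:
        bound = compose((abs(a), 0.0), bound)     # linear piece x -> a*x
        bound = compose((TANH_L, TANH_M), bound)  # tanh activation
    return bound

L, M = chain_bounds([1.5, -0.8, 2.0])
print(f"Lipschitz bound L = {L:.3f}, curvature bound M = {M:.3f}")
```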
- Deep Neural Networks Tend To Extrapolate Predictably [51.303814412294514]
Neural network predictions tend to be unpredictable and overconfident when faced with out-of-distribution (OOD) inputs.
We observe that neural network predictions often tend towards a constant value as input data becomes increasingly OOD.
We show how one can leverage our insights in practice to enable risk-sensitive decision-making in the presence of OOD inputs.
arXiv Detail & Related papers (2023-10-02T03:25:32Z)
- Addressing caveats of neural persistence with deep graph persistence [54.424983583720675]
We find that the variance of network weights and spatial concentration of large weights are the main factors that impact neural persistence.
We propose an extension of the filtration underlying neural persistence to the whole neural network instead of single layers.
This yields our deep graph persistence measure, which implicitly incorporates persistent paths through the network and alleviates variance-related issues.
arXiv Detail & Related papers (2023-07-20T13:34:11Z)
- Neural Network Pruning as Spectrum Preserving Process [7.386663473785839]
We identify the close connection between matrix spectrum learning and neural network training for dense and convolutional layers.
We propose a matrix sparsification algorithm tailored for neural network pruning that yields better pruning results.
arXiv Detail & Related papers (2023-07-18T05:39:32Z)
- Robust Training and Verification of Implicit Neural Networks: A Non-Euclidean Contractive Approach [64.23331120621118]
This paper proposes a theoretical and computational framework for training and robustness verification of implicit neural networks.
We introduce a related embedded network and show that the embedded network can be used to provide an $\ell_\infty$-norm box over-approximation of the reachable sets of the original network.
We apply our algorithms to train implicit neural networks on the MNIST dataset and compare the robustness of our models with the models trained via existing approaches in the literature.
arXiv Detail & Related papers (2022-08-08T03:13:24Z)
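To make the notion of an $\ell_\infty$-norm box over-approximation concrete, here is standard interval arithmetic on an explicit ReLU network; the embedded-network construction for implicit networks in the entry above is more involved, and everything here (sizes, initialization) is illustrative.

```python
# What an l_infty box over-approximation means, sketched with standard
# interval arithmetic on an explicit ReLU network (not the paper's
# embedded-network construction for implicit networks).
import numpy as np

def box_propagate(weights, biases, center, radius):
    """Propagate {x : ||x - center||_inf <= radius} through a ReLU net."""
    c, r = np.asarray(center, float), np.full(len(center), float(radius))
    last = len(weights) - 1
    for k, (W, b) in enumerate(zip(weights, biases)):
        c, r = W @ c + b, np.abs(W) @ r           # exact for a linear map
        if k < last:                              # ReLU is monotone, so apply
            lo = np.maximum(c - r, 0.0)           # it to the interval endpoints
            hi = np.maximum(c + r, 0.0)
            c, r = (lo + hi) / 2.0, (hi - lo) / 2.0
    return c - r, c + r                           # sound output bounds

rng = np.random.default_rng(0)
dims = [4, 16, 16, 3]
Ws = [rng.normal(size=(o, i)) / np.sqrt(i) for i, o in zip(dims[:-1], dims[1:])]
bs = [0.1 * rng.normal(size=o) for o in dims[1:]]
x = rng.normal(size=4)
lo, hi = box_propagate(Ws, bs, x, radius=0.05)
print("output lower bounds:", np.round(lo, 3))
print("output upper bounds:", np.round(hi, 3))
```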
- Rank Diminishing in Deep Neural Networks [71.03777954670323]
The rank of a neural network measures the information flowing across its layers.
It is an instance of a key structural condition that applies across broad domains of machine learning.
For neural networks, however, the intrinsic mechanism that yields low-rank structures remains poorly understood.
arXiv Detail & Related papers (2022-06-13T12:03:32Z)
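A quick numerical probe of the rank behaviour described in the entry above, not the authors' method: push a batch through a random deep ReLU network and track the numerical rank of the feature matrix with depth. The width, depth, and tolerance are arbitrary choices.

```python
# Minimal probe of rank diminishing (illustrative, not the paper's method):
# track the numerical rank of a batch of features layer by layer.
import numpy as np

rng = np.random.default_rng(0)

def numerical_rank(H, tol=1e-3):
    """Count singular values above tol times the largest one."""
    s = np.linalg.svd(H, compute_uv=False)
    return int(np.sum(s > tol * s[0]))

batch, width, depth = 256, 256, 20
H = rng.normal(size=(batch, width))               # a batch of random inputs
print(f"input: numerical rank = {numerical_rank(H)}")
for layer in range(1, depth + 1):
    W = rng.normal(0.0, np.sqrt(2.0 / width), size=(width, width))
    H = np.maximum(H @ W.T, 0.0)                  # ReLU feature map
    if layer % 5 == 0:
        print(f"layer {layer:2d}: numerical rank = {numerical_rank(H)}")
```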
- On the uncertainty principle of neural networks [4.014046905033123]
We show that the accuracy-robustness trade-off is an intrinsic property whose underlying mechanism is deeply related to the uncertainty principle in quantum mechanics.
We find that for a neural network to be both accurate and robust, it needs to resolve the features of the two parts $x$ (the inputs) and $\Delta$ (the derivatives of the normalized loss function $J$ with respect to $x$).
arXiv Detail & Related papers (2022-05-03T13:48:12Z)
- Robustness Certificates for Implicit Neural Networks: A Mixed Monotone Contractive Approach [60.67748036747221]
Implicit neural networks offer competitive performance and reduced memory consumption.
However, they can remain brittle with respect to adversarial input perturbations.
This paper proposes a theoretical and computational framework for robustness verification of implicit neural networks.
arXiv Detail & Related papers (2021-12-10T03:08:55Z)
- ESPN: Extremely Sparse Pruned Networks [50.436905934791035]
We show that a simple iterative mask discovery method can achieve state-of-the-art compression of very deep networks.
Our algorithm represents a hybrid approach between single-shot network pruning methods and Lottery-Ticket-style approaches; a generic sketch of magnitude-based mask pruning follows this entry.
arXiv Detail & Related papers (2020-06-28T23:09:27Z)
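As referenced in the ESPN entry above, here is a generic sketch of iterative magnitude-based mask discovery, not the authors' algorithm: each round prunes the smallest-magnitude surviving weights toward a target sparsity, and a real pipeline would retrain between rounds. The schedule and sizes are illustrative.

```python
# Generic iterative magnitude-mask sketch (NOT the ESPN algorithm): zero
# out the smallest surviving weights round by round until a target sparsity.
import numpy as np

def iterative_magnitude_mask(weights, target_sparsity=0.95, rounds=5):
    """0/1 mask reaching ~target_sparsity zeros over `rounds` pruning steps."""
    mask = np.ones_like(weights)
    # Geometric schedule: keep_per_round**rounds == 1 - target_sparsity.
    keep_per_round = (1.0 - target_sparsity) ** (1.0 / rounds)
    for _ in range(rounds):
        surviving = np.abs(weights[mask == 1.0])
        k = max(1, int(keep_per_round * surviving.size))
        threshold = np.partition(surviving, -k)[-k]   # k-th largest magnitude
        mask[np.abs(weights) < threshold] = 0.0
        # A real pipeline would retrain `weights * mask` here.
    return mask

rng = np.random.default_rng(0)
W = rng.normal(size=(256, 256))
mask = iterative_magnitude_mask(W)
print(f"final sparsity: {1.0 - mask.mean():.3f}")    # about 0.95
```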
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.