An Estimator for the Sensitivity to Perturbations of Deep Neural
Networks
- URL: http://arxiv.org/abs/2307.12679v1
- Date: Mon, 24 Jul 2023 10:33:32 GMT
- Title: An Estimator for the Sensitivity to Perturbations of Deep Neural
Networks
- Authors: Naman Maheshwari, Nicholas Malaya, Scott Moe, Jaydeep P. Kulkarni,
Sudhanva Gurumurthi
- Abstract summary: This paper derives an estimator that can predict the sensitivity of a given Deep Neural Network to perturbations in input.
An approximation of the estimator is tested on two Convolutional Neural Networks, AlexNet and VGG-19, using the ImageNet dataset.
- Score: 0.31498833540989407
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: For Deep Neural Networks (DNNs) to become useful in safety-critical
applications, such as self-driving cars and disease diagnosis, they must be
stable to perturbations in input and model parameters. Characterizing the
sensitivity of a DNN to perturbations is necessary to determine minimal
bit-width precision that may be used to safely represent the network. However,
no general result exists that is capable of predicting the sensitivity of a
given DNN to round-off error, noise, or other perturbations in input. This
paper derives an estimator that can predict such quantities. The estimator is
derived via inequalities and matrix norms, and the resulting quantity is
roughly analogous to a condition number for the entire neural network. An
approximation of the estimator is tested on two Convolutional Neural Networks,
AlexNet and VGG-19, using the ImageNet dataset. For each of these networks, the
tightness of the estimator is explored via random perturbations and adversarial
attacks.
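The derivation itself is not reproduced in this summary. As a rough illustration of the idea, the sketch below (Python/PyTorch; the helper names and the toy model are ours, not the authors') computes the product of layer spectral norms, a standard Lipschitz-style bound that plays the role of a network-wide condition number, and compares it with the amplification observed under small random input perturbations.

```python
# A minimal sketch, NOT the paper's exact estimator: for a feed-forward
# network with 1-Lipschitz activations (e.g. ReLU), the product of the
# layers' spectral norms upper-bounds ||f(x+dx) - f(x)|| / ||dx||, so it
# behaves like a condition number for the whole network.
import torch
import torch.nn as nn

def spectral_norm_bound(model: nn.Module) -> float:
    """Product of the spectral norms of all Linear layers."""
    bound = 1.0
    for m in model.modules():
        if isinstance(m, nn.Linear):
            # Largest singular value of the weight matrix.
            bound *= torch.linalg.matrix_norm(m.weight, ord=2).item()
    return bound

def empirical_sensitivity(model: nn.Module, x: torch.Tensor,
                          eps: float = 1e-3, trials: int = 100) -> float:
    """Largest amplification seen under small random input perturbations."""
    with torch.no_grad():
        y, worst = model(x), 0.0
        for _ in range(trials):
            dx = eps * torch.randn_like(x)
            worst = max(worst, ((model(x + dx) - y).norm() / dx.norm()).item())
    return worst

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
x = torch.randn(1, 784)
print(f"norm-product bound:     {spectral_norm_bound(model):.3f}")
print(f"observed amplification: {empirical_sensitivity(model, x):.3f}")
```

The gap between the two printed numbers is exactly the tightness question the paper probes with random perturbations and adversarial attacks.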
Related papers
- Verified Neural Compressed Sensing [58.98637799432153]
We develop the first (to the best of our knowledge) provably correct neural networks for a precise computational task.
We show that for modest problem dimensions (up to 50), we can train neural networks that provably recover a sparse vector from linear and binarized linear measurements.
We show that the complexity of the network can be adapted to the problem difficulty and solve problems where traditional compressed sensing methods are not known to provably work.
arXiv Detail & Related papers (2024-05-07T12:20:12Z)
- Deep Neural Networks Tend To Extrapolate Predictably [51.303814412294514]
Conventional wisdom holds that neural network predictions tend to be unpredictable and overconfident when faced with out-of-distribution (OOD) inputs.
We observe that neural network predictions often tend towards a constant value as input data becomes increasingly OOD.
We show how one can leverage our insights in practice to enable risk-sensitive decision-making in the presence of OOD inputs.
arXiv Detail & Related papers (2023-10-02T03:25:32Z)
- Certified Invertibility in Neural Networks via Mixed-Integer Programming [16.64960701212292]
Neural networks are known to be vulnerable to adversarial attacks.
Conversely, there may also exist large, meaningful perturbations that do not affect the network's decision.
We discuss how our findings can be useful for invertibility certification in transformations between neural networks.
arXiv Detail & Related papers (2023-01-27T15:40:38Z)
- The #DNN-Verification Problem: Counting Unsafe Inputs for Deep Neural Networks [94.63547069706459]
The #DNN-Verification problem involves counting the number of input configurations of a DNN that result in a violation of a safety property.
We propose a novel approach that returns the exact count of violations.
We present experimental results on a set of safety-critical benchmarks.
arXiv Detail & Related papers (2023-01-17T18:32:01Z)
- An out-of-distribution discriminator based on Bayesian neural network epistemic uncertainty [0.19573380763700712]
Bayesian neural networks (BNNs) are an important type of neural network with built-in capability for quantifying uncertainty.
This paper discusses aleatoric and epistemic uncertainty in BNNs and how they can be calculated; a sketch of the standard Monte Carlo decomposition follows.
arXiv Detail & Related papers (2022-10-18T21:15:33Z)
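As a hedged illustration (the paper's exact procedure may differ), the sketch below estimates the usual predictive-entropy decomposition from Monte Carlo samples of a stochastic network, with MC dropout standing in for a true BNN posterior; `uncertainty_decomposition` and the toy model are our own names.

```python
# Sketch of the usual decomposition, assuming T Monte Carlo samples from
# an (approximate) posterior:
#   total     = H[ mean_t p_t ]     (predictive entropy)
#   aleatoric = mean_t H[ p_t ]     (expected entropy)
#   epistemic = total - aleatoric   (mutual information)
import torch
import torch.nn as nn

def uncertainty_decomposition(model: nn.Module, x: torch.Tensor, samples: int = 50):
    model.train()  # keep dropout active: a cheap stand-in for a BNN posterior
    with torch.no_grad():
        probs = torch.stack([model(x).softmax(dim=-1) for _ in range(samples)])
    mean = probs.mean(dim=0)
    total = -(mean * mean.clamp_min(1e-12).log()).sum(dim=-1)
    aleatoric = -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1).mean(dim=0)
    return total, aleatoric, total - aleatoric  # epistemic last

model = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Dropout(0.2), nn.Linear(64, 3))
total, aleatoric, epistemic = uncertainty_decomposition(model, torch.randn(4, 10))
print(epistemic)  # high values flag inputs such a discriminator would reject
```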
- On the Neural Tangent Kernel Analysis of Randomly Pruned Neural Networks [91.3755431537592]
We study how random pruning of the weights affects a neural network's neural tangent kernel (NTK).
In particular, this work establishes an equivalence between the NTK of a fully-connected neural network and that of its randomly pruned version; an empirical-NTK sketch follows.
arXiv Detail & Related papers (2022-03-27T15:22:19Z)
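For concreteness, the empirical NTK between two inputs is the inner product of the parameter gradients of the network output; the sketch below (our illustration, not the authors' code) evaluates it before and after randomly zeroing half of the weights.

```python
# Empirical NTK sketch: K(x1, x2) = <grad_theta f(x1), grad_theta f(x2)>.
# Randomly pruning weights changes the NTK; the paper studies when the
# dense and pruned kernels agree.
import torch
import torch.nn as nn

def empirical_ntk(model: nn.Module, x1: torch.Tensor, x2: torch.Tensor) -> float:
    def grad_vec(x):
        model.zero_grad()
        model(x).sum().backward()
        return torch.cat([p.grad.flatten() for p in model.parameters()])
    return torch.dot(grad_vec(x1), grad_vec(x2)).item()

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(32, 128), nn.ReLU(), nn.Linear(128, 1))
x1, x2 = torch.randn(1, 32), torch.randn(1, 32)
print("dense NTK :", empirical_ntk(model, x1, x2))

# Random pruning: zero each weight independently with probability 0.5.
with torch.no_grad():
    for p in model.parameters():
        if p.dim() > 1:  # prune weight matrices, keep biases
            p.mul_((torch.rand_like(p) > 0.5).float())
print("pruned NTK:", empirical_ntk(model, x1, x2))
```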
- The Compact Support Neural Network [6.47243430672461]
We present a generalized neuron that recovers the standard dot-product neuron and the RBF neuron as the two extremes of a shape parameter.
We show how to avoid difficulties in training a network with such neurons by starting from a trained standard network and gradually increasing the shape parameter to the desired value; a toy interpolation sketch follows.
arXiv Detail & Related papers (2021-04-01T06:08:09Z)
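The sketch below is our guess at the flavor of such an interpolation, not the paper's exact formula: a shape parameter lam slides the pre-activation between a plain dot product (lam = 0) and an RBF-style distance term (lam = 1), at which point the ReLU output is zero outside a ball around the weight vector, i.e. it has compact support.

```python
# Toy interpolating neuron (hypothetical form, NOT the paper's formula):
#   g(x) = w.x + b - (lam / 2) * (||x||^2 + ||w||^2)
# lam = 0 gives the standard affine neuron; lam = 1 gives
# b - ||x - w||^2 / 2, an RBF-style response with compact support after ReLU.
import torch

def interpolating_neuron(x, w, b, lam):
    dot = x @ w
    return torch.relu(dot + b - 0.5 * lam * ((x * x).sum(-1) + (w * w).sum()))

x = torch.randn(5, 8)
w, b = torch.randn(8), torch.tensor(1.0)
for lam in (0.0, 0.5, 1.0):  # gradually increase lam, as the paper suggests
    print(lam, interpolating_neuron(x, w, b, lam))
```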
- Performance Bounds for Neural Network Estimators: Applications in Fault Detection [2.388501293246858]
We exploit recent results in quantifying the robustness of neural networks to construct and tune a model-based anomaly detector.
In tuning, we specifically provide upper bounds on the rate of false alarms expected under normal operation.
arXiv Detail & Related papers (2021-03-22T19:23:08Z)
- Non-Singular Adversarial Robustness of Neural Networks [58.731070632586594]
Adversarial robustness has become an emerging challenge for neural networks owing to their over-sensitivity to small input perturbations.
We formalize the notion of non-singular adversarial robustness for neural networks through the lens of joint perturbations to data inputs as well as model weights.
arXiv Detail & Related papers (2021-02-23T20:59:30Z)
- GraN: An Efficient Gradient-Norm Based Detector for Adversarial and Misclassified Examples [77.99182201815763]
Deep neural networks (DNNs) are vulnerable to adversarial examples and other data perturbations.
GraN is a time- and parameter-efficient method that is easily adaptable to any DNN.
GraN achieves state-of-the-art performance on numerous problem set-ups.
arXiv Detail & Related papers (2020-04-20T10:09:27Z)
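As a minimal reconstruction of the gradient-norm idea (the paper's layer-wise features, smoothing, and calibration are omitted, and the threshold below is arbitrary), the sketch scores an input by the norm of the loss gradient taken at the network's own prediction.

```python
# Simplified gradient-norm detector in the spirit of GraN: inputs whose
# loss gradient at the *predicted* label has unusually large norm are
# flagged as likely adversarial or misclassified.
import torch
import torch.nn as nn
import torch.nn.functional as F

def gradient_norm_score(model: nn.Module, x: torch.Tensor) -> float:
    model.zero_grad()
    logits = model(x)
    pred = logits.argmax(dim=-1)          # use the network's own prediction
    loss = F.cross_entropy(logits, pred)  # self-referential loss
    loss.backward()
    return torch.cat([p.grad.flatten() for p in model.parameters()]).norm().item()

model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
score = gradient_norm_score(model, torch.randn(1, 784))
flagged = score > 1.0  # threshold would be calibrated on held-out clean data
print(score, flagged)
```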
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.