Model-Agnostic Reachability Analysis on Deep Neural Networks
- URL: http://arxiv.org/abs/2304.00813v1
- Date: Mon, 3 Apr 2023 09:01:59 GMT
- Title: Model-Agnostic Reachability Analysis on Deep Neural Networks
- Authors: Chi Zhang, Wenjie Ruan, Fu Wang, Peipei Xu, Geyong Min, Xiaowei Huang
- Abstract summary: We develop a model-agnostic verification framework, called DeepAgn.
It can be applied to FNNs, Recurrent Neural Networks (RNNs), or a mixture of both.
It does not require access to the network's internal structures, such as layers and parameters.
- Score: 25.54542656637704
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Verification plays an essential role in the formal analysis of
safety-critical systems. Most current verification methods have specific
requirements when working on Deep Neural Networks (DNNs). They either target
one particular network category, e.g., Feedforward Neural Networks (FNNs), or
networks with specific activation functions, e.g., ReLU. In this paper, we
develop a model-agnostic verification framework, called DeepAgn, and show that
it can be applied to FNNs, Recurrent Neural Networks (RNNs), or a mixture of
both. Under the assumption of Lipschitz continuity, DeepAgn analyses the
reachability of DNNs based on a novel optimisation scheme with a global
convergence guarantee. It does not require access to the network's internal
structures, such as layers and parameters. Through reachability analysis,
DeepAgn can tackle several well-known robustness problems, including computing
the maximum safe radius for a given input and generating ground-truth
adversarial examples. We also empirically demonstrate DeepAgn's superior
capability and efficiency in handling a broader class of deep neural networks,
including both FNNs and RNNs with very deep layers and millions of neurons,
than other state-of-the-art verification approaches.
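The abstract describes black-box reachability analysis: bound a Lipschitz-continuous network's outputs over an input region using only queries to the network and a Lipschitz constant, without touching layers or parameters. The paper's actual optimisation scheme is not reproduced here, so the following is only a minimal sketch of the general idea (sample the network over the input ball and widen the sampled extremes by a Lipschitz slack); the function `reachable_interval`, the Lipschitz-constant argument `L`, and the toy network stand-in are illustrative assumptions, not DeepAgn's API.

```python
import numpy as np

def reachable_interval(f, x0, radius, L, n_samples=500, n_probes=1024, seed=0):
    """Bound the output range of a scalar black-box function f over the
    L_inf ball of the given radius around x0, assuming f is L-Lipschitz
    with respect to the L_inf norm. Only queries f (model-agnostic)."""
    rng = np.random.default_rng(seed)
    x0 = np.asarray(x0, dtype=float)
    # Query the network at random points of the ball (plus the centre).
    pts = x0 + rng.uniform(-radius, radius, size=(n_samples, x0.size))
    pts = np.vstack([x0, pts])
    vals = np.array([f(x) for x in pts])
    # Covering radius of the sample set: how far any point of the ball
    # can be (in L_inf) from its nearest queried point. Estimated by
    # Monte Carlo here, so the bounds are approximate, not certified.
    probes = x0 + rng.uniform(-radius, radius, size=(n_probes, x0.size))
    dists = np.abs(probes[:, None, :] - pts[None, :, :]).max(axis=-1)
    cover = dists.min(axis=1).max()
    # Lipschitz continuity: f cannot move more than L * cover away from
    # its value at the nearest sample, so the sampled extremes widened
    # by that slack enclose the true reachable interval.
    slack = L * cover
    return vals.min() - slack, vals.max() + slack

# Toy usage: a stand-in for a black-box network with a known Lipschitz
# constant (a sum of tanh units has L = input dimension under L_inf).
f = lambda x: float(np.tanh(x).sum())
lo, hi = reachable_interval(f, x0=np.zeros(3), radius=0.1, L=3.0)
print(f"reachable output interval: [{lo:.3f}, {hi:.3f}]")
```

Given such an interval oracle, the maximum safe radius mentioned in the abstract could in principle be estimated by binary search on `radius`, growing it until the reachable output interval first crosses the classifier's decision threshold.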
Related papers
- An Automata-Theoretic Approach to Synthesizing Binarized Neural Networks [13.271286153792058]
Quantized neural networks (QNNs) have been developed, with binarized neural networks (BNNs) restricted to binary values as a special case.
This paper presents an automata-theoretic approach to synthesizing BNNs that meet designated properties.
arXiv Detail & Related papers (2023-07-29T06:27:28Z)
- How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z)
- Reachability Analysis of Neural Network Control Systems [10.023618778236697]
Existing verification approaches for neural network control systems (NNCSs) only work with a limited range of activation functions.
This paper proposes a verification framework for NNCSs based on Lipschitzian optimisation, called DeepNNC.
DeepNNC shows superior performance in terms of efficiency and accuracy over a wide range of NNCSs.
arXiv Detail & Related papers (2023-01-28T05:57:37Z)
- Extrapolation and Spectral Bias of Neural Nets with Hadamard Product: a Polynomial Net Study [55.12108376616355]
Studies of the Neural Tangent Kernel (NTK) have been devoted to typical neural network architectures but remain incomplete for neural networks with Hadamard products (NNs-Hp).
In this work, we derive the finite-width NTK formulation for a special class of NNs-Hp, i.e., polynomial neural networks.
We prove their equivalence to the kernel regression predictor with the associated NTK, which expands the application scope of NTK.
arXiv Detail & Related papers (2022-09-16T06:36:06Z)
- Robust Training and Verification of Implicit Neural Networks: A Non-Euclidean Contractive Approach [64.23331120621118]
This paper proposes a theoretical and computational framework for training and robustness verification of implicit neural networks.
We introduce a related embedded network and show that the embedded network can be used to provide an $\ell_\infty$-norm box over-approximation of the reachable sets of the original network.
We apply our algorithms to train implicit neural networks on the MNIST dataset and compare the robustness of our models with the models trained via existing approaches in the literature.
arXiv Detail & Related papers (2022-08-08T03:13:24Z)
- Linear Leaky-Integrate-and-Fire Neuron Model Based Spiking Neural Networks and Its Mapping Relationship to Deep Neural Networks [7.840247953745616]
Spiking neural networks (SNNs) are brain-inspired machine learning algorithms with merits such as biological plausibility and unsupervised learning capability.
This paper establishes a precise mathematical mapping between the biological parameters of the Linear Leaky-Integrate-and-Fire (LIF) model/SNNs and the parameters of ReLU-AN/Deep Neural Networks (DNNs).
arXiv Detail & Related papers (2022-05-31T17:02:26Z)
- Comparative Analysis of Interval Reachability for Robust Implicit and Feedforward Neural Networks [64.23331120621118]
We use interval reachability analysis to obtain robustness guarantees for implicit neural networks (INNs).
INNs are a class of implicit learning models that use implicit equations as layers.
We show that our approach performs at least as well as, and generally better than, applying state-of-the-art interval bound propagation methods to INNs (a minimal interval-propagation sketch appears after this list).
arXiv Detail & Related papers (2022-04-01T03:31:27Z)
- Exploring Architectural Ingredients of Adversarially Robust Deep Neural Networks [98.21130211336964]
Deep neural networks (DNNs) are known to be vulnerable to adversarial attacks.
In this paper, we investigate the impact of network width and depth on the robustness of adversarially trained DNNs.
arXiv Detail & Related papers (2021-10-07T23:13:33Z)
- Abstraction and Symbolic Execution of Deep Neural Networks with Bayesian Approximation of Hidden Features [8.723426955657345]
We propose a novel abstraction method which abstracts a deep neural network and a dataset into a Bayesian network.
We make use of dimensionality reduction techniques to identify hidden features that have been learned by hidden layers of the DNN.
We derive a runtime monitoring approach to detect rare inputs at operational time.
arXiv Detail & Related papers (2021-03-05T14:28:42Z)
- Modeling from Features: a Mean-field Framework for Over-parameterized Deep Neural Networks [54.27962244835622]
This paper proposes a new mean-field framework for over-parameterized deep neural networks (DNNs).
In this framework, a DNN is represented by probability measures and functions over its features in the continuous limit.
We illustrate the framework via the standard DNN and the Residual Network (Res-Net) architectures.
arXiv Detail & Related papers (2020-07-03T01:37:16Z)
- Disentangling Trainability and Generalization in Deep Neural Networks [45.15453323967438]
We analyze the spectrum of the Neural Tangent Kernel (NTK) for trainability and generalization across a range of networks.
We find that CNNs without global average pooling behave almost identically to FCNs, but that CNNs with pooling have markedly different and often better generalization performance.
arXiv Detail & Related papers (2019-12-30T18:53:24Z)
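Two of the entries above (the non-Euclidean contractive approach and the interval reachability comparison) revolve around $\ell_\infty$-norm box, i.e., interval, over-approximations of reachable sets. As a point of reference, here is a minimal sketch of plain interval bound propagation through an explicit ReLU network; it is not the embedded-network or mixed-monotone construction of those papers, and the weights and names are illustrative placeholders.

```python
import numpy as np

def ibp_affine(W, b, lo, hi):
    """Exact interval image of the box [lo, hi] under x -> W @ x + b:
    positive weights pull from the matching bound, negative weights
    from the opposite one."""
    Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b

def ibp_relu(lo, hi):
    """ReLU is monotone, so it maps the box coordinate-wise."""
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

# Push an L_inf ball of radius eps around x0 through a toy two-layer
# ReLU network; the result is a sound box over-approximation of the
# reachable output set (generally loose, which is what the papers
# above improve on for implicit networks).
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)
W2, b2 = rng.normal(size=(2, 8)), np.zeros(2)
x0, eps = np.zeros(4), 0.1
lo, hi = x0 - eps, x0 + eps
lo, hi = ibp_relu(*ibp_affine(W1, b1, lo, hi))
lo, hi = ibp_affine(W2, b2, lo, hi)
print("output box:", lo, hi)
```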