NNV: The Neural Network Verification Tool for Deep Neural Networks and
Learning-Enabled Cyber-Physical Systems
- URL: http://arxiv.org/abs/2004.05519v1
- Date: Sun, 12 Apr 2020 01:29:58 GMT
- Title: NNV: The Neural Network Verification Tool for Deep Neural Networks and
Learning-Enabled Cyber-Physical Systems
- Authors: Hoang-Dung Tran, Xiaodong Yang, Diego Manzanas Lopez, Patrick Musau,
Luan Viet Nguyen, Weiming Xiang, Stanley Bak and Taylor T. Johnson
- Abstract summary: The paper presents a set-based verification framework for deep neural networks (DNNs) and learning-enabled cyber-physical systems (CPS).
The crux of NNV is a collection of reachability algorithms that make use of a variety of set representations, such as polyhedra, star sets, zonotopes, and abstract-domain representations.
We evaluate NNV using two real-world case studies: the first is safety verification of ACAS Xu networks and the second is safety verification of a deep learning-based adaptive cruise control system.
- Score: 14.04358575326059
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: This paper presents the Neural Network Verification (NNV) software tool, a
set-based verification framework for deep neural networks (DNNs) and
learning-enabled cyber-physical systems (CPS). The crux of NNV is a collection
of reachability algorithms that make use of a variety of set representations,
such as polyhedra, star sets, zonotopes, and abstract-domain representations.
NNV supports both exact (sound and complete) and over-approximate (sound)
reachability algorithms for verifying safety and robustness properties of
feed-forward neural networks (FFNNs) with various activation functions. For
learning-enabled CPS, such as closed-loop control systems incorporating neural
networks, NNV provides exact and over-approximate reachability analysis schemes
for linear plant models and FFNN controllers with piecewise-linear activation
functions, such as ReLUs. For similar neural network control systems (NNCS)
that instead have nonlinear plant models, NNV supports over-approximate
analysis by combining the star set analysis used for FFNN controllers with
zonotope-based analysis for nonlinear plant dynamics building on CORA. We
evaluate NNV using two real-world case studies: the first is safety
verification of ACAS Xu networks and the second deals with the safety
verification of a deep learning-based adaptive cruise control system.
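NNV itself is implemented in MATLAB; the sketch below is an illustrative Python rendering of the star-set machinery the abstract describes, not NNV's API. The names (`Star`, `step_relu_exact`, `relu_layer_exact`) are ours, and the LP feasibility checks a real implementation needs to prune empty branches are omitted.

```python
import numpy as np

class Star:
    """A star set {x = c + V @ a : C @ a <= d}: center c, basis V,
    and a linear predicate C a <= d over the variables a."""
    def __init__(self, c, V, C, d):
        self.c, self.V, self.C, self.d = c, V, C, d

    def affine(self, W, b):
        # The affine image W x + b of a star is again a star:
        # only the center and basis change; the predicate is kept.
        return Star(W @ self.c + b, W @ self.V, self.C, self.d)

def step_relu_exact(s, i):
    """Exact image of neuron i under ReLU: split into the x_i >= 0
    branch (identity) and the x_i <= 0 branch (coordinate i zeroed).
    Empty branches should be pruned with an LP; omitted here."""
    vi, ci = s.V[i], s.c[i]
    # Branch x_i >= 0  <=>  -vi @ a <= ci; output unchanged.
    pos = Star(s.c, s.V, np.vstack([s.C, -vi]), np.append(s.d, ci))
    # Branch x_i <= 0  <=>   vi @ a <= -ci; output coordinate i -> 0.
    c2, V2 = s.c.copy(), s.V.copy()
    c2[i], V2[i] = 0.0, 0.0
    neg = Star(c2, V2, np.vstack([s.C, vi]), np.append(s.d, -ci))
    return [pos, neg]

def relu_layer_exact(stars, dim):
    # Process neurons one at a time; the number of stars can grow
    # exponentially, which is what NNV's over-approximate star mode
    # avoids by keeping a single star per layer.
    for i in range(dim):
        stars = [t for s in stars for t in step_relu_exact(s, i)]
    return stars
```

For the closed-loop (NNCS) case with a linear plant, the key property is that every control star the ReLU controller produces shares the state star's predicate variables, with only extra constraints added, so one exact step of x+ = A x + B u reduces to arithmetic on star bases. A sketch under the same assumptions:

```python
def closed_loop_step(x_star, A, B, weights, biases):
    """One exact step of x+ = A x + B u for a linear plant with a
    ReLU FFNN controller u = NN(x). Plant state and controller
    output are combined by adding star centers and bases, since
    both are expressed over the same predicate variables."""
    u_stars = [x_star]
    for k, (W, b) in enumerate(zip(weights, biases)):
        u_stars = [s.affine(W, b) for s in u_stars]
        if k < len(weights) - 1:            # ReLU on hidden layers only
            u_stars = relu_layer_exact(u_stars, W.shape[0])
    return [Star(A @ x_star.c + B @ u.c,
                 A @ x_star.V + B @ u.V,
                 u.C, u.d) for u in u_stars]
```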
Related papers
- A Hybrid Neural Coding Approach for Pattern Recognition with Spiking Neural Networks [53.31941519245432]
Brain-inspired spiking neural networks (SNNs) have demonstrated promising capabilities in solving pattern recognition tasks.
These SNNs are grounded on homogeneous neurons that utilize a uniform neural coding for information representation.
In this study, we argue that SNN architectures should be holistically designed to incorporate heterogeneous coding schemes.
arXiv Detail & Related papers (2023-05-26T02:52:12Z)
- Model-Agnostic Reachability Analysis on Deep Neural Networks [25.54542656637704]
We develop a model-agnostic verification framework, called DeepAgn.
It can be applied to FNNs, Recurrent Neural Networks (RNNs), or a mixture of both.
It does not require access to the network's internal structures, such as layers and parameters.
arXiv Detail & Related papers (2023-04-03T09:01:59Z)
- A Neurosymbolic Approach to the Verification of Temporal Logic Properties of Learning enabled Control Systems [0.0]
We present a model for the verification of Neural Network (NN) controllers against general Signal Temporal Logic (STL) specifications.
We also propose a new approach for neural network controllers with general activation functions.
arXiv Detail & Related papers (2023-03-07T04:08:33Z)
- Reachability Analysis of Neural Network Control Systems [10.023618778236697]
Existing verification approaches for neural network control systems (NNCSs) can only handle limited types of activation functions.
This paper proposes a verification framework for NNCS based on Lipschitzian optimisation, called DeepNNC.
DeepNNC shows superior performance in terms of efficiency and accuracy over a wide range of NNCSs.
arXiv Detail & Related papers (2023-01-28T05:57:37Z)
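DeepNNC's Lipschitzian optimisation is that paper's own contribution; the underlying principle, that a known Lipschitz constant turns finitely many samples into sound output bounds, can be illustrated in a few lines. This is a hypothetical Python sketch, with `lipschitz_range` and its parameters being our own names:

```python
import numpy as np

def lipschitz_range(f, lo, hi, lip, n=1024):
    """Sound over-approximation of the range of a scalar map f on
    [lo, hi], given a Lipschitz constant lip: sample cell midpoints
    and widen each sample by lip * (cell half-width)."""
    rad = (hi - lo) / (2 * n)                        # cell half-width
    xs = np.linspace(lo, hi, n, endpoint=False) + rad
    vals = np.array([f(x) for x in xs])
    # Any x lies within rad of some midpoint, so f(x) differs from
    # the sampled value at that midpoint by at most lip * rad.
    return vals.min() - lip * rad, vals.max() + lip * rad
```

Lipschitzian optimisation methods refine such partitions adaptively rather than sampling uniformly, which is what makes them practical on closed-loop maps.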
- Backward Reachability Analysis of Neural Feedback Loops: Techniques for Linear and Nonlinear Systems [59.57462129637796]
This paper presents a backward reachability approach for safety verification of closed-loop systems with neural networks (NNs).
The presence of NNs in the feedback loop presents a unique set of problems due to the nonlinearities in their activation functions and because NN models are generally not invertible.
We present frameworks for calculating backprojection (BP) over-approximations for both linear and nonlinear systems with control policies represented by feedforward NNs.
arXiv Detail & Related papers (2022-09-28T13:17:28Z)
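As a toy illustration of the backward direction for the linear case (not that paper's algorithm): if the control input can be bounded in a box, e.g. by separately bounding the NN controller's output over the state domain, one backprojection step reduces to interval arithmetic. The function names and the invertibility assumption on A are ours:

```python
import numpy as np

def interval_matvec(M, lo, hi):
    """Sound interval image of M @ x for x in the box [lo, hi]."""
    center, rad = (lo + hi) / 2, (hi - lo) / 2
    c = M @ center
    return c - np.abs(M) @ rad, c + np.abs(M) @ rad

def backproject_step(A, B, t_lo, t_hi, u_lo, u_hi):
    """Over-approximate the backprojection (BP) set of a target box
    [t_lo, t_hi] under x+ = A x + B u with u in [u_lo, u_hi]:
    A x must land in the target shifted by -B u, and that box is
    pulled back through A^{-1}. Assumes A is invertible."""
    bu_lo, bu_hi = interval_matvec(B, u_lo, u_hi)
    y_lo, y_hi = t_lo - bu_hi, t_hi - bu_lo   # box that must contain A x
    return interval_matvec(np.linalg.inv(A), y_lo, y_hi)
```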
- Extrapolation and Spectral Bias of Neural Nets with Hadamard Product: a Polynomial Net Study [55.12108376616355]
The study of the NTK has so far been devoted to typical neural network architectures and is incomplete for neural networks with Hadamard products (NNs-Hp).
In this work, we derive the finite-width NTK formulation for a special class of NNs-Hp, i.e., polynomial neural networks.
We prove their equivalence to the kernel regression predictor with the associated NTK, which expands the application scope of NTK.
arXiv Detail & Related papers (2022-09-16T06:36:06Z)
- Linear Leaky-Integrate-and-Fire Neuron Model Based Spiking Neural Networks and Its Mapping Relationship to Deep Neural Networks [7.840247953745616]
Spiking neural networks (SNNs) are brain-inspired machine learning algorithms with merits such as biological plausibility and unsupervised learning capability.
This paper establishes a precise mathematical mapping between the biological parameters of the Linear Leaky-Integrate-and-Fire (LIF) model/SNNs and the parameters of ReLU-AN/Deep Neural Networks (DNNs).
arXiv Detail & Related papers (2022-05-31T17:02:26Z)
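The exact parameter mapping is that paper's contribution; why such a mapping is plausible can be seen from a toy simulation in which an integrate-and-fire neuron's input-to-rate curve is zero below a threshold current and roughly linear above it, i.e. ReLU-shaped. All constants here are arbitrary illustration values:

```python
def lif_rate(i_in, v_th=1.0, leak=0.2, dt=1.0, steps=1000):
    """Simulate a discrete-time leaky integrate-and-fire neuron
    driven by a constant input current; return its firing rate."""
    v, spikes = 0.0, 0
    for _ in range(steps):
        v = (1.0 - leak) * v + i_in * dt   # leaky integration
        if v >= v_th:                      # threshold crossing
            spikes += 1
            v = 0.0                        # reset after spike
    return spikes / (steps * dt)

# The rate stays at zero below a threshold current and grows
# roughly linearly above it, i.e. the curve is ReLU-shaped:
for i_in in [0.0, 0.1, 0.2, 0.4, 0.8]:
    print(f"I={i_in:.1f}  rate={lif_rate(i_in):.3f}")
```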
- Deep Kronecker neural networks: A general framework for neural networks with adaptive activation functions [4.932130498861987]
We propose a new type of neural network, Kronecker neural networks (KNNs), which form a general framework for neural networks with adaptive activation functions.
Under suitable conditions, KNNs induce a faster decay of the loss than feed-forward networks.
arXiv Detail & Related papers (2021-05-20T04:54:57Z)
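The Kronecker construction itself is specific to that paper; as a generic illustration of what an adaptive activation function is, here is a toy activation whose shape parameters would be trained alongside the network weights. The class name and the particular mixture are our own:

```python
import numpy as np

class AdaptiveActivation:
    """Toy adaptive activation: a trainable mixture of two basis
    nonlinearities with trainable frequencies. In training, the
    mixture weights a and frequencies w would be updated by
    gradient descent together with the network weights."""
    def __init__(self):
        self.a = np.array([0.5, 0.5])   # trainable mixture weights
        self.w = np.array([1.0, 1.0])   # trainable frequencies

    def __call__(self, x):
        return (self.a[0] * np.maximum(0.0, self.w[0] * x) +
                self.a[1] * np.tanh(self.w[1] * x))
```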
- Modeling from Features: a Mean-field Framework for Over-parameterized Deep Neural Networks [54.27962244835622]
This paper proposes a new mean-field framework for over-parameterized deep neural networks (DNNs).
In this framework, a DNN is represented by probability measures and functions over its features in the continuous limit.
We illustrate the framework via the standard DNN and the Residual Network (Res-Net) architectures.
arXiv Detail & Related papers (2020-07-03T01:37:16Z)
- Progressive Tandem Learning for Pattern Recognition with Deep Spiking Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) in terms of low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z)
- Rectified Linear Postsynaptic Potential Function for Backpropagation in Deep Spiking Neural Networks [55.0627904986664]
Spiking Neural Networks (SNNs) use temporal spike patterns to represent and transmit information, which is not only biologically realistic but also suitable for ultra-low-power event-driven neuromorphic implementation.
This paper investigates the contribution of spike timing dynamics to information encoding, synaptic plasticity and decision making, providing a new perspective to design of future DeepSNNs and neuromorphic hardware systems.
arXiv Detail & Related papers (2020-03-26T11:13:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.