Reachability Analysis of Neural Network Control Systems
- URL: http://arxiv.org/abs/2301.12100v1
- Date: Sat, 28 Jan 2023 05:57:37 GMT
- Title: Reachability Analysis of Neural Network Control Systems
- Authors: Chi Zhang, Wenjie Ruan, Peipei Xu
- Abstract summary: Existing verification approaches for neural network control systems (NNCSs) either work only on a limited class of activation functions or accumulate non-trivial over-approximation errors over time.
This paper proposes a verification framework for NNCS based on Lipschitzian optimisation, called DeepNNC.
DeepNNC shows superior performance in terms of efficiency and accuracy over a wide range of NNCs.
- Score: 10.023618778236697
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural network controllers (NNCs) have shown great promise in autonomous and
cyber-physical systems. Despite the various verification approaches for neural
networks, the safety analysis of NNCs remains an open problem. Existing
verification approaches for neural network control systems (NNCSs) either
work only on a limited class of activation functions or accumulate
non-trivial over-approximation errors as time evolves. This paper proposes a
verification framework for NNCS based on Lipschitzian optimisation, called
DeepNNC. We first prove the Lipschitz continuity of closed-loop NNCSs by
unrolling and eliminating the loops. We then reveal the working principles of
applying Lipschitzian optimisation on NNCS verification and illustrate it by
verifying an adaptive cruise control model. Compared to state-of-the-art
verification approaches, DeepNNC shows superior performance in terms of
efficiency and accuracy over a wide range of NNCs. We also provide a case study
to demonstrate the capability of DeepNNC to handle a real-world, practical, and
complex system. Our tool DeepNNC is available at
https://github.com/TrustAI/DeepNNC.
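To give a concrete flavour of the Lipschitzian-optimisation idea, here is a minimal one-dimensional sketch, assuming a known Lipschitz constant L; it is illustrative only and is not DeepNNC's actual multi-dimensional algorithm. The snippet brackets the maximum of an L-Lipschitz scalar function f, such as one output coordinate of the unrolled closed-loop map restricted to a single input interval:

```python
import heapq

def lipschitz_max(f, a, b, L, tol=1e-3, max_iter=10000):
    """Bound max f(x) on [a, b] when f is L-Lipschitz.

    On an interval [lo, hi] with midpoint m, Lipschitz continuity gives
    f(x) <= f(m) + L * (hi - lo) / 2, so every interval carries a sound
    upper bound that shrinks as the interval is split.  Returns
    (lower, upper) bounds on the true maximum.
    """
    m = 0.5 * (a + b)
    best = f(m)                                   # sampled lower bound
    heap = [(-(best + 0.5 * L * (b - a)), a, b)]  # max-heap on upper bounds
    for _ in range(max_iter):
        neg_ub, lo, hi = heapq.heappop(heap)
        if -neg_ub - best <= tol:                 # bounds have converged
            return best, -neg_ub
        mid = 0.5 * (lo + hi)
        for l, h in ((lo, mid), (mid, hi)):
            val = f(0.5 * (l + h))
            best = max(best, val)
            heapq.heappush(heap, (-(val + 0.5 * L * (h - l)), l, h))
    return best, -heap[0][0]
```

Running the same routine on -f bounds the minimum; together the two give an estimated reachable range whose gap is controlled by tol. The function name and parameters here are hypothetical.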
Related papers
- Lyapunov-stable Neural Control for State and Output Feedback: A Novel Formulation [67.63756749551924]
Learning-based neural network (NN) control policies have shown impressive empirical performance in a wide range of tasks in robotics and control.
Lyapunov stability guarantees over the region-of-attraction (ROA) for NN controllers with nonlinear dynamical systems are challenging to obtain.
We demonstrate a new framework for learning NN controllers together with Lyapunov certificates using fast empirical falsification and strategic regularizations.
arXiv Detail & Related papers (2024-04-11T17:49:15Z)
- Provably Safe Neural Network Controllers via Differential Dynamic Logic [2.416907802598482]
We present the first general approach that allows reusing control theory results for NNCS verification.
Based on provably safe control envelopes in dL, we derive specifications for the NN, which are then proven via NN verification.
We show that a proof of the NN adhering to the specification is mirrored by a dL proof on the infinite-time safety of the NNCS.
arXiv Detail & Related papers (2024-02-16T16:15:25Z)
- An Automata-Theoretic Approach to Synthesizing Binarized Neural Networks [13.271286153792058]
Quantized neural networks (QNNs) have been developed; binarized neural networks (BNNs), whose values are restricted to be binary, are a special case.
This paper presents an automata-theoretic approach to synthesizing BNNs that meet designated properties.
arXiv Detail & Related papers (2023-07-29T06:27:28Z)
- Benign Overfitting in Deep Neural Networks under Lazy Training [72.28294823115502]
We show that when the data distribution is well-separated, DNNs can achieve Bayes-optimal test error for classification.
Our results indicate that interpolating with smoother functions leads to better generalization.
arXiv Detail & Related papers (2023-05-30T19:37:44Z)
- Model-Agnostic Reachability Analysis on Deep Neural Networks [25.54542656637704]
We develop a model-agnostic verification framework, called DeepAgn.
It can be applied to feedforward neural networks (FNNs), recurrent neural networks (RNNs), or a mixture of both.
It does not require access to the network's internal structures, such as layers and parameters.
arXiv Detail & Related papers (2023-04-03T09:01:59Z)
- Quantization-aware Interval Bound Propagation for Training Certifiably Robust Quantized Neural Networks [58.195261590442406]
We study the problem of training and certifying adversarially robust quantized neural networks (QNNs).
Recent work has shown that floating-point neural networks that have been verified to be robust can become vulnerable to adversarial attacks after quantization.
We present quantization-aware interval bound propagation (QA-IBP), a novel method for training robust QNNs.
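For intuition, the interval bound propagation at the core of IBP-style certification can be sketched for one affine layer followed by a ReLU. This is a generic illustration in Python; the quantization-aware part of QA-IBP is not shown:

```python
import numpy as np

def ibp_affine(W, b, lower, upper):
    """Propagate box bounds through x -> W @ x + b.

    Centre-radius form: mu' = W @ mu + b and r' = |W| @ r, so the box
    [mu' - r', mu' + r'] contains W @ x + b for every x in [lower, upper].
    """
    mu = 0.5 * (upper + lower)
    r = 0.5 * (upper - lower)
    mu_out = W @ mu + b
    r_out = np.abs(W) @ r
    return mu_out - r_out, mu_out + r_out

def ibp_relu(lower, upper):
    # ReLU is monotone, so bounds pass through elementwise.
    return np.maximum(lower, 0.0), np.maximum(upper, 0.0)
```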
arXiv Detail & Related papers (2022-11-29T13:32:38Z)
- Comparative Analysis of Interval Reachability for Robust Implicit and Feedforward Neural Networks [64.23331120621118]
We use interval reachability analysis to obtain robustness guarantees for implicit neural networks (INNs).
INNs are a class of learning models that use implicit equations as layers.
We show that our approach performs at least as well as, and generally better than, applying state-of-the-art interval bound propagation methods to INNs.
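As a hypothetical illustration of what interval reachability means for an implicit layer z = relu(W z + U x + b), the sketch below runs a crude interval fixed-point iteration under a well-posedness assumption; it is not the paper's mixed-monotonicity method:

```python
import numpy as np

def inn_interval_reach(W, U, b, x_lo, x_hi, n_iter=200):
    """Interval enclosure for an implicit ReLU layer z = relu(W z + U x + b).

    Assumes well-posedness, e.g. the induced inf-norm of W is < 1, so the
    interval map below is a contraction: iterating from any box converges
    to a unique fixed box that, in the limit, encloses all solutions z(x)
    for x in [x_lo, x_hi].  Finitely many iterations approximate that box.
    """
    Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
    Up, Un = np.maximum(U, 0.0), np.minimum(U, 0.0)
    z_lo = np.zeros(W.shape[0])
    z_hi = np.zeros(W.shape[0])
    for _ in range(n_iter):
        # interval affine map via positive/negative parts of W and U
        lo = Wp @ z_lo + Wn @ z_hi + Up @ x_lo + Un @ x_hi + b
        hi = Wp @ z_hi + Wn @ z_lo + Up @ x_hi + Un @ x_lo + b
        z_lo, z_hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)
    return z_lo, z_hi
```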
arXiv Detail & Related papers (2022-04-01T03:31:27Z)
- Certification of Iterative Predictions in Bayesian Neural Networks [79.15007746660211]
We compute lower bounds for the probability that trajectories of the BNN model reach a given set of states while avoiding a set of unsafe states.
We use the lower bounds in the context of control and reinforcement learning to provide safety certification for given control policies.
arXiv Detail & Related papers (2021-05-21T05:23:57Z)
- Continuous Safety Verification of Neural Networks [1.7056768055368385]
This paper considers approaches for transferring results established in a previous DNN safety verification problem to a modified problem setting.
The overall concept is evaluated on a 1/10-scale vehicle equipped with a DNN controller that determines the visual waypoint from the perceived image.
arXiv Detail & Related papers (2020-10-12T13:28:04Z)
- Progressive Tandem Learning for Pattern Recognition with Deep Spiking Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z)
- NNV: The Neural Network Verification Tool for Deep Neural Networks and Learning-Enabled Cyber-Physical Systems [14.04358575326059]
The paper presents a set-based verification framework for deep neural networks (DNNs) and learning-enabled cyber-physical systems (CPS).
The crux of NNV is a collection of reachability algorithms that make use of a variety of set representations, such as polyhedra, star sets, zonotopes, and abstract-domain representations.
We evaluate NNV using two real-world case studies: the first is safety verification of ACAS Xu networks and the second is safety verification of a deep learning-based adaptive cruise control system.
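As a small illustration of one such set representation (a generic zonotope sketch in Python, not NNV's implementation), affine layers map zonotopes exactly, and an interval hull can be read off for checking box-shaped safety constraints:

```python
import numpy as np

class Zonotope:
    """Z = { c + G @ a : a in [-1, 1]^k }, with centre c and generator matrix G."""

    def __init__(self, c, G):
        self.c, self.G = c, G

    def affine(self, W, b):
        # Affine images of zonotopes are exact: W @ Z + b has centre
        # W @ c + b and generators W @ G.
        return Zonotope(W @ self.c + b, W @ self.G)

    def interval_hull(self):
        # Each coefficient a_i in [-1, 1] contributes at most |G[:, i]|,
        # so the tight bounding box is c +/- sum_i |G[:, i]|.
        r = np.abs(self.G).sum(axis=1)
        return self.c - r, self.c + r
```

Nonlinear activations then require over-approximation, such as adding generators per unstable ReLU; the choice among polyhedra, star sets, and zonotopes trades precision against cost.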
arXiv Detail & Related papers (2020-04-12T01:29:58Z)
This list is automatically generated from the titles and abstracts of the papers on this site.