Neural Network Verification in Control
- URL: http://arxiv.org/abs/2110.01388v1
- Date: Thu, 30 Sep 2021 22:28:44 GMT
- Title: Neural Network Verification in Control
- Authors: Michael Everett
- Abstract summary: This tutorial first introduces and unifies recent techniques for verifying robustness properties of NNs.
The techniques are then extended to provide formal guarantees of neural feedback loops.
The provided tools are shown to enable closed-loop reachability analysis and robust deep reinforcement learning.
- Score: 3.3123634393237706
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Learning-based methods could provide solutions to many of the long-standing
challenges in control. However, the neural networks (NNs) commonly used in
modern learning approaches present substantial challenges for analyzing the
resulting control systems' safety properties. Fortunately, a new body of
literature could provide tractable methods for analysis and verification of
these high dimensional, highly nonlinear representations. This tutorial first
introduces and unifies recent techniques (many of which originated in the
computer vision and machine learning communities) for verifying robustness
properties of NNs. The techniques are then extended to provide formal
guarantees of neural feedback loops (e.g., closed-loop system with NN control
policy). The provided tools are shown to enable closed-loop reachability
analysis and robust deep reinforcement learning.
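For a concrete feel of the NN robustness-verification techniques the tutorial unifies, below is a minimal sketch of interval bound propagation (IBP), one of the simpler relaxation-based methods in this literature. The two-layer ReLU network, its random weights, and the epsilon value are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def ibp_layer(W, b, lo, hi):
    """Propagate the box [lo, hi] through an affine layer y = W x + b.

    Splitting W into positive and negative parts gives the tightest
    elementwise bounds on W x + b over the input box."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

def certify_linf(weights, biases, x, eps, true_label):
    """Return True if every input within ||x' - x||_inf <= eps keeps the
    logit of `true_label` above all other logits (sound but conservative)."""
    lo, hi = x - eps, x + eps
    for W, b in zip(weights[:-1], biases[:-1]):
        lo, hi = ibp_layer(W, b, lo, hi)
        lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)  # ReLU is monotone
    lo, hi = ibp_layer(weights[-1], biases[-1], lo, hi)
    # Worst case: true logit at its lower bound, all others at upper bounds.
    return lo[true_label] > np.delete(hi, true_label).max()

# Toy example with random (assumed) weights.
rng = np.random.default_rng(0)
weights = [rng.normal(size=(8, 4)), rng.normal(size=(3, 8))]
biases = [np.zeros(8), np.zeros(3)]
x = rng.normal(size=4)
print(certify_linf(weights, biases, x, eps=0.01, true_label=0))
```

Tighter relaxations, such as the linear-bound and SDP-based methods the tutorial also covers, shrink the gap between these interval bounds and the network's true output range.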
Related papers
- Lyapunov-stable Neural Control for State and Output Feedback: A Novel Formulation [67.63756749551924]
Learning-based neural network (NN) control policies have shown impressive empirical performance in a wide range of tasks in robotics and control.
However, obtaining Lyapunov stability guarantees over the region-of-attraction (ROA) for NN controllers with nonlinear dynamical systems is challenging.
We demonstrate a new framework for learning NN controllers together with Lyapunov certificates using fast empirical falsification and strategic regularizations.
arXiv Detail & Related papers (2024-04-11T17:49:15Z)
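A hedged sketch of the empirical-falsification idea from the entry above: sample states and search for violations of the discrete-time Lyapunov decrease condition. The dynamics, the linear stand-in for the NN policy, and the candidate function V are toy assumptions, not the authors' setup.

```python
import numpy as np

def falsify_lyapunov(f, V, sample_box, n_samples=100_000, seed=0):
    """Search for counterexamples to the discrete-time Lyapunov condition
    V(f(x)) - V(x) < 0 on a box (excluding a ball around the origin).
    Returns violating states; an empty result is evidence, not a proof."""
    rng = np.random.default_rng(seed)
    lo, hi = sample_box
    x = rng.uniform(lo, hi, size=(n_samples, len(lo)))
    x = x[np.linalg.norm(x, axis=1) > 1e-3]      # skip the equilibrium
    violations = V(f(x)) - V(x) >= 0.0
    return x[violations]

# Toy closed-loop dynamics x+ = A x + B k(x) with a linear stand-in policy.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
K = np.array([[-5.0, -3.0]])                     # assumed policy gain
f = lambda x: x @ (A + B @ K).T
V = lambda x: np.einsum("ij,ij->i", x, x)        # candidate V(x) = ||x||^2

# Violations mean this candidate V is not a valid certificate (even if the
# system is stable); the framework would then adjust V and re-verify.
cex = falsify_lyapunov(f, V, (np.array([-1.0, -1.0]), np.array([1.0, 1.0])))
print(f"{len(cex)} violating samples found")
```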
- Analyzing Adversarial Inputs in Deep Reinforcement Learning [53.3760591018817]
We present a comprehensive analysis of adversarial inputs through the lens of formal verification.
We introduce a novel metric, the Adversarial Rate, to classify models based on their susceptibility to such perturbations.
Our analysis empirically demonstrates how adversarial inputs can compromise the safety of a given DRL system.
arXiv Detail & Related papers (2024-02-07T21:58:40Z)
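The Adversarial Rate in the entry above is computed with formal verification; the sketch below is only a sampling-based proxy for the same quantity, with an assumed toy policy, so it can under-count adversarial inputs that verification would find.

```python
import numpy as np

def empirical_adversarial_rate(policy, states, eps, n_perturb=256, seed=0):
    """Fraction of states for which some sampled L-inf perturbation of
    magnitude <= eps changes the policy's chosen action. A sampling proxy
    only: it lower-bounds what a verification-based analysis would report."""
    rng = np.random.default_rng(seed)
    flipped = 0
    for s in states:
        base = np.argmax(policy(s))
        delta = rng.uniform(-eps, eps, size=(n_perturb, s.shape[0]))
        acts = np.argmax([policy(s + d) for d in delta], axis=1)
        flipped += np.any(acts != base)
    return flipped / len(states)

# Toy linear "policy" over 3 actions (assumed, for illustration only).
rng = np.random.default_rng(1)
W = rng.normal(size=(3, 4))
policy = lambda s: W @ s
states = rng.normal(size=(20, 4))
print("adversarial rate ~", empirical_adversarial_rate(policy, states, eps=0.1))
```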
- Quantization-aware Interval Bound Propagation for Training Certifiably Robust Quantized Neural Networks [58.195261590442406]
We study the problem of training and certifying adversarially robust quantized neural networks (QNNs).
Recent work has shown that floating-point neural networks that have been verified to be robust can become vulnerable to adversarial attacks after quantization.
We present quantization-aware interval bound propagation (QA-IBP), a novel method for training robust QNNs.
arXiv Detail & Related papers (2022-11-29T13:32:38Z)
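A minimal sketch of the idea behind QA-IBP as summarized above: certificates must be computed on the quantized weights, since bounds that hold for the floating-point network can fail after quantization. The symmetric fixed-point quantizer and random weights are assumptions, not the paper's exact formulation.

```python
import numpy as np

def quantize(W, bits=8):
    """Symmetric fixed-point quantization of a weight matrix (assumed scheme)."""
    scale = np.abs(W).max() / (2 ** (bits - 1) - 1)
    return np.round(W / scale) * scale

def ibp_bounds(W, b, lo, hi):
    """Tight interval bounds of W x + b over the box [lo, hi]."""
    Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(16, 4)), np.zeros(16)
W2, b2 = rng.normal(size=(2, 16)), np.zeros(2)
x, eps = rng.normal(size=4), 0.05

# Propagate through the *quantized* weights so the certificate holds for
# the deployed quantized model, not its floating-point parent.
lo, hi = x - eps, x + eps
lo, hi = ibp_bounds(quantize(W1), b1, lo, hi)
lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)   # ReLU
lo, hi = ibp_bounds(quantize(W2), b2, lo, hi)
print("output bounds of quantized net:", lo, hi)
```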
- Backward Reachability Analysis of Neural Feedback Loops: Techniques for Linear and Nonlinear Systems [59.57462129637796]
This paper presents a backward reachability approach for safety verification of closed-loop systems with neural networks (NNs).
The presence of NNs in the feedback loop presents a unique set of problems due to the nonlinearities in their activation functions and because NN models are generally not invertible.
We present frameworks for calculating backprojection (BP) set over-approximations for both linear and nonlinear systems with control policies represented by feedforward NNs.
arXiv Detail & Related papers (2022-09-28T13:17:28Z)
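A hedged sketch of one backward-reachability step for the linear case described above: since the NN policy is not invertible, relax it to an interval of control values, then pull the target set back through the dynamics with interval arithmetic. The dynamics, control bounds, and target box are assumed.

```python
import numpy as np

def box_affine(M, lo, hi):
    """Outer box of the image {M x : lo <= x <= hi}."""
    Mp, Mn = np.maximum(M, 0.0), np.minimum(M, 0.0)
    return Mp @ lo + Mn @ hi, Mp @ hi + Mn @ lo

def bp_overapprox(A, B, target_lo, target_hi, u_lo, u_hi):
    """One-step backprojection over-approximation for x+ = A x + B u:
    an outer box on {x : exists u in [u_lo, u_hi] with A x + B u in T}.
    Relaxing the NN policy pi(x) to the interval [u_lo, u_hi] makes this a
    sound over-approximation of the states the policy can drive into T."""
    # A x in T - B u  =>  x in A^{-1} (T + (-B) U), evaluated with intervals.
    Bu_lo, Bu_hi = box_affine(-B, u_lo, u_hi)
    return box_affine(np.linalg.inv(A), target_lo + Bu_lo, target_hi + Bu_hi)

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # assumed double-integrator-like dynamics
B = np.array([[0.005], [0.1]])
lo, hi = bp_overapprox(A, B,
                       target_lo=np.array([-0.1, -0.1]),  # avoid set (obstacle)
                       target_hi=np.array([0.1, 0.1]),
                       u_lo=np.array([-1.0]), u_hi=np.array([1.0]))
print("BP over-approximation box:", lo, hi)
```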
- Reachability Analysis of a General Class of Neural Ordinary Differential Equations [7.774796410415387]
Continuous deep learning models, referred to as Neural Ordinary Differential Equations (Neural ODEs), have received considerable attention over the last several years.
Despite their burgeoning impact, there is a lack of formal analysis techniques for these systems.
We introduce a novel reachability framework that allows for the formal analysis of their behavior.
arXiv Detail & Related papers (2022-07-13T22:05:52Z)
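An illustrative sketch of reachability for a Neural ODE: propagate an interval box through Euler steps of dx/dt = NN(x), using bound propagation on the network at each step. Unlike a formal tool, this sketch ignores the integration truncation error, and the tanh network with random weights is an assumption.

```python
import numpy as np

def ibp_tanh_net(W1, b1, W2, b2, lo, hi):
    """Interval bounds of a one-hidden-layer tanh network (the assumed ODE
    right-hand side); tanh is monotone, so it maps bounds to bounds."""
    Wp, Wn = np.maximum(W1, 0.0), np.minimum(W1, 0.0)
    h_lo = np.tanh(Wp @ lo + Wn @ hi + b1)
    h_hi = np.tanh(Wp @ hi + Wn @ lo + b1)
    Wp, Wn = np.maximum(W2, 0.0), np.minimum(W2, 0.0)
    return Wp @ h_lo + Wn @ h_hi + b2, Wp @ h_hi + Wn @ h_lo + b2

def reach_boxes(W1, b1, W2, b2, x_lo, x_hi, dt=0.01, steps=100):
    """Interval Euler integration of dx/dt = NN(x). NOTE: this sketch
    ignores the Euler truncation error, which a formal Neural-ODE
    reachability tool must bound; the boxes also grow conservatively."""
    boxes = [(x_lo, x_hi)]
    for _ in range(steps):
        f_lo, f_hi = ibp_tanh_net(W1, b1, W2, b2, x_lo, x_hi)
        x_lo, x_hi = x_lo + dt * f_lo, x_hi + dt * f_hi
        boxes.append((x_lo, x_hi))
    return boxes

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 2)), np.zeros(8)
W2, b2 = 0.1 * rng.normal(size=(2, 8)), np.zeros(2)
boxes = reach_boxes(W1, b1, W2, b2, np.array([0.9, -0.1]), np.array([1.1, 0.1]))
print("final reachable box:", boxes[-1])
```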
- Imbedding Deep Neural Networks [0.0]
Continuous depth neural networks, such as Neural ODEs, have refashioned the understanding of residual neural networks in terms of non-linear vector-valued optimal control problems.
We propose a new approach which explicates the network's 'depth' as a fundamental variable, thus reducing the problem to a system of forward-facing initial value problems.
arXiv Detail & Related papers (2022-01-31T22:00:41Z)
- Provable Regret Bounds for Deep Online Learning and Control [77.77295247296041]
We show that, over any sequence of convex loss functions, a low-regret online algorithm can be adapted to optimize the parameters of a neural network such that it competes with the best net in hindsight.
As an application of these results in the online setting, we obtain provable bounds for online episodic control with deep neural network controllers.
arXiv Detail & Related papers (2021-10-15T02:13:48Z)
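A scalar-model sketch of the online-learning protocol behind the regret result above: play online gradient descent against a stream of losses and measure regret versus the best fixed predictor in hindsight. A linear predictor stands in for the deep network the paper actually analyzes.

```python
import numpy as np

def online_gradient_descent(features, targets, lr=0.1):
    """Online gradient descent on squared loss for a linear predictor:
    a minimal sketch of the online protocol, returning cumulative regret
    versus the best fixed weight vector in hindsight."""
    w = np.zeros(features.shape[1])
    losses = []
    for x, y in zip(features, targets):
        losses.append(0.5 * (w @ x - y) ** 2)   # suffer loss, then update
        w -= lr * (w @ x - y) * x
    # Best fixed comparator in hindsight (ordinary least squares).
    w_star = np.linalg.lstsq(features, targets, rcond=None)[0]
    best = 0.5 * ((features @ w_star - targets) ** 2).sum()
    return sum(losses) - best

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=500)
print("regret:", online_gradient_descent(X, y))
```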
- OVERT: An Algorithm for Safety Verification of Neural Network Control Policies for Nonlinear Systems [31.3812947670948]
We present OVERT: a sound algorithm for safety verification of neural network control policies.
The central concept of OVERT is to abstract nonlinear functions with a set of optimally tight piecewise linear bounds.
OVERT compares favorably to existing methods both in computation time and in tightness of the reachable set.
arXiv Detail & Related papers (2021-08-03T00:41:27Z)
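A hedged illustration of OVERT's central concept from the entry above: replace a nonlinearity with sound piecewise-linear envelopes. The sketch below builds merely sound (not optimally tight) bounds from a caller-supplied Lipschitz constant; OVERT's actual construction of optimally tight bounds is more sophisticated.

```python
import numpy as np

def pwl_bounds(f, lipschitz, a, b, n_segments=8):
    """Sound piecewise-linear lower/upper bounds of an L-Lipschitz f on
    [a, b]: linear interpolation through the segment endpoints, shifted by
    L*h/2, the worst-case gap between f and its chord on a segment of
    width h. Illustrates replacing a nonlinearity by PWL envelopes."""
    knots = np.linspace(a, b, n_segments + 1)
    vals = f(knots)
    slack = lipschitz * (b - a) / n_segments / 2.0
    lower = lambda x: np.interp(x, knots, vals) - slack
    upper = lambda x: np.interp(x, knots, vals) + slack
    return lower, upper

lower, upper = pwl_bounds(np.sin, lipschitz=1.0, a=0.0, b=np.pi, n_segments=8)
xs = np.linspace(0.0, np.pi, 1000)
assert np.all(lower(xs) <= np.sin(xs)) and np.all(np.sin(xs) <= upper(xs))
print("max envelope width:", (upper(xs) - lower(xs)).max())
```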
- Increasing the Confidence of Deep Neural Networks by Coverage Analysis [71.57324258813674]
This paper presents a lightweight monitoring architecture based on coverage paradigms to enhance the model's robustness against different unsafe inputs.
Experimental results show that the proposed approach is effective in detecting both powerful adversarial examples and out-of-distribution inputs.
arXiv Detail & Related papers (2021-01-28T16:38:26Z)
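One way to read "coverage paradigms" in the entry above: record the activation ranges that trusted data exercises, then flag inputs that leave them. The monitor below is a simplified sketch with an assumed single ReLU layer, not the paper's architecture.

```python
import numpy as np

class RangeCoverageMonitor:
    """Runtime monitor in the spirit of coverage-based approaches: record
    the min/max activation of each hidden neuron on trusted data, then
    flag inputs whose activations leave those ranges (a hint of an
    out-of-distribution or adversarial input)."""
    def __init__(self, hidden_fn):
        self.hidden_fn = hidden_fn   # maps an input batch to hidden activations
        self.lo = self.hi = None

    def fit(self, X):
        H = self.hidden_fn(X)
        self.lo, self.hi = H.min(axis=0), H.max(axis=0)

    def is_suspicious(self, x):
        h = self.hidden_fn(x[None])[0]
        return bool(np.any(h < self.lo) or np.any(h > self.hi))

rng = np.random.default_rng(0)
W = rng.normal(size=(16, 4))
hidden = lambda X: np.maximum(X @ W.T, 0.0)       # assumed ReLU layer
monitor = RangeCoverageMonitor(hidden)
monitor.fit(rng.normal(size=(1000, 4)))           # trusted in-distribution data
print(monitor.is_suspicious(rng.normal(size=4)))  # likely False
print(monitor.is_suspicious(10.0 * np.ones(4)))   # far out of range: True
```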
- Reach-SDP: Reachability Analysis of Closed-Loop Systems with Neural Network Controllers via Semidefinite Programming [19.51345816555571]
We propose a novel forward reachability analysis method for the safety verification of linear time-varying systems with neural networks in feedback.
We show that we can compute these approximate reachable sets using semidefinite programming.
We illustrate our method in a quadrotor example, in which we first approximate a nonlinear model predictive controller via a deep neural network and then apply our analysis tool to certify finite-time reachability and constraint satisfaction of the closed-loop system.
arXiv Detail & Related papers (2020-04-16T18:48:25Z)
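Reach-SDP computes its reachable-set over-approximations by semidefinite programming; to keep this sketch dependency-free, interval arithmetic (which is looser) stands in for the SDP relaxation in the same forward-reachability loop. The dynamics, policy weights, and initial box are assumed.

```python
import numpy as np

def box_affine(M, lo, hi):
    """Outer box of the image {M x : lo <= x <= hi}."""
    Mp, Mn = np.maximum(M, 0.0), np.minimum(M, 0.0)
    return Mp @ lo + Mn @ hi, Mp @ hi + Mn @ lo

def nn_controller_bounds(weights, biases, lo, hi):
    """Interval bounds on a ReLU NN policy over a state box."""
    for W, b in zip(weights[:-1], biases[:-1]):
        lo, hi = box_affine(W, lo, hi)
        lo, hi = np.maximum(lo + b, 0.0), np.maximum(hi + b, 0.0)
    lo, hi = box_affine(weights[-1], lo, hi)
    return lo + biases[-1], hi + biases[-1]

def forward_reach(A, B, weights, biases, x_lo, x_hi, steps=5):
    """Outer boxes on the reachable sets of x+ = A x + B pi(x). Reach-SDP
    obtains tighter sets via SDP; intervals keep the sketch self-contained."""
    boxes = [(x_lo, x_hi)]
    for _ in range(steps):
        u_lo, u_hi = nn_controller_bounds(weights, biases, x_lo, x_hi)
        Ax_lo, Ax_hi = box_affine(A, x_lo, x_hi)
        Bu_lo, Bu_hi = box_affine(B, u_lo, u_hi)
        x_lo, x_hi = Ax_lo + Bu_lo, Ax_hi + Bu_hi
        boxes.append((x_lo, x_hi))
    return boxes

A = np.array([[1.0, 0.1], [0.0, 1.0]])            # assumed dynamics
B = np.array([[0.005], [0.1]])
rng = np.random.default_rng(0)
weights = [0.5 * rng.normal(size=(8, 2)), 0.5 * rng.normal(size=(1, 8))]
biases = [np.zeros(8), np.zeros(1)]
for t, (lo, hi) in enumerate(forward_reach(A, B, weights, biases,
                                           np.array([0.9, -0.1]),
                                           np.array([1.1, 0.1]))):
    print(f"t={t}: lo={lo}, hi={hi}")
```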