Backward Reachability Analysis of Neural Feedback Loops: Techniques for
Linear and Nonlinear Systems
- URL: http://arxiv.org/abs/2209.14076v1
- Date: Wed, 28 Sep 2022 13:17:28 GMT
- Title: Backward Reachability Analysis of Neural Feedback Loops: Techniques for
Linear and Nonlinear Systems
- Authors: Nicholas Rober, Sydney M. Katz, Chelsea Sidrane, Esen Yel, Michael
Everett, Mykel J. Kochenderfer, Jonathan P. How
- Abstract summary: This paper presents a backward reachability approach for safety verification of closed-loop systems with neural networks (NNs).
The presence of NNs in the feedback loop presents a unique set of problems due to the nonlinearities in their activation functions and because NN models are generally not invertible.
We present frameworks for calculating backprojection (BP) set over-approximations for both linear and nonlinear systems with control policies represented by feedforward NNs.
- Score: 59.57462129637796
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The increasing prevalence of neural networks (NNs) in safety-critical
applications calls for methods to certify safe behavior. This paper presents a
backward reachability approach for safety verification of neural feedback loops
(NFLs), i.e., closed-loop systems with NN control policies. While recent works
have focused on forward reachability as a strategy for safety certification of
NFLs, backward reachability offers advantages over the forward strategy,
particularly in obstacle avoidance scenarios. Prior works have developed
techniques for backward reachability analysis for systems without NNs, but the
presence of NNs in the feedback loop presents a unique set of problems due to
the nonlinearities in their activation functions and because NN models are
generally not invertible. To overcome these challenges, we use existing forward
NN analysis tools to efficiently find an over-approximation of the
backprojection (BP) set, i.e., the set of states for which the NN control
policy will drive the system to a given target set. We present frameworks for
calculating BP over-approximations for both linear and nonlinear systems with
control policies represented by feedforward NNs and propose computationally
efficient strategies. We use numerical results from a variety of models to
showcase the proposed algorithms, including a demonstration of safety
certification for a 6D system.
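To make the backprojection idea concrete, below is a minimal numerical sketch of a one-step BP over-approximation for a linear system with box sets. The function names and the double-integrator example are illustrative, not the paper's implementation; the control bounds u_lb, u_ub stand in for output that, per the abstract, would come from a forward NN analysis tool. The interval arithmetic deliberately ignores how u = pi(x) couples the state and control, which is one reason the result over-approximates the true BP set rather than computing it exactly.

```python
import numpy as np

def interval_matvec(M, lb, ub):
    """Tight box enclosure of {M @ y : lb <= y <= ub} via interval arithmetic."""
    center = (lb + ub) / 2.0
    radius = (ub - lb) / 2.0
    return M @ center - np.abs(M) @ radius, M @ center + np.abs(M) @ radius

def bp_overapprox_one_step(A, B, target_lb, target_ub, u_lb, u_ub):
    """Hypothetical one-step backprojection over-approximation for x' = A x + B u.

    A state x can be driven into the target box iff
        A x = t - B u   for some t in the target box and u in [u_lb, u_ub],
    i.e., A x lies in the Minkowski sum of the target box and -B [u_lb, u_ub].
    Assumes A is invertible; the preimage box is then enclosed with interval
    arithmetic again.
    """
    Bu_lb, Bu_ub = interval_matvec(B, u_lb, u_ub)   # box enclosing {B u}
    y_lb = target_lb - Bu_ub                        # A x must land in [y_lb, y_ub]
    y_ub = target_ub - Bu_lb
    return interval_matvec(np.linalg.inv(A), y_lb, y_ub)

# Double-integrator example (dt = 1): which states can the policy push
# into the box [-0.5, 0.5]^2 in one step, given assumed bounds |u| <= 1?
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.5], [1.0]])
x_lb, x_ub = bp_overapprox_one_step(
    A, B,
    target_lb=np.array([-0.5, -0.5]), target_ub=np.array([0.5, 0.5]),
    u_lb=np.array([-1.0]), u_ub=np.array([1.0]),
)
```

Tightening this box, for instance by splitting the candidate state domain and re-bounding the policy on each piece, is the kind of refinement the paper's computationally efficient strategies address.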
Related papers
- Neural Network Verification with Branch-and-Bound for General Nonlinearities [63.39918329535165]
Branch-and-bound (BaB) is among the most effective techniques for neural network (NN) verification.
We develop a general framework, named GenBaB, to conduct BaB on general nonlinearities to verify NNs with general architectures.
We demonstrate the effectiveness of our GenBaB on verifying a wide range of NNs, including NNs with activation functions such as Sigmoid, Tanh, Sine and GeLU.
arXiv Detail & Related papers (2024-05-31T17:51:07Z)
- Lyapunov-stable Neural Control for State and Output Feedback: A Novel Formulation [67.63756749551924]
Learning-based neural network (NN) control policies have shown impressive empirical performance in a wide range of tasks in robotics and control.
Lyapunov stability guarantees over the region-of-attraction (ROA) for NN controllers with nonlinear dynamical systems are challenging to obtain.
We demonstrate a new framework for learning NN controllers together with Lyapunov certificates using fast empirical falsification and strategic regularizations.
arXiv Detail & Related papers (2024-04-11T17:49:15Z)
- Provably Safe Neural Network Controllers via Differential Dynamic Logic [2.416907802598482]
We present the first general approach that allows reusing control theory results for the verification of neural network control systems (NNCS).
Based on provably safe control envelopes in differential dynamic logic (dL), we derive specifications for the NN that are then proven via NN verification.
We show that a proof of the NN adhering to the specification is mirrored by a dL proof on the infinite-time safety of the NNCS.
arXiv Detail & Related papers (2024-02-16T16:15:25Z)
- Probabilistic Reach-Avoid for Bayesian Neural Networks [71.67052234622781]
We show that an optimal synthesis algorithm can provide more than a four-fold increase in the number of certifiable states.
The algorithm is able to provide more than a three-fold increase in the average guaranteed reach-avoid probability.
arXiv Detail & Related papers (2023-10-03T10:52:21Z)
- Safety Filter Design for Neural Network Systems via Convex Optimization [35.87465363928146]
We propose a novel safety filter that relies on convex optimization to ensure safety for a neural network (NN) system.
We demonstrate the efficacy of the proposed framework numerically on a nonlinear pendulum system.
arXiv Detail & Related papers (2023-08-16T01:30:13Z)
- Automated Repair of Neural Networks [0.26651200086513094]
We introduce a framework for repairing unsafe NNs with respect to a safety specification.
Our method searches for a new, safe NN representation by modifying only a few of its weight values.
We perform extensive experiments that demonstrate the capability of our proposed framework to yield NNs that are safe with respect to the specification.
arXiv Detail & Related papers (2022-07-17T12:42:24Z)
- Backward Reachability Analysis for Neural Feedback Loops [40.989393438716476]
This paper presents a backward reachability approach for safety verification of closed-loop systems with neural networks (NNs).
The presence of NNs in the feedback loop presents a unique set of problems due to the nonlinearities in their activation functions and because NN models are generally not invertible.
We present an algorithm to iteratively find BP set estimates over a given time horizon and demonstrate the ability to reduce conservativeness by up to 88% with low additional computational cost.
arXiv Detail & Related papers (2022-04-14T01:13:14Z)
- OVERT: An Algorithm for Safety Verification of Neural Network Control Policies for Nonlinear Systems [31.3812947670948]
We present OVERT: a sound algorithm for safety verification of neural network control policies.
The central concept of OVERT is to abstract nonlinear functions with a set of optimally tight piecewise linear bounds.
OVERT compares favorably to existing methods both in runtime and in tightness of the reachable set; a minimal sketch of the piecewise-linear bounding idea appears after this list.
arXiv Detail & Related papers (2021-08-03T00:41:27Z)
- Certification of Iterative Predictions in Bayesian Neural Networks [79.15007746660211]
We compute lower bounds for the probability that trajectories of the BNN model reach a given set of states while avoiding a set of unsafe states.
We use the lower bounds in the context of control and reinforcement learning to provide safety certification for given control policies.
arXiv Detail & Related papers (2021-05-21T05:23:57Z)
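As a companion to the OVERT entry above, the sketch below shows one simple way to build piecewise-linear (PWL) upper and lower bounds for a scalar nonlinearity. OVERT derives optimally tight bounds analytically; this illustrative stand-in instead shifts a coarse PWL interpolant by the worst deviation observed on a dense sample grid, so it is sound only up to the sampling resolution. All names here are assumptions, not OVERT's API.

```python
import numpy as np

def pwl_bounds(f, lo, hi, n_breaks=5, n_samples=2001):
    """Sample-based sketch of PWL over/under-bounding of a scalar function f.

    Builds a PWL interpolant on n_breaks breakpoints, measures its deviation
    from f on a dense grid, and shifts the interpolant up/down by the worst
    deviation so that lower(x) <= f(x) <= upper(x) on the sampled points.
    """
    xs = np.linspace(lo, hi, n_breaks)        # PWL breakpoints
    ys = f(xs)                                # interpolant values at breakpoints
    dense = np.linspace(lo, hi, n_samples)    # dense grid for the error estimate
    interp = np.interp(dense, xs, ys)         # PWL interpolant, evaluated densely
    err = f(dense) - interp
    upper = ys + err.max()                    # shifted up to dominate f
    lower = ys + err.min()                    # shifted down to be dominated by f
    return xs, lower, upper

# Example: bound sin on [0, pi].
xs, lo_y, hi_y = pwl_bounds(np.sin, 0.0, np.pi)
```

A verifier would then replace each occurrence of the nonlinearity sin(x) with the relaxation lower(x) <= z <= upper(x), turning a nonlinear constraint into piecewise-linear ones that off-the-shelf NN verification tools can handle.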