Backward Reachability Analysis for Neural Feedback Loops
- URL: http://arxiv.org/abs/2204.08319v1
- Date: Thu, 14 Apr 2022 01:13:14 GMT
- Title: Backward Reachability Analysis for Neural Feedback Loops
- Authors: Nicholas Rober, Michael Everett, Jonathan P. How
- Abstract summary: This paper presents a backward reachability approach for safety verification of closed-loop systems with neural networks (NNs).
The presence of NNs in the feedback loop presents a unique set of problems due to the nonlinearities in their activation functions and because NN models are generally not invertible.
We present an algorithm to iteratively find backprojection (BP) set estimates over a given time horizon and demonstrate the ability to reduce conservativeness by up to 88% with low additional computational cost.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The increasing prevalence of neural networks (NNs) in safety-critical
applications calls for methods to certify their behavior and guarantee safety.
This paper presents a backward reachability approach for safety verification of
neural feedback loops (NFLs), i.e., closed-loop systems with NN control
policies. While recent works have focused on forward reachability as a strategy
for safety certification of NFLs, backward reachability offers advantages over
the forward strategy, particularly in obstacle avoidance scenarios. Prior works
have developed techniques for backward reachability analysis for systems
without NNs, but the presence of NNs in the feedback loop presents a unique set
of problems due to the nonlinearities in their activation functions and because
NN models are generally not invertible. To overcome these challenges, we use
existing forward NN analysis tools to find affine bounds on the control inputs
and solve a series of linear programs (LPs) to efficiently find an
approximation of the backprojection (BP) set, i.e., the set of states for which
the NN control policy will drive the system to a given target set. We present
an algorithm to iteratively find BP set estimates over a given time horizon and
demonstrate the ability to reduce conservativeness in the BP set estimates by
up to 88% with low additional computational cost. We use numerical results from
a double integrator model to verify the efficacy of these algorithms and
demonstrate the ability to certify safety for a linearized ground robot model
in a collision avoidance scenario where forward reachability fails.
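The LP step described above can be illustrated in a scalar setting: for a single-state linear system with interval target sets and affine policy bounds, each LP has a closed-form solution, so the sketch below needs no LP solver. The system x' = a*x + b*u, the affine bounds on u, and all numeric values are illustrative assumptions, not the paper's benchmark models.

```python
import math

def _solve_leq(slope, intercept, bound):
    """States x with slope*x + intercept <= bound, as an interval (lo, hi)."""
    if slope > 0:
        return (-math.inf, (bound - intercept) / slope)
    if slope < 0:
        return ((bound - intercept) / slope, math.inf)
    # Zero slope: the constraint holds everywhere or nowhere.
    return (-math.inf, math.inf) if intercept <= bound else (math.inf, -math.inf)

def backproject(a, b, c_lo, d_lo, c_hi, d_hi, t_lo, t_hi):
    """One-step BP set over-approximation for the scalar system
    x' = a*x + b*u (b > 0 assumed), with the NN policy relaxed to affine
    bounds c_lo*x + d_lo <= u <= c_hi*x + d_hi and target set [t_lo, t_hi].

    A state x is kept if the interval of reachable next states
    [lo(x), hi(x)] intersects the target.
    """
    assert b > 0
    lo_slope, lo_icpt = a + b * c_lo, b * d_lo  # lowest reachable next state
    hi_slope, hi_icpt = a + b * c_hi, b * d_hi  # highest reachable next state
    l1, u1 = _solve_leq(lo_slope, lo_icpt, t_hi)       # lo(x) <= t_hi
    l2, u2 = _solve_leq(-hi_slope, -hi_icpt, -t_lo)    # hi(x) >= t_lo
    return (max(l1, l2), min(u1, u2))

# With u = -0.5*x exactly (tight affine bounds), x' = 0.5*x, so the states
# that reach the target [0.2, 1.0] in one step are [0.4, 2.0]:
print(backproject(1.0, 1.0, -0.5, 0.0, -0.5, 0.0, 0.2, 1.0))  # (0.4, 2.0)
```

In higher dimensions these two half-space conditions become genuine LPs over the facets of the target polytope, which is what makes the approach efficient relative to relaxing the NN globally.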
Related papers
- Lyapunov-stable Neural Control for State and Output Feedback: A Novel Formulation [67.63756749551924]
Learning-based neural network (NN) control policies have shown impressive empirical performance in a wide range of tasks in robotics and control.
Lyapunov stability guarantees over the region-of-attraction (ROA) for NN controllers with nonlinear dynamical systems are challenging to obtain.
We demonstrate a new framework for learning NN controllers together with Lyapunov certificates using fast empirical falsification and strategic regularizations.
arXiv Detail & Related papers (2024-04-11T17:49:15Z)
- Probabilistic Reach-Avoid for Bayesian Neural Networks [71.67052234622781]
We show that an optimal synthesis algorithm can provide more than a four-fold increase in the number of certifiable states.
The algorithm is able to provide more than a three-fold increase in the average guaranteed reach-avoid probability.
arXiv Detail & Related papers (2023-10-03T10:52:21Z)
- DRIP: Domain Refinement Iteration with Polytopes for Backward Reachability Analysis of Neural Feedback Loops [12.706980346861986]
This work investigates backward reachability as a framework for providing collision avoidance guarantees for systems controlled by neural network (NN) policies.
Because NNs are typically not invertible, existing methods assume a domain over which to relax the NN, which causes loose over-approximations of the set of states.
We introduce DRIP, an algorithm with a refinement loop on the relaxation domain, which substantially tightens the BP set bounds.
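The refinement idea can be sketched in one dimension with intervals standing in for polytopes: intersect the relaxation domain with the current BP estimate, recompute the policy bounds over the shrunken domain, and repeat. The system x' = x + u and the stand-in policy u = -tanh(x) below are illustrative assumptions, not DRIP's actual setup.

```python
import math

def refine_bp(domain, target, iters=50):
    """DRIP-style refinement in 1-D: shrink the relaxation domain toward
    the BP set estimate and recompute policy bounds on each pass.

    System: x' = x + u with a hypothetical policy u = -tanh(x). Since
    -tanh is decreasing, its tightest constant bounds over [xl, xh]
    are [-tanh(xh), -tanh(xl)].
    """
    xl, xh = domain
    tl, th = target
    for _ in range(iters):
        u_lo, u_hi = -math.tanh(xh), -math.tanh(xl)  # relaxation over domain
        # Next states from x span [x + u_lo, x + u_hi]; x can reach the
        # target only if x + u_lo <= th and x + u_hi >= tl.
        new_xl = max(xl, tl - u_hi)
        new_xh = min(xh, th - u_lo)
        if (new_xl, new_xh) == (xl, xh):  # domain stopped shrinking
            break
        xl, xh = new_xl, new_xh
    return xl, xh

coarse = refine_bp((-3.0, 3.0), (-0.1, 0.1), iters=1)  # single-pass bound
tight = refine_bp((-3.0, 3.0), (-0.1, 0.1))            # refined bound
print(coarse, tight)  # refinement shrinks the estimate by roughly a third
```

Each pass tightens the bounds because the policy relaxation is computed over a smaller domain, which is the same mechanism DRIP exploits with polytope domains and NN relaxation tools.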
arXiv Detail & Related papers (2022-12-09T03:06:58Z)
- Safety Verification for Neural Networks Based on Set-boundary Analysis [5.487915758677295]
Neural networks (NNs) are increasingly applied in safety-critical systems such as autonomous vehicles.
We propose a set-boundary reachability method to investigate the safety verification problem of NNs from a topological perspective.
arXiv Detail & Related papers (2022-10-09T05:55:37Z)
- Backward Reachability Analysis of Neural Feedback Loops: Techniques for Linear and Nonlinear Systems [59.57462129637796]
This paper presents a backward reachability approach for safety verification of closed-loop systems with neural networks (NNs).
The presence of NNs in the feedback loop presents a unique set of problems due to the nonlinearities in their activation functions and because NN models are generally not invertible.
We present frameworks for calculating backprojection (BP) over-approximations for both linear and nonlinear systems with control policies represented by feedforward NNs.
arXiv Detail & Related papers (2022-09-28T13:17:28Z)
- Certification of Iterative Predictions in Bayesian Neural Networks [79.15007746660211]
We compute lower bounds for the probability that trajectories of the BNN model reach a given set of states while avoiding a set of unsafe states.
We use the lower bounds in the context of control and reinforcement learning to provide safety certification for given control policies.
arXiv Detail & Related papers (2021-05-21T05:23:57Z)
- Online Limited Memory Neural-Linear Bandits with Likelihood Matching [53.18698496031658]
We study neural-linear bandits for solving problems where both exploration and representation learning play an important role.
We propose a likelihood matching algorithm that is resilient to catastrophic forgetting and is completely online.
arXiv Detail & Related papers (2021-02-07T14:19:07Z)
- Robustness Analysis of Neural Networks via Efficient Partitioning with Applications in Control Systems [45.35808135708894]
Neural networks (NNs) are now routinely implemented on systems that must operate in uncertain environments.
This paper unifies propagation and partition approaches to provide a family of robustness analysis algorithms.
New partitioning techniques are aware of their current bound estimates and desired boundary shape.
arXiv Detail & Related papers (2020-10-01T16:51:36Z)
- Reach-SDP: Reachability Analysis of Closed-Loop Systems with Neural Network Controllers via Semidefinite Programming [19.51345816555571]
We propose a novel forward reachability analysis method for the safety verification of linear time-varying systems with neural networks in feedback.
We show that we can compute these approximate reachable sets using semidefinite programming.
We illustrate our method in a quadrotor example, in which we first approximate a nonlinear model predictive controller via a deep neural network and then apply our analysis tool to certify finite-time reachability and constraint satisfaction of the closed-loop system.
arXiv Detail & Related papers (2020-04-16T18:48:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.