Reach-SDP: Reachability Analysis of Closed-Loop Systems with Neural
Network Controllers via Semidefinite Programming
- URL: http://arxiv.org/abs/2004.07876v1
- Date: Thu, 16 Apr 2020 18:48:25 GMT
- Title: Reach-SDP: Reachability Analysis of Closed-Loop Systems with Neural
Network Controllers via Semidefinite Programming
- Authors: Haimin Hu, Mahyar Fazlyab, Manfred Morari, George J. Pappas
- Abstract summary: We propose a novel forward reachability analysis method for the safety verification of linear time-varying systems with neural networks in feedback.
We show that we can compute these approximate reachable sets using semidefinite programming.
We illustrate our method in a quadrotor example, in which we first approximate a nonlinear model predictive controller via a deep neural network and then apply our analysis tool to certify finite-time reachability and constraint satisfaction of the closed-loop system.
- Score: 19.51345816555571
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: There has been an increasing interest in using neural networks in closed-loop
control systems to improve performance and reduce computational costs for
on-line implementation. However, providing safety and stability guarantees for
these systems is challenging due to the nonlinear and compositional structure
of neural networks. In this paper, we propose a novel forward reachability
analysis method for the safety verification of linear time-varying systems with
neural networks in feedback interconnection. Our technical approach relies on
abstracting the nonlinear activation functions by quadratic constraints, which
leads to an outer-approximation of forward reachable sets of the closed-loop
system. We show that we can compute these approximate reachable sets using
semidefinite programming. We illustrate our method in a quadrotor example, in
which we first approximate a nonlinear model predictive controller via a deep
neural network and then apply our analysis tool to certify finite-time
reachability and constraint satisfaction of the closed-loop system.
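The recipe in the abstract (abstract the activation functions with quadratic constraints, then pose an S-procedure semidefinite program) can be sketched in its simplest instance. The code below is an illustrative reimplementation, not the authors' tool: it assumes a discrete-time LTI system x_{k+1} = A x_k + B u_k, a one-hidden-layer ReLU controller u_k = W2 relu(W1 x_k + b1) + b2, an ellipsoidal initial set with symmetric positive-definite shape matrix P, and it bounds the one-step reachable set along a single direction c. It uses cvxpy with an SDP-capable solver (SCS here); all names are illustrative.
```python
import numpy as np
import cvxpy as cp

def reach_sdp_halfspace_bound(A, B, W1, b1, W2, b2, x0, P, c):
    """Upper-bound c^T x_{k+1} over one step of x_{k+1} = A x_k + B u_k with
    u_k = W2 @ relu(W1 @ x_k + b1) + b2, for all x_k in the ellipsoid
    {x : (x - x0)^T P (x - x0) <= 1} (P symmetric positive definite)."""
    n, h = A.shape[0], W1.shape[0]
    d = n + h + 1                                    # basis vector z = [x; v; 1]

    # Selector matrices: x = Ex @ z, hidden activation v = Ev @ z, 1 = e1 @ z.
    Ex = np.hstack([np.eye(n), np.zeros((n, h + 1))])
    Ev = np.hstack([np.zeros((h, n)), np.eye(h), np.zeros((h, 1))])
    e1 = np.zeros((1, d)); e1[0, -1] = 1.0
    Y = W1 @ Ex + np.outer(b1, e1)                   # pre-activation y = Y @ z
    F = A @ Ex + B @ (W2 @ Ev + np.outer(b2, e1))    # next state x_{k+1} = F @ z

    sym = lambda M: 0.5 * (M + M.T)

    # Input-set quadratic constraint: z^T M_in z >= 0 whenever x lies in the ellipsoid.
    Exc = Ex - np.outer(x0, e1)
    M_in = e1.T @ e1 - Exc.T @ P @ Exc

    # ReLU quadratic constraints per neuron (v_i >= 0, v_i >= y_i, v_i*(v_i - y_i) = 0),
    # combined with multipliers nu_i, eta_i >= 0 and lam_i free.
    nu = cp.Variable(h, nonneg=True)
    eta = cp.Variable(h, nonneg=True)
    lam = cp.Variable(h)
    tau = cp.Variable(nonneg=True)
    gamma = cp.Variable()

    M_mid = 0
    for i in range(h):
        vi = Ev[i:i + 1, :]                          # row picking out v_i
        si = vi - Y[i:i + 1, :]                      # row picking out v_i - y_i
        M_mid = M_mid + nu[i] * sym(vi.T @ e1) \
                      + eta[i] * sym(si.T @ e1) \
                      - lam[i] * sym(vi.T @ si)

    # Output half-space: z^T M_out z = c^T x_{k+1} - gamma.
    M_out = sym(np.outer(F.T @ c, e1)) - gamma * (e1.T @ e1)

    # S-procedure LMI: feasibility certifies c^T x_{k+1} <= gamma over the reachable set.
    prob = cp.Problem(cp.Minimize(gamma), [M_out + tau * M_in + M_mid << 0])
    prob.solve(solver=cp.SCS)
    return gamma.value
```
Minimizing gamma over several directions c yields a polytopic outer-approximation of the one-step reachable set, which can then be propagated forward over the horizon; the paper's full formulation covers linear time-varying dynamics, deeper networks, and other set templates.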
Related papers
- Interval Reachability of Nonlinear Dynamical Systems with Neural Network
Controllers [5.543220407902113]
This paper proposes a computationally efficient framework, based on interval analysis, for rigorous verification of nonlinear continuous-time dynamical systems with neural network controllers.
Inspired by mixed monotone theory, we embed the closed-loop dynamics into a larger system using an inclusion function of the neural network and a decomposition function of the open-loop system.
We show that one can efficiently compute hyper-rectangular over-approximations of the reachable sets using a single trajectory of the embedding system (a simplified interval-propagation sketch illustrating this idea appears after this list).
arXiv Detail & Related papers (2023-01-19T06:46:36Z)
- Backward Reachability Analysis of Neural Feedback Loops: Techniques for
Linear and Nonlinear Systems [59.57462129637796]
This paper presents a backward reachability approach for safety verification of closed-loop systems with neural networks (NNs).
The presence of NNs in the feedback loop presents a unique set of problems due to the nonlinearities in their activation functions and because NN models are generally not invertible.
We present frameworks for calculating backprojection (BP) set over-approximations for both linear and nonlinear systems with control policies represented by feedforward NNs.
arXiv Detail & Related papers (2022-09-28T13:17:28Z)
- Robust Training and Verification of Implicit Neural Networks: A
Non-Euclidean Contractive Approach [64.23331120621118]
This paper proposes a theoretical and computational framework for training and robustness verification of implicit neural networks.
We introduce a related embedded network and show that the embedded network can be used to provide an $\ell_\infty$-norm box over-approximation of the reachable sets of the original network.
We apply our algorithms to train implicit neural networks on the MNIST dataset and compare the robustness of our models with the models trained via existing approaches in the literature.
arXiv Detail & Related papers (2022-08-08T03:13:24Z)
- Backward Reachability Analysis for Neural Feedback Loops [40.989393438716476]
This paper presents a backward reachability approach for safety verification of closed-loop systems with neural networks (NNs).
The presence of NNs in the feedback loop presents a unique set of problems due to the nonlinearities in their activation functions and because NN models are generally not invertible.
We present an algorithm to iteratively find backprojection (BP) set estimates over a given time horizon and demonstrate the ability to reduce conservativeness by up to 88% with low additional computational cost.
arXiv Detail & Related papers (2022-04-14T01:13:14Z)
- Reachability Analysis of Neural Feedback Loops [34.94930611635459]
This work focuses on estimating the forward reachable set of neural feedback loops (closed-loop systems with NN controllers).
Recent work provides bounds on these reachable sets, but the computationally tractable approaches yield overly conservative bounds.
This work bridges the gap by formulating a convex optimization problem for the reachability analysis of closed-loop systems with NN controllers.
arXiv Detail & Related papers (2021-08-09T16:11:57Z)
- Online Limited Memory Neural-Linear Bandits with Likelihood Matching [53.18698496031658]
We study neural-linear bandits for solving problems where both exploration and representation learning play an important role.
We propose a likelihood matching algorithm that is resilient to catastrophic forgetting and is completely online.
arXiv Detail & Related papers (2021-02-07T14:19:07Z)
- Efficient Reachability Analysis of Closed-Loop Systems with Neural
Network Controllers [39.27951763459939]
This work focuses on estimating the forward reachable set of closed-loop systems with NN controllers.
Recent work provides bounds on these reachable sets, yet the computationally efficient approaches provide overly conservative bounds.
This work bridges the gap by formulating a convex optimization problem for reachability analysis for closed-loop systems with NN controllers.
arXiv Detail & Related papers (2021-01-05T22:30:39Z)
- Certifying Incremental Quadratic Constraints for Neural Networks via
Convex Optimization [2.388501293246858]
We propose a convex program to certify incremental quadratic constraints on the map of neural networks over a region of interest.
These certificates can capture several useful properties such as (local) Lipschitz continuity, one-sided Lipschitz continuity, invertibility, and contraction.
arXiv Detail & Related papers (2020-12-10T21:15:00Z)
- Risk-Averse MPC via Visual-Inertial Input and Recurrent Networks for
Online Collision Avoidance [95.86944752753564]
We propose an online path planning architecture that extends the model predictive control (MPC) formulation to consider future location uncertainties.
Our algorithm combines an object detection pipeline with a recurrent neural network (RNN) which infers the covariance of state estimates.
The robustness of our methods is validated on complex quadruped robot dynamics and can be generally applied to most robotic platforms.
arXiv Detail & Related papers (2020-07-28T07:34:30Z)
- Lipschitz Recurrent Neural Networks [100.72827570987992]
We show that our Lipschitz recurrent unit is more robust with respect to input and parameter perturbations as compared to other continuous-time RNNs.
Our experiments demonstrate that the Lipschitz RNN can outperform existing recurrent units on a range of benchmark tasks.
arXiv Detail & Related papers (2020-06-22T08:44:52Z)
- Automatic Perturbation Analysis for Scalable Certified Robustness and
Beyond [171.07853346630057]
Linear relaxation based perturbation analysis (LiRPA) for neural networks has become a core component in robustness verification and certified defense.
We develop an automatic framework to enable perturbation analysis on any neural network structures.
We demonstrate LiRPA based certified defense on Tiny ImageNet and Downscaled ImageNet.
arXiv Detail & Related papers (2020-02-28T18:47:43Z)
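As a complement to the SDP sketch above, the interval-reachability entry in this list builds hyper-rectangular over-approximations from an inclusion function of the network and a decomposition function of the open-loop dynamics. The toy sketch below is not that paper's method (which targets nonlinear continuous-time systems via mixed monotonicity); it only conveys the flavor for a discrete-time linear system x_{k+1} = A x_k + B u_k with a ReLU network controller, using plain interval arithmetic. All names and shapes are illustrative.
```python
import numpy as np

def relu_net_interval(Ws, bs, xl, xu):
    """Interval inclusion function of a feedforward ReLU network on the box [xl, xu].
    Ws, bs list the weights/biases; ReLU is applied after every layer but the last."""
    lo, hi = xl, xu
    for k, (W, b) in enumerate(zip(Ws, bs)):
        Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
        lo, hi = Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b
        if k < len(Ws) - 1:                       # hidden-layer ReLU (monotone)
            lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)
    return lo, hi

def box_step(A, B, Ws, bs, xl, xu):
    """One step of the embedding system: maps a box on x_k to a box on x_{k+1}."""
    ul, uu = relu_net_interval(Ws, bs, xl, xu)
    Ap, An = np.maximum(A, 0.0), np.minimum(A, 0.0)
    Bp, Bn = np.maximum(B, 0.0), np.minimum(B, 0.0)
    return (Ap @ xl + An @ xu + Bp @ ul + Bn @ uu,
            Ap @ xu + An @ xl + Bp @ uu + Bn @ ul)
```
Iterating box_step from an initial box plays the role of simulating a single trajectory of the embedding system, producing a hyper-rectangular reachable tube; it is cheap but generally looser than the SDP-based bounds above.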
This list is automatically generated from the titles and abstracts of the papers on this site.