Safety Filter Design for Neural Network Systems via Convex Optimization
- URL: http://arxiv.org/abs/2308.08086v2
- Date: Mon, 28 Aug 2023 15:40:02 GMT
- Title: Safety Filter Design for Neural Network Systems via Convex Optimization
- Authors: Shaoru Chen, Kong Yao Chee, Nikolai Matni, M. Ani Hsieh, George J.
Pappas
- Abstract summary: We propose a novel safety filter that relies on convex optimization to ensure safety for a neural network (NN) system.
We demonstrate the efficacy of the proposed framework numerically on a nonlinear pendulum system.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: With the increase in data availability, it has been widely
demonstrated that neural networks (NNs) can capture complex system dynamics
precisely in a data-driven manner. However, the architectural complexity and
nonlinearity of NNs make it challenging to synthesize a provably safe
controller. In this work, we propose a novel safety filter that relies on
convex optimization to ensure safety for an NN system subject to additive
disturbances that capture modeling errors. Our approach leverages tools from NN
verification to over-approximate NN dynamics with a set of linear bounds,
followed by an application of robust linear MPC to search for controllers that
can guarantee robust constraint satisfaction. We demonstrate the efficacy of
the proposed framework numerically on a nonlinear pendulum system.
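As a minimal sketch of the filtering step (a scalar stand-in for the paper's robust linear MPC; all variable names and numbers here are illustrative assumptions, not taken from the paper): once NN verification yields linear bounds a_l*x + b*u + c_l <= f(x, u) <= a_u*x + b*u + c_u and the additive disturbance satisfies |w| <= w_max, keeping the next state in [-x_max, x_max] for every admissible disturbance reduces to projecting the nominal input onto an interval:

```python
def safety_filter(u_nom, x, a_l, a_u, b, c_l, c_u, w_max, x_max):
    """Project u_nom onto the set of inputs that keep the worst-case
    next state inside [-x_max, x_max].

    Scalar sketch: assumes b > 0 and linear over-approximation
    a_l*x + b*u + c_l - w_max <= x_next <= a_u*x + b*u + c_u + w_max.
    """
    # Lower bound on x_next must stay above -x_max for the worst disturbance.
    u_lo = (-x_max - a_l * x - c_l + w_max) / b
    # Upper bound on x_next must stay below +x_max for the worst disturbance.
    u_hi = (x_max - a_u * x - c_u - w_max) / b
    if u_lo > u_hi:
        raise ValueError("safe input set is empty at this state")
    # Convex projection of a scalar onto an interval is a clip.
    return min(max(u_nom, u_lo), u_hi)
```

In this one-dimensional case the convex program collapses to a closed-form clip; in the paper's setting the analogous projection is a robust linear MPC problem solved over a horizon.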
Related papers
- Lyapunov-stable Neural Control for State and Output Feedback: A Novel Formulation [67.63756749551924]
Learning-based neural network (NN) control policies have shown impressive empirical performance in a wide range of tasks in robotics and control.
Lyapunov stability guarantees over the region-of-attraction (ROA) for NN controllers with nonlinear dynamical systems are challenging to obtain.
We demonstrate a new framework for learning NN controllers together with Lyapunov certificates using fast empirical falsification and strategic regularizations.
arXiv Detail & Related papers (2024-04-11T17:49:15Z)
- Scaling #DNN-Verification Tools with Efficient Bound Propagation and Parallel Computing [57.49021927832259]
Deep Neural Networks (DNNs) are powerful tools that have shown extraordinary results in many scenarios.
However, their intricate designs and lack of transparency raise safety concerns when applied in real-world applications.
Formal Verification (FV) of DNNs has emerged as a valuable solution to provide provable guarantees on the safety aspect.
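A simple instance of the bound propagation such tools scale up (an illustrative sketch, not the algorithm of this particular paper) is interval bound propagation through one ReLU layer:

```python
import numpy as np

def ibp_relu_layer(W, b, lo, hi):
    """Propagate an elementwise input interval [lo, hi] through
    x -> relu(W @ x + b), returning sound output bounds."""
    W_pos = np.maximum(W, 0.0)  # positive entries pair with same-side bounds
    W_neg = np.minimum(W, 0.0)  # negative entries pair with opposite bounds
    pre_lo = W_pos @ lo + W_neg @ hi + b
    pre_hi = W_pos @ hi + W_neg @ lo + b
    # ReLU is monotone, so it maps interval bounds to interval bounds.
    return np.maximum(pre_lo, 0.0), np.maximum(pre_hi, 0.0)
```

Composing this over all layers gives provable, if conservative, output bounds for the whole network.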
arXiv Detail & Related papers (2023-12-10T13:51:25Z)
- Approximate non-linear model predictive control with safety-augmented neural networks [7.670727843779155]
This paper studies approximations of model predictive control (MPC) controllers via neural networks (NNs) to achieve fast online evaluation.
We propose safety augmentation that yields deterministic guarantees for convergence and constraint satisfaction despite approximation inaccuracies.
arXiv Detail & Related papers (2023-04-19T11:27:06Z)
- Backward Reachability Analysis of Neural Feedback Loops: Techniques for Linear and Nonlinear Systems [59.57462129637796]
This paper presents a backward reachability approach for safety verification of closed-loop systems with neural networks (NNs).
The presence of NNs in the feedback loop presents a unique set of problems due to the nonlinearities in their activation functions and because NN models are generally not invertible.
We present frameworks for calculating backprojection (BP) set over-approximations for both linear and nonlinear systems with control policies represented by feedforward NNs.
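For intuition only (a scalar linear sketch with a fixed feedback gain, not the paper's NN-policy analysis), a one-step backprojection of a target interval can be computed exactly:

```python
def backproject_interval(t_lo, t_hi, a, b, k):
    """One-step backprojection of the target set [t_lo, t_hi]
    under the scalar closed loop x_next = (a + b*k) * x.

    Returns the interval of states that land in the target."""
    g = a + b * k  # closed-loop gain
    if g == 0:
        raise ValueError("closed loop maps every state to 0")
    lo, hi = t_lo / g, t_hi / g
    # A negative gain flips the interval's orientation.
    return (lo, hi) if g > 0 else (hi, lo)
```

With an NN controller this preimage has no closed form, which is why the paper resorts to over-approximating the BP sets.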
arXiv Detail & Related papers (2022-09-28T13:17:28Z)
- Neural Network Optimal Feedback Control with Guaranteed Local Stability [2.8725913509167156]
We show that some neural network (NN) controllers with high test accuracy can fail to even locally stabilize the dynamic system.
We propose several novel NN architectures, which we show guarantee local stability while retaining the semi-global approximation capacity to learn the optimal feedback policy.
arXiv Detail & Related papers (2022-05-01T04:23:24Z)
- Neural network optimal feedback control with enhanced closed loop stability [3.0981875303080795]
Recent research has shown that supervised learning can be an effective tool for designing optimal feedback controllers for high-dimensional nonlinear dynamic systems.
But the behavior of these neural network (NN) controllers is still not well understood.
In this paper we use numerical simulations to demonstrate that typical test accuracy metrics do not effectively capture the ability of an NN controller to stabilize a system.
arXiv Detail & Related papers (2021-09-15T17:59:20Z)
- A novel Deep Neural Network architecture for non-linear system identification [78.69776924618505]
We present a novel Deep Neural Network (DNN) architecture for non-linear system identification.
Inspired by fading memory systems, we introduce inductive bias (on the architecture) and regularization (on the loss function).
This architecture allows for automatic complexity selection based solely on available data.
arXiv Detail & Related papers (2021-06-06T10:06:07Z)
- Provably Correct Training of Neural Network Controllers Using Reachability Analysis [3.04585143845864]
We consider the problem of training neural network (NN) controllers for cyber-physical systems that are guaranteed to satisfy safety and liveness properties.
Our approach is to combine model-based design methodologies for dynamical systems with data-driven approaches to achieve this target.
arXiv Detail & Related papers (2021-02-22T07:08:11Z)
- Online Limited Memory Neural-Linear Bandits with Likelihood Matching [53.18698496031658]
We study neural-linear bandits for solving problems where both exploration and representation learning play an important role.
We propose a likelihood matching algorithm that is resilient to catastrophic forgetting and is completely online.
arXiv Detail & Related papers (2021-02-07T14:19:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.