Safe Control with Neural Network Dynamic Models
- URL: http://arxiv.org/abs/2110.01110v1
- Date: Sun, 3 Oct 2021 22:13:49 GMT
- Title: Safe Control with Neural Network Dynamic Models
- Authors: Tianhao Wei and Changliu Liu
- Abstract summary: We propose MIND-SIS, the first method to derive safe control laws for Neural Network Dynamic Models (NNDM).
MIND-SIS guarantees forward invariance and finite convergence.
It has been numerically validated that MIND-SIS achieves safe and optimal control of NNDM.
- Score: 2.512827436728378
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Safety is critical in autonomous robotic systems. A safe control law ensures
forward invariance of a safe set (a subset of the state space). How to derive
a safe control law from a control-affine analytical dynamic model has been
extensively studied. However, in complex environments and
tasks, it is challenging and time-consuming to obtain a principled analytical
model of the system. In these situations, data-driven learning is extensively
used and the learned models are encoded in neural networks. How to formally
derive a safe control law with Neural Network Dynamic Models (NNDM) remains
unclear due to the lack of computationally tractable methods to deal with these
black-box functions. In fact, even finding the control that minimizes an
objective for NNDM without any safety constraint is still challenging. In this
work, we propose MIND-SIS (Mixed Integer for Neural network Dynamic model with
Safety Index Synthesis), the first method to derive safe control laws for NNDM.
The method includes two parts: 1) SIS, an algorithm for the offline synthesis
of the safety index (also known as a barrier function) using evolutionary
methods; and 2) MIND, an algorithm for online computation of the optimal and
safe control signal, which solves a constrained optimization using a
computationally efficient encoding of neural networks. It has been
theoretically proved that MIND-SIS guarantees forward invariance and finite
convergence, and it has been numerically validated that MIND-SIS achieves safe
and optimal control of NNDM. In our experiments, the optimality gap is less
than $10^{-8}$, and the safety constraint violation is $0$.
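To make the MIND step concrete, here is a minimal sketch, assuming a toy one-hidden-layer ReLU NNDM and the PuLP MILP solver, of the standard big-M encoding that turns the safe control problem into a mixed-integer program. The weights, the l1 tracking objective, and the half-space safety constraint are illustrative stand-ins, not the paper's implementation (the paper optimizes the original objective and constrains the synthesized safety index).

```python
# Hypothetical sketch of MIND's core step: big-M MILP encoding of a tiny
# one-hidden-layer ReLU NNDM, x_next = W2 @ relu(W1 @ [x; u] + b1) + b2,
# plus a linear stand-in for the safety-index constraint on the next state.
# All weights and bounds are made up for illustration; requires `pip install pulp`.
import pulp

W1 = [[0.6, -0.2, 0.5], [-0.3, 0.8, 0.4]]   # hidden layer: inputs are [x1, x2, u]
b1 = [0.1, -0.1]
W2 = [0.9, -0.5]                             # scalar next state, 1-d for brevity
b2 = 0.05

x = [0.4, -0.2]      # current state
u_ref = 0.3          # nominal control from the task objective
M = 100.0            # big-M constant; must upper-bound |pre-activation|

prob = pulp.LpProblem("mind_sketch", pulp.LpMinimize)
u = pulp.LpVariable("u", lowBound=-1.0, upBound=1.0)

# l1 deviation |u - u_ref| via a slack variable, keeping the objective linear
s = pulp.LpVariable("s", lowBound=0.0)
prob += s                      # objective: minimize deviation from u_ref
prob += u - u_ref <= s
prob += u_ref - u <= s

# Big-M encoding of h_i = relu(z_i) with z = W1 @ [x; u] + b1:
# d_i = 1 forces h_i = z_i (active unit), d_i = 0 forces h_i = 0 (inactive).
h = []
for i in range(2):
    z = W1[i][0] * x[0] + W1[i][1] * x[1] + W1[i][2] * u + b1[i]
    hi = pulp.LpVariable(f"h{i}", lowBound=0.0)
    di = pulp.LpVariable(f"d{i}", cat=pulp.LpBinary)
    prob += hi >= z
    prob += hi <= z + M * (1 - di)
    prob += hi <= M * di
    h.append(hi)

# Predicted next state, then a linear surrogate for the discrete-time safety
# condition phi(x_next) <= max(phi(x) - eta, 0); here simply x_next <= 0.5.
x_next = W2[0] * h[0] + W2[1] * h[1] + b2
prob += x_next <= 0.5

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("safe optimal control u* =", pulp.value(u))   # 0.24 here: just below u_ref
```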
Related papers
- Convex neural network synthesis for robustness in the 1-norm [0.0]
This paper proposes a method for generating an approximation of a neural network that is certifiably more robust.
An application to robustifying model predictive control is used to demonstrate the results.
arXiv Detail & Related papers (2024-05-29T12:17:09Z) - Real-Time Safe Control of Neural Network Dynamic Models with Sound Approximation [11.622680091231393]
We propose to use a sound approximation of the neural network dynamic models (NNDM) in the control synthesis.
We mitigate the errors introduced by the approximation and ensure persistent feasibility of the safe control problems.
Experiments with different neural dynamics and safety constraints show that with safety guaranteed, our NNDMs with sound approximation are 10-100 times faster than the safe control baseline.
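The paper's specific approximation scheme is not reproduced here; as a minimal sketch of what a sound approximation buys, interval bound propagation through a toy ReLU NNDM yields a box guaranteed to contain every reachable next state, so a safety check on the box is conservative but never unsound. All weights and bounds below are illustrative assumptions.

```python
# Hypothetical sketch of one simple sound approximation: interval bound
# propagation (IBP) through a ReLU NNDM gives a box that provably contains
# every possible next state for a box of (state, control) inputs.
import numpy as np

def ibp_layer(lo, hi, W, b):
    """Sound interval bounds for W @ z + b given elementwise bounds lo <= z <= hi."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

def nndm_box(lo, hi, layers):
    """Propagate an input box through (W, b) layers with ReLU between them."""
    for k, (W, b) in enumerate(layers):
        lo, hi = ibp_layer(lo, hi, W, b)
        if k < len(layers) - 1:          # ReLU on hidden layers only
            lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)
    return lo, hi

layers = [(np.array([[0.6, -0.2, 0.5], [-0.3, 0.8, 0.4]]), np.array([0.1, -0.1])),
          (np.array([[0.9, -0.5]]), np.array([0.05]))]
lo, hi = nndm_box(np.array([0.3, -0.3, -1.0]), np.array([0.5, -0.1, 1.0]), layers)
print("next state guaranteed inside", lo, hi)   # safe if hi <= safety threshold
```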
arXiv Detail & Related papers (2024-04-20T19:51:29Z) - System-level Safety Guard: Safe Tracking Control through Uncertain Neural Network Dynamics Models [8.16100000885664]
Neural networks (NNs) have been used in many control and robotics applications.
In this paper, we leverage NNs as predictive models for trajectory tracking of unknown dynamical systems.
The proposed MILP-based approach is empirically demonstrated in robot navigation and obstacle avoidance simulations.
arXiv Detail & Related papers (2023-12-11T19:50:51Z) - Probabilistic Reach-Avoid for Bayesian Neural Networks [71.67052234622781]
We show that an optimal synthesis algorithm can provide more than a four-fold increase in the number of certifiable states and more than a three-fold increase in the average guaranteed reach-avoid probability.
arXiv Detail & Related papers (2023-10-03T10:52:21Z) - ConCerNet: A Contrastive Learning Based Framework for Automated
Conservation Law Discovery and Trustworthy Dynamical System Prediction [82.81767856234956]
This paper proposes a new learning framework named ConCerNet to improve the trustworthiness of DNN-based dynamics modeling.
We show that our method consistently outperforms the baseline neural networks in both coordinate error and conservation metrics.
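As an illustrative sketch (not ConCerNet's actual loss), one way to phrase conservation-law discovery contrastively is to reward a learned scalar for staying constant within a trajectory while varying across trajectories:

```python
# Hypothetical sketch of the idea behind contrastive conservation-law
# discovery: a learned scalar g_theta(x) should be constant along a trajectory
# (positive pairs) but differ across trajectories (negatives).
import numpy as np

def conservation_contrastive_loss(g_vals, eps=1e-8):
    """g_vals: [n_traj, n_steps] array of g_theta evaluated along trajectories."""
    within = g_vals.var(axis=1).mean()   # ~0 if g is conserved along each run
    across = g_vals.mean(axis=1).var()   # large, to rule out the trivial g = const
    return within / (across + eps)

rng = np.random.default_rng(0)
g_vals = rng.normal(size=(8, 50))        # stand-in for g_theta(x_t) values
print("loss =", conservation_contrastive_loss(g_vals))
```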
arXiv Detail & Related papers (2023-02-11T21:07:30Z) - Neural Abstractions [72.42530499990028]
We present a novel method for the safety verification of nonlinear dynamical models that uses neural networks to represent abstractions of their dynamics.
We demonstrate that our approach performs comparably to the mature tool Flow* on existing benchmark nonlinear models.
arXiv Detail & Related papers (2023-01-27T12:38:09Z) - The #DNN-Verification Problem: Counting Unsafe Inputs for Deep Neural
Networks [94.63547069706459]
The #DNN-Verification problem involves counting the number of input configurations of a DNN that result in a violation of a safety property.
We propose a novel approach that returns the exact count of violations.
We present experimental results on a set of safety-critical benchmarks.
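A minimal sketch of the counting problem itself, assuming a toy two-input network and a quantized input grid (the paper's method avoids this exhaustive enumeration):

```python
# Hypothetical brute-force illustration of #DNN-Verification: for a tiny
# network over a small quantized input domain, count exactly how many inputs
# violate a safety property. Weights and property are illustrative.
import itertools
import numpy as np

W1 = np.array([[1.0, -1.0], [0.5, 0.5]]); b1 = np.array([0.0, -0.2])
W2 = np.array([[1.0, 1.0]]);              b2 = np.array([-0.3])

def net(x):
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

grid = np.linspace(-1.0, 1.0, 101)           # quantized input domain
violations = sum(
    1 for x1, x2 in itertools.product(grid, grid)
    if net(np.array([x1, x2]))[0] > 1.0      # safety property: output <= 1
)
print(f"unsafe inputs: {violations} / {101 * 101}")
```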
arXiv Detail & Related papers (2023-01-17T18:32:01Z) - Robust Training and Verification of Implicit Neural Networks: A
Non-Euclidean Contractive Approach [64.23331120621118]
This paper proposes a theoretical and computational framework for training and robustness verification of implicit neural networks.
We introduce a related embedded network and show that the embedded network can be used to provide an $\ell_\infty$-norm box over-approximation of the reachable sets of the original network.
We apply our algorithms to train implicit neural networks on the MNIST dataset and compare the robustness of our models with the models trained via existing approaches in the literature.
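As a hedged sketch of the embedded-network idea, the coupled interval iteration below computes, for an implicit layer z = relu(Wz + Ux + b) with a contractive W, a box guaranteed to contain its fixed points for all inputs in a given box; the weights are illustrative and the paper's construction is more general.

```python
# Hypothetical sketch of an embedded network: a coupled lower/upper interval
# iteration whose fixed point is an l_infty box over-approximating all fixed
# points of z = relu(W z + U x + b) for x_lo <= x <= x_hi.
import numpy as np

def embedded_box(W, U, b, x_lo, x_hi, iters=200):
    Wp, Wn = np.maximum(W, 0), np.minimum(W, 0)
    Up, Un = np.maximum(U, 0), np.minimum(U, 0)
    z_lo, z_hi = np.zeros(len(b)), np.zeros(len(b))
    for _ in range(iters):
        new_lo = np.maximum(Wp @ z_lo + Wn @ z_hi + Up @ x_lo + Un @ x_hi + b, 0)
        new_hi = np.maximum(Wp @ z_hi + Wn @ z_lo + Up @ x_hi + Un @ x_lo + b, 0)
        z_lo, z_hi = new_lo, new_hi
    return z_lo, z_hi

W = np.array([[0.2, -0.1], [0.05, 0.3]])   # ||W||_inf < 1, so the map contracts
U = np.eye(2)
b = np.array([0.1, -0.1])
lo, hi = embedded_box(W, U, b, np.array([-0.1, -0.1]), np.array([0.1, 0.1]))
print("reachable-set box:", lo, hi)
```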
arXiv Detail & Related papers (2022-08-08T03:13:24Z) - Model-Based Safe Policy Search from Signal Temporal Logic Specifications
Using Recurrent Neural Networks [1.005130974691351]
We propose a policy search approach to learn controllers from specifications given as Signal Temporal Logic (STL) formulae.
The system model is unknown, and it is learned together with the control policy.
The results show that our approach can satisfy the given specification within very few system runs, and therefore has the potential to be used for online control.
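The quantitative STL semantics that such a policy search maximizes can be sketched in a few lines; the formulas and traces below are illustrative, not the paper's benchmarks.

```python
# Hypothetical sketch of STL robustness: a positive value means the trace
# satisfies the formula, and its magnitude measures the satisfaction margin;
# policy search maximizes this quantity over controller parameters.
import numpy as np

def robustness_always_bound(xs, c):
    """rho( G_[0,T] (|x| <= c) ): worst-case margin over the whole trace."""
    return float(np.min(c - np.abs(xs)))

def robustness_eventually_reach(xs, goal, tol):
    """rho( F_[0,T] (|x - goal| <= tol) ): best margin at any single step."""
    return float(np.max(tol - np.abs(xs - goal)))

trace = np.array([0.0, 0.3, 0.5, 0.4, 0.1])
print(robustness_always_bound(trace, c=0.6))          # 0.1 > 0: satisfied
print(robustness_eventually_reach(trace, 0.5, 0.05))  # 0.05 >= 0: satisfied
```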
arXiv Detail & Related papers (2021-03-29T20:21:55Z) - Generating Probabilistic Safety Guarantees for Neural Network
Controllers [30.34898838361206]
We use a dynamics model to determine the output properties that must hold for a neural network controller to operate safely.
We develop an adaptive verification approach to efficiently generate an overapproximation of the neural network policy.
We show that our method is able to generate meaningful probabilistic safety guarantees for aircraft collision avoidance neural networks.
arXiv Detail & Related papers (2021-03-01T18:48:21Z) - Chance-Constrained Trajectory Optimization for Safe Exploration and
Learning of Nonlinear Systems [81.7983463275447]
Learning-based control algorithms require data collection with abundant supervision for training.
We present a new approach for optimal motion planning with safe exploration that integrates chance-constrained optimal control with dynamics learning and feedback control.
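A minimal sketch of the standard Gaussian reformulation commonly used for such chance constraints (illustrative numbers, not the paper's formulation):

```python
# Hypothetical sketch: under a Gaussian state estimate x ~ N(mu, Sigma), the
# chance constraint Pr(a^T x <= b) >= 1 - eps tightens deterministically to
# a^T mu + Phi^{-1}(1 - eps) * sqrt(a^T Sigma a) <= b.
import numpy as np
from scipy.stats import norm

def chance_constraint_ok(a, b, mu, Sigma, eps=0.05):
    margin = norm.ppf(1.0 - eps) * np.sqrt(a @ Sigma @ a)
    return a @ mu + margin <= b

mu = np.array([0.5, 0.2])                     # predicted mean state
Sigma = np.array([[0.01, 0.0], [0.0, 0.02]])  # learned-dynamics uncertainty
a, b = np.array([1.0, 0.0]), 1.0              # half-space constraint x1 <= 1
print(chance_constraint_ok(a, b, mu, Sigma))  # True: holds with >= 95% prob.
```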
arXiv Detail & Related papers (2020-05-09T05:57:43Z)