Provably Correct Training of Neural Network Controllers Using
Reachability Analysis
- URL: http://arxiv.org/abs/2102.10806v1
- Date: Mon, 22 Feb 2021 07:08:11 GMT
- Title: Provably Correct Training of Neural Network Controllers Using
Reachability Analysis
- Authors: Xiaowu Sun, Yasser Shoukry
- Abstract summary: We consider the problem of training neural network (NN) controllers for cyber-physical systems that are guaranteed to satisfy safety and liveness properties.
Our approach is to combine model-based design methodologies for dynamical systems with data-driven approaches to achieve this target.
- Score: 3.04585143845864
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we consider the problem of training neural network (NN)
controllers for cyber-physical systems (CPS) that are guaranteed to satisfy
safety and liveness properties. Our approach is to combine model-based design
methodologies for dynamical systems with data-driven approaches to achieve this
target. Given a mathematical model of the dynamical system, we compute a
finite-state abstract model that captures the closed-loop behavior under all
possible neural network controllers. Using this finite-state abstract model,
our framework identifies the subset of NN weights that are guaranteed to
satisfy the safety requirements. During training, we augment the learning
algorithm with a NN weight projection operator that enforces the resulting NN
to be provably safe. To account for the liveness properties, the proposed
framework uses the finite-state abstract model to identify candidate NN weights
that may satisfy the liveness properties. Using such candidate NN weights, the
proposed framework biases the NN training to achieve the liveness
specification. Achieving the guarantees above cannot be ensured without
correctness guarantees on the NN architecture, which controls the NN's
expressiveness. Therefore, a cornerstone of the proposed framework is the
ability to select provably correct NN architectures automatically.
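As a rough illustration of the projected-training idea described above (a
minimal sketch, not the paper's abstraction-derived operator), the loop below
applies an ordinary gradient step and then projects each parameter tensor back
into an assumed safe set, modeled here as a simple per-layer L2 ball of radius
safe_radius; in the paper, the safe weight set is instead computed from the
finite-state abstract model.

```python
import torch
import torch.nn as nn

def project_weights(model: nn.Module, safe_radius: float) -> None:
    """Project each parameter tensor into an L2 ball -- a hypothetical
    stand-in for the paper's abstraction-derived safe weight set."""
    with torch.no_grad():
        for p in model.parameters():
            norm = p.norm()
            if norm > safe_radius:
                p.mul_(safe_radius / norm)

controller = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(controller.parameters(), lr=1e-3)

for step in range(1000):
    x = torch.randn(64, 4)                        # sampled plant states
    loss = (controller(x) ** 2).mean()            # placeholder control objective
    opt.zero_grad()
    loss.backward()
    opt.step()                                    # unconstrained gradient step
    project_weights(controller, safe_radius=5.0)  # pull weights back into the safe set
```

The liveness bias could enter analogously, e.g. as an extra loss term pulling
the weights toward the candidate set identified from the abstract model; that
part is not shown here.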
Related papers
- Lyapunov-stable Neural Control for State and Output Feedback: A Novel Formulation [67.63756749551924]
Learning-based neural network (NN) control policies have shown impressive empirical performance in a wide range of tasks in robotics and control.
Lyapunov stability guarantees over the region-of-attraction (ROA) for NN controllers with nonlinear dynamical systems are challenging to obtain.
We demonstrate a new framework for learning NN controllers together with Lyapunov certificates using fast empirical falsification and strategic regularizations; a schematic of such a falsification-guided loss is sketched below.
arXiv Detail & Related papers (2024-04-11T17:49:15Z)
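A minimal sketch of a falsification-guided Lyapunov loss, assuming a known
discrete-time dynamics function f, a controller pi, and a candidate Lyapunov
network V (all placeholders; the paper's falsifier and regularizations are
more involved):

```python
import torch

def lyapunov_violation_loss(V, pi, f, states, margin=1e-3):
    """Hinge loss on the decrease condition V(f(x, pi(x))) < V(x)."""
    next_states = f(states, pi(states))           # one closed-loop step
    decrease = V(next_states) - V(states)         # should be negative
    return torch.relu(decrease + margin).mean()   # penalize violations only
```

Empirical falsification then amounts to searching (e.g., by random sampling
or gradient ascent on the decrease term) for states that violate the
condition and feeding them back into the training batch.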
- Auto-Train-Once: Controller Network Guided Automatic Network Pruning from Scratch [72.26822499434446]
Auto-Train-Once (ATO) is an innovative network pruning algorithm designed to automatically reduce the computational and storage costs of DNNs.
We provide a comprehensive convergence analysis as well as extensive experiments, and the results show that our approach achieves state-of-the-art performance across various model architectures.
arXiv Detail & Related papers (2024-03-21T02:33:37Z) - Safety Filter Design for Neural Network Systems via Convex Optimization [35.87465363928146]
We propose a novel safety filter that relies on convex optimization to ensure safety for a neural network (NN) system.
We demonstrate the efficacy of the proposed framework numerically on a nonlinear pendulum system; a generic projection-style filter is sketched below.
arXiv Detail & Related papers (2023-08-16T01:30:13Z)
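As a loose illustration (not the paper's formulation), a convex-optimization
safety filter can be realized as a small QP that minimally modifies the NN's
proposed action to satisfy assumed safety constraints; the box and half-space
constraints below are placeholders:

```python
import cvxpy as cp
import numpy as np

def filter_action(u_nn: np.ndarray, u_min, u_max, A=None, b=None):
    """Project the NN's proposed action onto an assumed safe set via a QP."""
    u = cp.Variable(u_nn.shape[0])
    constraints = [u >= u_min, u <= u_max]     # actuator box (assumed)
    if A is not None:
        constraints.append(A @ u <= b)         # e.g. a state-dependent half-space
    prob = cp.Problem(cp.Minimize(cp.sum_squares(u - u_nn)), constraints)
    prob.solve()
    return u.value                             # closest safe action
```

Run after every controller evaluation, such a filter ensures the closed loop
never applies an action outside the modeled safe set.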
- Scaling Model Checking for DNN Analysis via State-Space Reduction and Input Segmentation (Extended Version) [12.272381003294026]
Existing frameworks provide robustness and/or safety guarantees for the trained NNs.
We propose FANNet, the first model-checking-based framework for analyzing a broader range of NN properties.
This work develops state-space reduction and input segmentation approaches to improve the scalability and timing efficiency of formal NN analysis; a toy version of input segmentation is sketched below.
arXiv Detail & Related papers (2023-06-29T22:18:07Z)
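A toy rendering of the input-segmentation idea, using interval bound
propagation as the per-segment check (FANNet itself is model-checking-based;
the ReLU layers, bounds, and output property here are illustrative
assumptions):

```python
import numpy as np

def relu(v):
    return np.maximum(v, 0.0)

def interval_bounds(W, b, lo, hi):
    """One affine + ReLU layer under interval arithmetic."""
    Wp, Wn = np.maximum(W, 0), np.minimum(W, 0)
    return relu(Wp @ lo + Wn @ hi + b), relu(Wp @ hi + Wn @ lo + b)

def check_segment(layers, lo, hi, out_max, depth=0, max_depth=8):
    """Certify output <= out_max on the box [lo, hi], splitting as needed."""
    l, h = lo, hi
    for W, b in layers:
        l, h = interval_bounds(W, b, l, h)
    if np.all(h <= out_max):
        return True                       # property certified on this box
    if depth == max_depth:
        return False                      # inconclusive: report as unproven
    mid = (lo + hi) / 2
    i = int(np.argmax(hi - lo))           # split the widest input dimension
    hi1, lo2 = hi.copy(), lo.copy()
    hi1[i], lo2[i] = mid[i], mid[i]
    return (check_segment(layers, lo, hi1, out_max, depth + 1, max_depth)
            and check_segment(layers, lo2, hi, out_max, depth + 1, max_depth))
```

Smaller segments yield tighter per-segment bounds, which is the scalability
lever the segmentation approach exploits.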
- ConCerNet: A Contrastive Learning Based Framework for Automated Conservation Law Discovery and Trustworthy Dynamical System Prediction [82.81767856234956]
This paper proposes a new learning framework named ConCerNet to improve the trustworthiness of DNN-based dynamics modeling.
We show that our method consistently outperforms the baseline neural networks in both coordinate error and conservation metrics.
arXiv Detail & Related papers (2023-02-11T21:07:30Z)
- Backward Reachability Analysis of Neural Feedback Loops: Techniques for Linear and Nonlinear Systems [59.57462129637796]
This paper presents a backward reachability approach for safety verification of closed-loop systems with neural networks (NNs).
The presence of NNs in the feedback loop presents a unique set of problems due to the nonlinearities in their activation functions and because NN models are generally not invertible.
We present frameworks for computing backprojection (BP) set over-approximations for both linear and nonlinear systems with control policies represented by feedforward NNs; a coarse grid-based sketch of one backward step is given below.
arXiv Detail & Related papers (2022-09-28T13:17:28Z)
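A deliberately coarse sketch of one backward step, assuming linear dynamics
x' = Ax + Bu with a scalar control bounded in [u_lo, u_hi] (a stand-in for
interval bounds on the NN's output); the paper's BP over-approximations are
far tighter and also handle nonlinear systems:

```python
import numpy as np

def backproject_grid(grid, A, B, u_lo, u_hi, target_lo, target_hi):
    """Keep grid states whose one-step image, over all admissible u, can
    touch the target box -- an over-approximate membership test."""
    bp = []
    for x in grid:
        ends = np.stack([A @ x + B * u_lo, A @ x + B * u_hi])
        lo, hi = ends.min(axis=0), ends.max(axis=0)  # box around the u-segment
        # boxes intersect iff they overlap in every coordinate
        if np.all(hi >= target_lo) and np.all(lo <= target_hi):
            bp.append(x)                             # possible predecessor
    return np.array(bp)
```

The bounding-box test errs on the side of inclusion, which is the safe
direction when the target is an avoid set.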
- Automated Repair of Neural Networks [0.26651200086513094]
We introduce a framework for repairing unsafe NNs w.r.t. a safety specification.
Our method is able to search for a new, safe NN representation, by modifying only a few of its weight values.
We perform extensive experiments which demonstrate the capability of our proposed framework to yield safe NNs w.r.t. the safety specification; a naive few-weight repair loop is sketched below.
arXiv Detail & Related papers (2022-07-17T12:42:24Z)
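A naive illustration of the few-weight idea (the paper's search is smarter):
freeze all parameters, unmask only the k weights most implicated by the
violation gradient, and minimize an assumed violation measure on
counterexample inputs.

```python
import torch

def repair_few_weights(model, violation_loss, counterexamples, k=10, steps=200):
    """Optimize only the k most violation-relevant weights."""
    loss = violation_loss(model(counterexamples))
    grads = torch.autograd.grad(loss, list(model.parameters()))
    # Rank individual weights by |gradient| and build per-parameter masks.
    flat = torch.cat([g.abs().flatten() for g in grads])
    threshold = flat.topk(k).values.min()
    masks = [(g.abs() >= threshold).float() for g in grads]
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(steps):
        opt.zero_grad()
        violation_loss(model(counterexamples)).backward()
        with torch.no_grad():
            for p, m in zip(model.parameters(), masks):
                p.grad.mul_(m)        # zero gradients outside the mask
        opt.step()
```

Keeping the edit small preserves behavior on inputs far from the
counterexamples, which is the point of few-weight repair.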
- Comparative Analysis of Interval Reachability for Robust Implicit and Feedforward Neural Networks [64.23331120621118]
We use interval reachability analysis to obtain robustness guarantees for implicit neural networks (INNs).
INNs are a class of implicit learning models that use implicit equations as layers.
We show that our approach performs at least as well as, and generally better than, applying state-of-the-art interval bound propagation methods to INNs; an interval fixed-point iteration for one implicit layer is sketched below.
arXiv Detail & Related papers (2022-04-01T03:31:27Z)
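A minimal sketch of interval reachability for a single implicit layer
z = ReLU(Wz + Ux + b), iterating interval bounds on z toward a fixed point.
It assumes the iteration converges (e.g., W is a contraction in a suitable
norm); the weights and input box are illustrative.

```python
import numpy as np

def relu(v):
    return np.maximum(v, 0.0)

def inn_interval_fixed_point(W, U, b, x_lo, x_hi, iters=100):
    """Interval bounds on z solving z = relu(W z + U x + b), x in [x_lo, x_hi]."""
    n = W.shape[0]
    z_lo, z_hi = np.zeros(n), np.zeros(n)
    Wp, Wn = np.maximum(W, 0), np.minimum(W, 0)
    Up, Un = np.maximum(U, 0), np.minimum(U, 0)
    for _ in range(iters):
        # standard interval arithmetic: split matrices by sign
        z_lo = relu(Wp @ z_lo + Wn @ z_hi + Up @ x_lo + Un @ x_hi + b)
        z_hi = relu(Wp @ z_hi + Wn @ z_lo + Up @ x_hi + Un @ x_lo + b)
    return z_lo, z_hi
```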
- Provably Safe Model-Based Meta Reinforcement Learning: An Abstraction-Based Approach [3.569867801312134]
We consider the problem of training a provably safe Neural Network (NN) controller for uncertain nonlinear dynamical systems.
Our approach is to learn a set of NN controllers during the training phase.
When the task becomes available at runtime, our framework will carefully select a subset of these NN controllers and compose them to form the final NN controller; a schematic dispatcher is sketched below.
arXiv Detail & Related papers (2021-09-03T00:38:05Z)
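A schematic of the runtime composition step; the selector and controller
library are placeholders for the paper's abstraction-based choice:

```python
from typing import Callable, Dict
import numpy as np

Controller = Callable[[np.ndarray], np.ndarray]

def compose(selector: Callable[[np.ndarray], int],
            library: Dict[int, Controller]) -> Controller:
    """Return one controller that dispatches per-state to a sub-controller."""
    def composed(x: np.ndarray) -> np.ndarray:
        return library[selector(x)](x)   # pick the controller for x's region
    return composed
```

Here selector would map a concrete state to its abstract region, so safety
arguments made per region carry over to the composed controller.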
- Modeling from Features: a Mean-field Framework for Over-parameterized Deep Neural Networks [54.27962244835622]
This paper proposes a new mean-field framework for over-parameterized deep neural networks (DNNs).
In this framework, a DNN is represented by probability measures and functions over its features in the continuous limit.
We illustrate the framework via the standard DNN and the Residual Network (Res-Net) architectures; the classical two-layer mean-field representation that this view generalizes is recalled below.
arXiv Detail & Related papers (2020-07-03T01:37:16Z)
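For orientation only, the classical two-layer mean-field limit (a standard
result, not the paper's deep, feature-based formulation) represents the
network as an integral against a measure mu on neuron weights:

```latex
% Two-layer mean-field limit: neurons are i.i.d. samples from a measure mu,
% and the finite network is a Monte Carlo estimate of the integral.
f(x) \;=\; \int a\,\sigma\!\left(w^{\top}x\right)\,\mathrm{d}\mu(a, w)
\;\approx\; \frac{1}{m}\sum_{i=1}^{m} a_i\,\sigma\!\left(w_i^{\top}x\right).
```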