Risk Verification of Stochastic Systems with Neural Network Controllers
- URL: http://arxiv.org/abs/2209.09881v1
- Date: Fri, 26 Aug 2022 20:09:55 GMT
- Title: Risk Verification of Stochastic Systems with Neural Network Controllers
- Authors: Matthew Cleaveland, Lars Lindemann, Radoslav Ivanov, George Pappas
- Abstract summary: We present a data-driven framework for verifying the risk of dynamical systems with neural network (NN) controllers.
Given a control system, an NN controller, and a specification equipped with a notion of trace robustness, we collect trajectories from the system.
We compute risk metrics over these robustness values to estimate the risk that the NN controller will not satisfy the specification.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Motivated by the fragility of neural network (NN) controllers in
safety-critical applications, we present a data-driven framework for verifying
the risk of stochastic dynamical systems with NN controllers. Given a
stochastic control system, an NN controller, and a specification equipped with
a notion of trace robustness (e.g., constraint functions or signal temporal
logic), we collect trajectories from the system that may or may not satisfy the
specification. In particular, each of the trajectories produces a robustness
value that indicates how well (severely) the specification is satisfied
(violated). We then compute risk metrics over these robustness values to
estimate the risk that the NN controller will not satisfy the specification. We
are further interested in quantifying the difference in risk between two
systems, and we show how the risk estimated from a nominal system can provide
an upper bound on the risk of a perturbed version of the system. The
tightness of this bound depends on how close the two systems are in terms of
the closeness of their trajectories. For Lipschitz continuous and
incrementally input-to-state stable systems, we show how to exactly quantify
system closeness with varying degrees of conservatism, while we estimate system
closeness for more general systems from data in our experiments. We demonstrate
our risk verification approach on two case studies, an underwater vehicle and
an F1/10 autonomous car.
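As a concrete illustration of the estimation step in the abstract, here is a minimal sketch that computes empirical value-at-risk and conditional value-at-risk over a batch of robustness values; the function names and placeholder data are illustrative assumptions, not the paper's code.

```python
# Minimal sketch: given robustness values rho collected from closed-loop
# trajectories, estimate the risk that the specification is violated.
import numpy as np

def estimate_var(rho, beta=0.95):
    """Empirical value-at-risk of the loss -rho at level beta."""
    losses = -np.asarray(rho)               # low robustness = high loss
    return float(np.quantile(losses, beta))

def estimate_cvar(rho, beta=0.95):
    """Empirical conditional value-at-risk: mean loss beyond the VaR."""
    losses = -np.asarray(rho)
    var = np.quantile(losses, beta)
    return float(losses[losses >= var].mean())

# Placeholder: 1000 robustness values from simulated trajectories.
rng = np.random.default_rng(0)
rho = rng.normal(loc=0.5, scale=0.3, size=1000)
print(estimate_var(rho), estimate_cvar(rho))
```

A positive CVaR here means that, on average, the worst (1 - beta) fraction of trajectories has negative robustness, i.e., violates the specification.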
Related papers
- Automatic AI controller that can drive with confidence: steering vehicle with uncertainty knowledge [3.131134048419781]
This research focuses on the development of a vehicle's lateral control system using a machine learning framework.
We employ a Bayesian Neural Network (BNN), a probabilistic learning model, to address uncertainty quantification.
By establishing a confidence threshold, we can trigger manual intervention, ensuring that the algorithm relinquishes control when it operates outside of safe parameters.
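As a rough sketch of that hand-over logic, the snippet below uses the spread of a small ensemble as a stand-in for the BNN's predictive uncertainty; the threshold value, model, and names are illustrative assumptions.

```python
import numpy as np

def lateral_command(models, x, sigma_max=0.05):
    """Return (steering command, autonomous?) from an uncertainty-aware model."""
    preds = np.array([m(x) for m in models])   # one prediction per member
    mean, std = preds.mean(), preds.std()
    if std > sigma_max:                        # confidence threshold exceeded:
        return None, False                     # relinquish control to a human
    return float(mean), True                   # confident: stay autonomous

# Toy ensemble of steering models that disagree slightly.
ensemble = [lambda x, w=w: w * x for w in (0.97, 1.00, 1.03)]
print(lateral_command(ensemble, x=0.3))
```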
arXiv Detail & Related papers (2024-04-24T23:22:37Z)
- System-level Safety Guard: Safe Tracking Control through Uncertain Neural Network Dynamics Models [8.16100000885664]
Neural networks (NNs) have been considered in many control and robotics applications.
In this paper, we leverage the NNs as predictive models for trajectory tracking of unknown dynamical systems.
The proposed MILP-based approach is empirically demonstrated in robot navigation and obstacle avoidance simulations.
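Such MILP formulations typically rest on a big-M encoding of each ReLU unit; the snippet below encodes a single neuron with PuLP and bounds its output over a box of inputs. The weights, bounds, and big-M constant are illustrative assumptions, not the paper's model.

```python
# Big-M MILP encoding of y = max(0, 2x + 0.5) for x in [-1, 1].
import pulp

prob = pulp.LpProblem("relu_bound", pulp.LpMinimize)
x = pulp.LpVariable("x", lowBound=-1.0, upBound=1.0)   # bounded input
y = pulp.LpVariable("y", lowBound=0.0)                 # post-activation value
d = pulp.LpVariable("d", cat="Binary")                 # active/inactive phase
M = 10.0                                               # any valid big-M bound
z = 2.0 * x + 0.5                                      # pre-activation

prob += y                      # objective: minimize the neuron's output
prob += y >= z                 # together with y >= 0 and the two big-M
prob += y <= z + M * (1 - d)   # constraints, this forces y = max(0, z)
prob += y <= M * d
prob.solve(pulp.PULP_CBC_CMD(msg=False))
print(pulp.value(x), pulp.value(y))   # expect y = 0 (neuron can be inactive)
```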
arXiv Detail & Related papers (2023-12-11T19:50:51Z)
- Robust Control for Dynamical Systems With Non-Gaussian Noise via Formal Abstractions [59.605246463200736]
We present a novel controller synthesis method that does not rely on any explicit representation of the noise distributions.
First, we abstract the continuous control system into a finite-state model that captures noise by probabilistic transitions between discrete states.
We use state-of-the-art verification techniques to provide guarantees on the interval Markov decision process and compute a controller for which these guarantees carry over to the original control system.
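A minimal sketch of the verification step on such an abstraction, assuming a small interval MDP given as lower/upper transition-probability arrays; robust value iteration of this kind is standard, and the toy numbers below are illustrative only.

```python
import numpy as np

def worst_case_expectation(v, lo, hi):
    """Nature picks p with lo <= p <= hi, sum(p) = 1, minimizing p @ v."""
    order = np.argsort(v)           # pour spare mass onto low-value states first
    p = lo.copy()
    spare = 1.0 - lo.sum()          # assumes sum(lo) <= 1 <= sum(hi)
    for i in order:
        take = min(hi[i] - lo[i], spare)
        p[i] += take
        spare -= take
    return float(p @ v)

def imdp_reach_lower_bound(lo, hi, goal, iters=100):
    """Lower bound on max probability of reaching `goal`; lo/hi: [s, a, s']."""
    n_s, n_a, _ = lo.shape
    v = goal.astype(float)
    for _ in range(iters):
        q = np.array([[worst_case_expectation(v, lo[s, a], hi[s, a])
                       for a in range(n_a)] for s in range(n_s)])
        v = np.maximum(goal, q.max(axis=1))  # controller maximizes, nature minimizes
    return v

# Toy 3-state IMDP with one action; state 2 is the goal (absorbing).
lo = np.array([[[0.1, 0.6, 0.1]], [[0.0, 0.2, 0.5]], [[0.0, 0.0, 1.0]]])
hi = np.array([[[0.3, 0.8, 0.3]], [[0.2, 0.4, 0.8]], [[0.0, 0.0, 1.0]]])
print(imdp_reach_lower_bound(lo, hi, goal=np.array([0, 0, 1])))
```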
arXiv Detail & Related papers (2023-01-04T10:40:30Z)
- Probabilities Are Not Enough: Formal Controller Synthesis for Stochastic Dynamical Models with Epistemic Uncertainty [68.00748155945047]
Capturing uncertainty in models of complex dynamical systems is crucial to designing safe controllers.
Several approaches use formal abstractions to synthesize policies that satisfy temporal specifications related to safety and reachability.
Our contribution is a novel abstraction-based controller synthesis method for continuous-state models with noise, uncertain parameters, and external disturbances.
arXiv Detail & Related papers (2022-10-12T07:57:03Z)
- Recursively Feasible Probabilistic Safe Online Learning with Control Barrier Functions [60.26921219698514]
We introduce a model-uncertainty-aware reformulation of CBF-based safety-critical controllers.
We then present the pointwise feasibility conditions of the resulting safety controller.
We use these conditions to devise an event-triggered online data collection strategy.
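For context, CBF-based safety controllers typically filter a nominal input through a quadratic program with a barrier constraint; below is a minimal sketch for a single constraint, where a closed-form projection stands in for a QP solver. The dynamics and barrier function are toy assumptions, not the paper's setup.

```python
import numpy as np

def cbf_filter(x, u_nom, h, grad_h, f, g, alpha=1.0):
    """min ||u - u_nom||^2  s.t.  grad_h(x)·(f(x) + g(x) u) + alpha*h(x) >= 0."""
    Lfh = grad_h(x) @ f(x)
    Lgh = grad_h(x) @ g(x)
    c = Lfh + Lgh @ u_nom + alpha * h(x)
    if c >= 0:                                # nominal input already safe
        return u_nom
    return u_nom - (c / (Lgh @ Lgh)) * Lgh    # project onto the constraint

# Toy example: stay outside the unit disc, i.e. h(x) = |x|^2 - 1 >= 0.
f = lambda x: np.zeros(2)                     # zero drift
g = lambda x: np.eye(2)                       # identity control matrix
h = lambda x: x @ x - 1.0
grad_h = lambda x: 2.0 * x
x = np.array([1.5, 0.0])
print(cbf_filter(x, np.array([-2.0, 0.0]), h, grad_h, f, g))
```

The unsafe nominal command toward the origin is scaled back just enough to keep the barrier condition satisfied with equality.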
arXiv Detail & Related papers (2022-08-23T05:02:09Z)
- Finite-time System Identification and Adaptive Control in Autoregressive Exogenous Systems [79.67879934935661]
We study the problem of system identification and adaptive control of unknown ARX systems.
We provide finite-time learning guarantees for the ARX systems under both open-loop and closed-loop data collection.
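A minimal sketch of the open-loop identification step, assuming a standard least-squares ARX fit; the model orders and simulated data are illustrative.

```python
import numpy as np

def fit_arx(u, y, na=2, nb=2):
    """Fit y[t] = sum a_i y[t-i] + sum b_j u[t-j] + e[t] via least squares."""
    T = len(y)
    start = max(na, nb)
    Phi = np.column_stack(
        [y[start - i:T - i] for i in range(1, na + 1)] +
        [u[start - j:T - j] for j in range(1, nb + 1)])
    theta, *_ = np.linalg.lstsq(Phi, y[start:], rcond=None)
    return theta[:na], theta[na:]            # AR and exogenous coefficients

# Simulate a known ARX system, then recover its coefficients.
rng = np.random.default_rng(0)
u = rng.normal(size=500)
y = np.zeros(500)
for t in range(2, 500):   # true parameters: a = (0.6, -0.2), b = (0.5, 0.1)
    y[t] = 0.6*y[t-1] - 0.2*y[t-2] + 0.5*u[t-1] + 0.1*u[t-2] + 0.01*rng.normal()
print(fit_arx(u, y))
```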
arXiv Detail & Related papers (2021-08-26T18:00:00Z)
- Probabilistic robust linear quadratic regulators with Gaussian processes [73.0364959221845]
Probabilistic models such as Gaussian processes (GPs) are powerful tools to learn unknown dynamical systems from data for subsequent use in control design.
We present a novel controller synthesis for linearized GP dynamics that yields robust controllers with respect to a probabilistic stability margin.
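A minimal sketch of the nominal part of that pipeline: fit a GP to one-step transition data, linearize its posterior mean, and compute an LQR gain. The probabilistic stability-margin machinery is omitted, and all data and names are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import solve_discrete_are
from sklearn.gaussian_process import GaussianProcessRegressor

# Learn scalar dynamics x+ = f(x, u) from data (true system is linear here).
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))                     # (x, u) samples
y = 0.9 * X[:, 0] + 0.5 * X[:, 1] + 0.01 * rng.normal(size=200)
gp = GaussianProcessRegressor(alpha=1e-4).fit(X, y)

def dmean(i, eps=1e-3):
    """Finite-difference derivative of the GP mean at the origin w.r.t. input i."""
    e = np.zeros((1, 2)); e[0, i] = eps
    return float((gp.predict(e) - gp.predict(-e))[0] / (2 * eps))

A, B = np.array([[dmean(0)]]), np.array([[dmean(1)]])     # linearized dynamics
Q, R = np.eye(1), np.eye(1)
P = solve_discrete_are(A, B, Q, R)
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)         # u = -K x
print(A, B, K)
```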
arXiv Detail & Related papers (2021-05-17T08:36:18Z)
- Formal Verification of Stochastic Systems with ReLU Neural Network Controllers [22.68044012584378]
We address the problem of formal safety verification for cyber-physical systems equipped with ReLU neural network (NN) controllers.
Our goal is to find the set of initial states from where, with a predetermined confidence, the system will not reach an unsafe configuration.
arXiv Detail & Related papers (2021-03-08T23:53:13Z)
- Safety Verification of Neural Network Controlled Systems [0.0]
We propose a system-level approach for verifying the safety of neural network controlled systems.
We assume a generic model for the controller that can capture both simple and complex behaviours.
We perform a reachability analysis that soundly approximates the reachable states of the overall system.
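One common way to soundly approximate reachable states is interval arithmetic pushed through both the network and the dynamics; below is a minimal sketch for a one-hidden-layer ReLU controller and a linear plant, with all weights and sets as toy assumptions.

```python
import numpy as np

def affine_bounds(lo, hi, W, b):
    """Sound interval bounds of W x + b for x in [lo, hi]."""
    Wp, Wn = np.maximum(W, 0), np.minimum(W, 0)
    return Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b

def reach_step(lo, hi, W1, b1, W2, b2, A, B):
    """Over-approximate {A x + B u(x) : x in [lo, hi]} for a 1-layer ReLU net."""
    zl, zh = affine_bounds(lo, hi, W1, b1)
    zl, zh = np.maximum(zl, 0.0), np.maximum(zh, 0.0)  # ReLU is monotone
    ul, uh = affine_bounds(zl, zh, W2, b2)             # controller output interval
    xl, xh = affine_bounds(lo, hi, A, np.zeros(A.shape[0]))
    bl, bh = affine_bounds(ul, uh, B, np.zeros(B.shape[0]))
    return xl + bl, xh + bh   # interval sum ignores x-u correlation, still sound

# Toy 2-state system with a 4-unit controller; initial set is a small box.
rng = np.random.default_rng(0)
A, B = np.array([[1.0, 0.1], [0.0, 1.0]]), np.array([[0.0], [0.1]])
W1, b1 = rng.normal(size=(4, 2)), np.zeros(4)
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)
lo, hi = np.array([-0.1, -0.1]), np.array([0.1, 0.1])
for _ in range(5):                                     # 5-step reachable tube
    lo, hi = reach_step(lo, hi, W1, b1, W2, b2, A, B)
print(lo, hi)
```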
arXiv Detail & Related papers (2020-11-10T15:26:38Z)
- Learning Stabilizing Controllers for Unstable Linear Quadratic Regulators from a Single Trajectory [85.29718245299341]
We study linear controllers under the quadratic cost model, also known as linear quadratic regulators (LQR).
We present two different semi-definite programs (SDPs), each of which yields a controller that stabilizes all systems within an ellipsoidal uncertainty set.
We propose an efficient data-dependent algorithm, eXploration, that with high probability quickly identifies a stabilizing controller.
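As a pointer to what such an SDP looks like for a single nominal (A, B) pair, below is the standard discrete-time stabilization LMI in cvxpy; extending the feasibility constraint to every system in the ellipsoidal uncertainty set, as the paper does, requires additional terms omitted here.

```python
import cvxpy as cp
import numpy as np

A = np.array([[1.2, 0.5], [0.0, 1.1]])      # unstable toy plant
B = np.array([[0.0], [1.0]])
n, m = B.shape

# Feasibility of this LMI certifies a stabilizing gain K = W Y^{-1}
# (Schur complement of the discrete-time Lyapunov inequality).
Y = cp.Variable((n, n), symmetric=True)
W = cp.Variable((m, n))
M = cp.bmat([[Y, (A @ Y + B @ W).T],
             [A @ Y + B @ W, Y]])
prob = cp.Problem(cp.Minimize(0),
                  [Y >> 1e-6 * np.eye(n), M >> 1e-6 * np.eye(2 * n)])
prob.solve()
K = W.value @ np.linalg.inv(Y.value)         # u = K x stabilizes A + B K
print(np.abs(np.linalg.eigvals(A + B @ K)))  # all eigenvalue magnitudes < 1
```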
arXiv Detail & Related papers (2020-06-19T08:58:57Z)