Correct-by-Construction Control for Stochastic and Uncertain Dynamical
Models via Formal Abstractions
- URL: http://arxiv.org/abs/2311.09786v1
- Date: Thu, 16 Nov 2023 11:03:54 GMT
- Title: Correct-by-Construction Control for Stochastic and Uncertain Dynamical
Models via Formal Abstractions
- Authors: Thom Badings (Radboud University), Nils Jansen (Radboud University),
Licio Romao (University of Oxford), Alessandro Abate (University of Oxford)
- Abstract summary: We develop an abstraction framework that can be used to solve this problem under various modeling assumptions.
We use state-of-the-art verification techniques to compute an optimal policy on the iMDP with guarantees for satisfying the given specification.
We then show that, by construction, we can refine this policy into a feedback controller for which these guarantees carry over to the dynamical model.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Automated synthesis of correct-by-construction controllers for autonomous
systems is crucial for their deployment in safety-critical scenarios. Such
autonomous systems are naturally modeled as stochastic dynamical models. The
general problem is to compute a controller that provably satisfies a given
task, represented as a probabilistic temporal logic specification. However,
factors such as stochastic uncertainty, imprecisely known parameters, and
hybrid features make this problem challenging. We have developed an abstraction
framework that can be used to solve this problem under various modeling
assumptions. Our approach is based on a robust finite-state abstraction of the
stochastic dynamical model in the form of a Markov decision process with
intervals of probabilities (iMDP). We use state-of-the-art verification
techniques to compute an optimal policy on the iMDP with guarantees for
satisfying the given specification. We then show that, by construction, we can
refine this policy into a feedback controller for which these guarantees carry
over to the dynamical model. In this short paper, we survey our recent research
in this area and highlight two challenges (related to scalability and dealing
with nonlinear dynamics) that we aim to address with our ongoing research.
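The core step of the pipeline can be illustrated with a toy robust value iteration on an iMDP. This is a minimal sketch under assumed conventions, not the authors' implementation: the function names and the three-state model below are hypothetical. The adversary resolves each probability interval so as to minimise the reachability probability, and the policy maximises against that worst case, which yields a lower bound that carries over to any distribution consistent with the intervals.

```python
def worst_case_expectation(values, lows, highs):
    """Adversarial expectation: pick a distribution inside the probability
    intervals (summing to 1) that minimises sum(p * v)."""
    p = list(lows)
    remaining = 1.0 - sum(lows)
    # Greedily assign the leftover mass to the lowest-value successors first.
    for i in sorted(range(len(values)), key=lambda i: values[i]):
        extra = min(remaining, highs[i] - lows[i])
        p[i] += extra
        remaining -= extra
    return sum(pi * vi for pi, vi in zip(p, values))

def robust_value_iteration(transitions, goal, n_states, iters=100):
    """Lower bound on the probability of reaching `goal` under the best
    policy and the worst resolution of the probability intervals."""
    V = [1.0 if s in goal else 0.0 for s in range(n_states)]
    for _ in range(iters):
        V = [V[s] if s in goal else
             max((worst_case_expectation([V[t] for t in succs], lows, highs)
                  for succs, lows, highs in transitions.get(s, {}).values()),
                 default=0.0)
             for s in range(n_states)]
    return V

# Toy iMDP: from state 0, action 'a' reaches state 1 (a dead end) or the
# goal state 2, with interval probabilities rather than point values.
transitions = {0: {'a': ([1, 2], [0.2, 0.3], [0.7, 0.8])}}
V = robust_value_iteration(transitions, goal={2}, n_states=3)
# Worst case: the adversary pushes mass towards the dead end, so the
# guaranteed reachability probability from state 0 is 0.3.
```

The greedy inner minimisation is the standard trick for interval ambiguity sets: because the objective is linear in the distribution, the worst case is attained at a vertex reachable by sorting successors by value.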
Related papers
- Robust Control for Dynamical Systems With Non-Gaussian Noise via Formal
Abstractions
We present a novel controller synthesis method that does not rely on any explicit representation of the noise distributions.
First, we abstract the continuous control system into a finite-state model that captures noise by probabilistic transitions between discrete states.
We use state-of-the-art verification techniques to provide guarantees on the interval Markov decision process and compute a controller for which these guarantees carry over to the original control system.
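One simple way to obtain such probabilistic transitions with guarantees, sketched here as a hypothetical illustration, is to sample the noise and wrap the empirical frequencies in PAC intervals via Hoeffding's inequality; the paper itself uses tighter scenario-optimisation bounds, and the function and region names below are assumptions.

```python
import math

def transition_intervals(samples, regions, confidence=0.99):
    """Hoeffding-style PAC interval on the probability of landing in each
    abstract region, from i.i.d. samples of the successor state.
    `regions` is a list of predicates over sampled successors."""
    n = len(samples)
    # With probability >= `confidence`, the true probability of a region
    # lies within eps of its empirical frequency.
    eps = math.sqrt(math.log(2.0 / (1.0 - confidence)) / (2.0 * n))
    intervals = []
    for in_region in regions:
        freq = sum(bool(in_region(x)) for x in samples) / n
        intervals.append((max(0.0, freq - eps), min(1.0, freq + eps)))
    return intervals

# Example: 1000 successor samples, one abstract region covering x < 0.5.
samples = [i / 1000 for i in range(1000)]
lo, hi = transition_intervals(samples, [lambda x: x < 0.5])[0]
```

Each interval holds with the stated confidence individually; holding all intervals simultaneously would additionally require a union bound over regions.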
arXiv Detail & Related papers (2023-01-04T10:40:30Z)
- Formal Controller Synthesis for Markov Jump Linear Systems with Uncertain
Dynamics
We propose a method for synthesising controllers for Markov jump linear systems.
Our method is based on a finite-state abstraction that captures both the discrete (mode-jumping) and continuous (stochastic linear) behaviour of the MJLS.
We apply our method to multiple realistic benchmark problems, in particular, a temperature control and an aerial vehicle delivery problem.
arXiv Detail & Related papers (2022-12-01T17:36:30Z)
- Reduced order modeling of parametrized systems through autoencoders and
SINDy approach: continuation of periodic solutions
This work presents a data-driven, non-intrusive framework which combines ROM construction with reduced dynamics identification.
The proposed approach leverages autoencoder neural networks with parametric sparse identification of nonlinear dynamics (SINDy) to construct a low-dimensional dynamical model.
These aim to track the evolution of periodic steady-state responses as functions of system parameters, avoiding computation of the transient phase and allowing instabilities and bifurcations to be detected.
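As a rough sketch of the SINDy component (not the paper's autoencoder pipeline), sparse regression with sequential thresholding can recover the governing terms of a dynamical model from a library of candidate functions. The function name, library, and data below are illustrative assumptions.

```python
import numpy as np

def sindy_1d(x, dx, library, threshold=0.05, iters=10):
    """Sequentially-thresholded least squares: fit dx ~ Theta(x) @ xi,
    repeatedly zeroing coefficients below `threshold` and refitting
    on the surviving library columns."""
    Theta = np.column_stack([f(x) for f in library])
    xi = np.linalg.lstsq(Theta, dx, rcond=None)[0]
    for _ in range(iters):
        big = np.abs(xi) >= threshold
        xi[~big] = 0.0
        if big.any():
            xi[big] = np.linalg.lstsq(Theta[:, big], dx, rcond=None)[0]
    return xi

# Noise-free data from dx/dt = -1.5 x + 0.5 x^3; the sparse fit should
# recover exactly those two library terms and zero out the rest.
x = np.linspace(-2.0, 2.0, 200)
dx = -1.5 * x + 0.5 * x ** 3
xi = sindy_1d(x, dx, [np.ones_like, lambda x: x,
                      lambda x: x ** 2, lambda x: x ** 3])
```

The thresholding loop is what makes the identified model sparse and hence interpretable, which is the property the reduced-order framework relies on.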
arXiv Detail & Related papers (2022-11-13T01:57:18Z)
- Probabilities Are Not Enough: Formal Controller Synthesis for Stochastic
Dynamical Models with Epistemic Uncertainty
Capturing uncertainty in models of complex dynamical systems is crucial to designing safe controllers.
Several approaches use formal abstractions to synthesize policies that satisfy temporal specifications related to safety and reachability.
Our contribution is a novel abstraction-based controller method for continuous-state models with noise, uncertain parameters, and external disturbances.
arXiv Detail & Related papers (2022-10-12T07:57:03Z)
- Sample Complexity of Robust Reinforcement Learning with a Generative
Model
We propose a model-based reinforcement learning (RL) algorithm for learning an $\epsilon$-optimal robust policy.
We consider three different forms of uncertainty sets, characterized by the total variation distance, chi-square divergence, and KL divergence.
In addition to the sample complexity results, we also present a formal analytical argument on the benefit of using robust policies.
arXiv Detail & Related papers (2021-12-02T18:55:51Z)
- Adversarial Robustness Verification and Attack Synthesis in Stochastic
Systems
We develop a formal framework for adversarial robustness in systems defined as discrete-time Markov chains (DTMCs).
We outline a class of threat models under which adversaries can perturb system transitions, constrained by an $\varepsilon$-ball around the original transition probabilities.
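To make this threat model concrete, here is a hypothetical sketch: reachability in a small DTMC, recomputed after an adversary shifts probability mass away from the goal within an $\varepsilon$-ball ($\varepsilon = 0.05$ per entry). The toy matrix is an assumption for illustration, not taken from the paper.

```python
def reach_prob(P, goal, start, iters=200):
    """Probability of eventually reaching `goal` from `start` in a DTMC,
    via fixed-point iteration of the reachability equations."""
    n = len(P)
    v = [1.0 if s in goal else 0.0 for s in range(n)]
    for _ in range(iters):
        v = [v[s] if s in goal else sum(P[s][t] * v[t] for t in range(n))
             for s in range(n)]
    return v[start]

# Toy chain: state 0 is transient, state 1 is an absorbing failure,
# state 2 is the goal.
P = [[0.5, 0.2, 0.3],
     [0.0, 1.0, 0.0],
     [0.0, 0.0, 1.0]]
nominal = reach_prob(P, goal={2}, start=0)        # fixed point of v = 0.5v + 0.3

# Adversarial perturbation inside the eps-ball: move 0.05 mass from the
# goal transition to the failure transition in the transient state's row.
P_adv = [[0.5, 0.25, 0.25],
         [0.0, 1.0, 0.0],
         [0.0, 0.0, 1.0]]
perturbed = reach_prob(P_adv, goal={2}, start=0)  # fixed point of v = 0.5v + 0.25
```

Even this small perturbation drops the reachability probability from 0.6 to 0.5, which is the kind of gap such verification frameworks are designed to expose.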
arXiv Detail & Related papers (2021-10-05T15:52:47Z)
- Probabilistic robust linear quadratic regulators with Gaussian processes
Probabilistic models such as Gaussian processes (GPs) are powerful tools to learn unknown dynamical systems from data for subsequent use in control design.
We present a novel controller synthesis for linearized GP dynamics that yields robust controllers with respect to a probabilistic stability margin.
arXiv Detail & Related papers (2021-05-17T08:36:18Z)
- Gaussian Process-based Min-norm Stabilizing Controller for
Control-Affine Systems with Uncertain Input Effects and Dynamics
We propose a novel compound kernel that captures the control-affine nature of the problem.
We show that the resulting optimization problem is convex, and we call it the Gaussian Process-based Control Lyapunov Function Second-Order Cone Program (GP-CLF-SOCP).
arXiv Detail & Related papers (2020-11-14T01:27:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.