Learning Safety Filters for Unknown Discrete-Time Linear Systems
- URL: http://arxiv.org/abs/2111.00631v2
- Date: Mon, 8 May 2023 04:51:22 GMT
- Title: Learning Safety Filters for Unknown Discrete-Time Linear Systems
- Authors: Farhad Farokhi, Alex S. Leong, Mohammad Zamani, Iman Shames
- Abstract summary: Safety is characterized using polytopic constraints on the states and control inputs.
The empirically learned model and process noise covariance with their confidence bounds are used to construct a robust optimization problem for minimally modifying nominal control actions to ensure safety with high probability.
- Score: 11.533793543850384
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A learning-based safety filter is developed for discrete-time linear
time-invariant systems with unknown models subject to Gaussian noise with
unknown covariance. Safety is characterized using polytopic constraints on the
states and control inputs. The empirically learned model and process noise
covariance with their confidence bounds are used to construct a robust
optimization problem for minimally modifying nominal control actions to ensure
safety with high probability. The optimization problem relies on tightening the
original safety constraints. The magnitude of the tightening is larger at the
beginning since there is little information to construct reliable models, but
shrinks with time as more data becomes available.
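
The filtering step the abstract describes reduces, at each time step, to a small quadratic program: perturb the nominal input as little as possible so that the state predicted by the learned model lands in a tightened version of the safety polytope. Below is a minimal sketch of that idea, not the authors' exact formulation; the model (A_hat, B_hat), the polytope (H, h), and the scalar `tighten` standing in for the confidence-bound-based tightening are all hypothetical stand-ins.

```python
# Minimal safety-filter sketch (illustrative, not the paper's construction).
import numpy as np
import cvxpy as cp

def safety_filter(x, u_nom, A_hat, B_hat, H, h, tighten):
    """Minimally modify u_nom so the predicted next state satisfies the
    tightened polytope H x <= h - tighten (elementwise)."""
    u = cp.Variable(u_nom.shape[0])
    x_next = A_hat @ x + B_hat @ u                      # one-step prediction
    objective = cp.Minimize(cp.sum_squares(u - u_nom))  # minimal modification
    cp.Problem(objective, [H @ x_next <= h - tighten]).solve()
    return u.value

# Usage: the tightening is large early on (little data, loose confidence
# bounds) and shrinks toward zero as more data becomes available.
A_hat = np.array([[1.0, 0.1], [0.0, 1.0]])
B_hat = np.array([[0.0], [0.1]])
H = np.vstack([np.eye(2), -np.eye(2)])   # box constraint |x_i| <= 1
h = np.ones(4)
u_safe = safety_filter(np.array([0.8, 0.2]), np.array([1.0]),
                       A_hat, B_hat, H, h, tighten=0.2)
```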
Related papers
- Safe and Stable Closed-Loop Learning for Neural-Network-Supported Model Predictive Control [0.0]
We consider safe learning of parametrized predictive controllers that operate with incomplete information about the underlying process.
Our method focuses on the system's overall long-term performance in closed-loop while keeping it safe and stable.
We explicitly incorporate stability information into the Bayesian-optimization-based learning procedure, thereby achieving rigorous probabilistic safety guarantees.
arXiv Detail & Related papers (2024-09-16T11:03:58Z) - Adaptive Robust Model Predictive Control via Uncertainty Cancellation [25.736296938185074]
We propose a learning-based robust predictive control algorithm that compensates for significant uncertainty in the dynamics.
We optimize over a class of nonlinear feedback policies inspired by certainty equivalent "estimate-and-cancel" control laws.
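
As a rough illustration of the "estimate-and-cancel" idea (the paper's policy class is a richer family of nonlinear feedback laws, and all names below are hypothetical): estimate the one-step residual the nominal model fails to explain, then subtract its predicted effect through the input.

```python
# Toy certainty-equivalent "estimate-and-cancel" feedback (illustrative only).
import numpy as np

def residual(x_next, x, u, A_nom, B_nom):
    # the part of the last transition the nominal model did not explain
    return x_next - (A_nom @ x + B_nom @ u)

def estimate_and_cancel(x, K, d_hat, B_nom):
    # stabilizing feedback minus the estimated disturbance, mapped into
    # the input space through a pseudo-inverse of the input matrix
    return -K @ x - np.linalg.pinv(B_nom) @ d_hat
```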
arXiv Detail & Related papers (2022-12-02T18:54:23Z) - Probabilities Are Not Enough: Formal Controller Synthesis for Stochastic Dynamical Models with Epistemic Uncertainty [68.00748155945047]
Capturing uncertainty in models of complex dynamical systems is crucial to designing safe controllers.
Several approaches use formal abstractions to synthesize policies that satisfy temporal specifications related to safety and reachability.
Our contribution is a novel abstraction-based controller method for continuous-state models with noise, uncertain parameters, and external disturbances.
arXiv Detail & Related papers (2022-10-12T07:57:03Z) - Meta-Learning Priors for Safe Bayesian Optimization [72.8349503901712]
We build on a meta-learning algorithm, F-PACOH, capable of providing reliable uncertainty quantification in settings of data scarcity.
As a core contribution, we develop a novel framework for choosing safety-compliant priors in a data-driven manner.
On benchmark functions and a high-precision motion system, we demonstrate that our meta-learned priors accelerate the convergence of safe BO approaches.
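
A generic "screen for safety first, then optimize" step is the skeleton such a meta-learned prior would plug into. The sketch below uses a default GP kernel, not the F-PACOH prior, and all names are illustrative.

```python
# Generic safe-BO candidate screening: keep only points whose GP lower
# confidence bound certifies the safety constraint. The meta-learned
# prior from the paper would replace the default kernel/prior here.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def screen_safe(X_obs, y_safety, X_cand, threshold, beta=2.0):
    gp = GaussianProcessRegressor().fit(X_obs, y_safety)
    mu, sd = gp.predict(X_cand, return_std=True)
    return X_cand[mu - beta * sd >= threshold]  # high-probability safe set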
arXiv Detail & Related papers (2022-10-03T08:38:38Z) - Log Barriers for Safe Black-box Optimization with Application to Safe Reinforcement Learning [72.97229770329214]
We introduce a general approach for solving high-dimensional non-linear optimization problems in which maintaining safety during learning is crucial.
Our approach, called LBSGD, is based on applying a logarithmic barrier approximation with a carefully chosen step size.
We demonstrate the effectiveness of our approach on minimizing constraint violations in policy tasks in safe reinforcement learning.
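
The core update is gradient descent on a log-barrier surrogate with a step size that shrinks near the constraint boundary. A bare-bones sketch follows; the paper's adaptive step-size rule is more careful than the heuristic cap used here.

```python
# Bare-bones log-barrier gradient step in the spirit of LB-SGD (sketch).
import numpy as np

def lb_sgd_step(x, grad_f, g, grad_g, eta=0.1, t=10.0):
    """One descent step on f(x) + (1/t) * sum_i -log(-g_i(x)),
    valid while all constraints satisfy g_i(x) < 0."""
    gx = g(x)                                    # constraint values, all negative
    barrier = grad_g(x).T @ (1.0 / -gx) / t      # gradient of the barrier term
    step = grad_f(x) + barrier
    # heuristic cap: shrink the step as the iterate nears the boundary
    eta_safe = min(eta, 0.5 * np.min(-gx) / (np.linalg.norm(step) + 1e-12))
    return x - eta_safe * step
```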
arXiv Detail & Related papers (2022-07-21T11:14:47Z) - Reinforcement Learning Policies in Continuous-Time Linear Systems [0.0]
We present online policies that learn optimal actions fast by carefully randomizing the parameter estimates.
We prove sharp stability results for inexact system dynamics and tightly specify the infinitesimal regret caused by sub-optimal actions.
Our analysis sheds light on fundamental challenges in continuous-time reinforcement learning and suggests a useful cornerstone for similar problems.
arXiv Detail & Related papers (2021-09-16T00:08:50Z) - Probabilistic robust linear quadratic regulators with Gaussian processes [73.0364959221845]
Probabilistic models such as Gaussian processes (GPs) are powerful tools to learn unknown dynamical systems from data for subsequent use in control design.
We present a novel controller synthesis for linearized GP dynamics that yields robust controllers with respect to a probabilistic stability margin.
arXiv Detail & Related papers (2021-05-17T08:36:18Z) - Safe Learning of Uncertain Environments for Nonlinear Control-Affine Systems [10.918870296899245]
We consider the problem of safe learning in nonlinear control-affine systems subject to unknown additive uncertainty.
We model uncertainty as a Gaussian signal and use state measurements to learn its mean and covariance bounds.
We show that with an arbitrarily large probability we can guarantee that the state will remain in the safe set, while learning and control are carried out simultaneously.
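
The estimation step can be pictured as follows: collect one-step residuals as samples of the Gaussian uncertainty, form their empirical mean and covariance, and attach a confidence radius that decays as more samples arrive. The rate below is only indicative; the paper derives its own bounds.

```python
# Sketch of estimating Gaussian-uncertainty statistics from residuals.
import numpy as np

def uncertainty_bounds(residuals, delta=0.05):
    """Empirical mean/covariance of the residuals plus an indicative
    O(sqrt(log(1/delta)/n)) confidence radius (not the paper's bound)."""
    n, _ = residuals.shape
    mu_hat = residuals.mean(axis=0)
    sigma_hat = np.cov(residuals, rowvar=False)
    radius = np.sqrt(2.0 * np.log(2.0 / delta) / n)
    return mu_hat, sigma_hat, radius
```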
arXiv Detail & Related papers (2021-03-02T01:58:02Z) - Closing the Closed-Loop Distribution Shift in Safe Imitation Learning [80.05727171757454]
We treat safe optimization-based control strategies as experts in an imitation learning problem.
We train a learned policy that can be cheaply evaluated at run-time and that provably satisfies the same safety guarantees as the expert.
arXiv Detail & Related papers (2021-02-18T05:11:41Z) - Chance-Constrained Trajectory Optimization for Safe Exploration and Learning of Nonlinear Systems [81.7983463275447]
Learning-based control algorithms require data collection with abundant supervision for training.
We present a new approach for optimal motion planning with safe exploration that integrates chance-constrained optimal control with dynamics learning and feedback control.
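
Chance constraints on Gaussian-distributed states are typically converted to deterministic tightened constraints through the Gaussian quantile; this standard reformulation is the building block such approaches combine with dynamics learning.

```python
# Standard chance-constraint tightening for a Gaussian state.
import numpy as np
from scipy.stats import norm

def tightened_halfspace(a, b, Sigma, eps=0.05):
    """Return b' such that enforcing a @ mu <= b' on the mean guarantees
    Pr(a @ x <= b) >= 1 - eps when x ~ N(mu, Sigma)."""
    return b - norm.ppf(1.0 - eps) * np.sqrt(a @ Sigma @ a)
```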
arXiv Detail & Related papers (2020-05-09T05:57:43Z)