Control Barrier Functions for Unknown Nonlinear Systems using Gaussian Processes
- URL: http://arxiv.org/abs/2010.05818v1
- Date: Mon, 12 Oct 2020 16:12:52 GMT
- Title: Control Barrier Functions for Unknown Nonlinear Systems using Gaussian Processes
- Authors: Pushpak Jagtap, George J. Pappas, Majid Zamani
- Abstract summary: This paper focuses on the controller synthesis for unknown, nonlinear systems while ensuring safety constraints.
In the learning step, we use a data-driven approach to learn the unknown control affine nonlinear dynamics together with a statistical bound on the accuracy of the learned model.
In the second controller synthesis step, we develop a systematic approach to compute control barrier functions that explicitly take into consideration the uncertainty of the learned model.
- Score: 17.870440210358847
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper focuses on the controller synthesis for unknown, nonlinear systems
while ensuring safety constraints. Our approach consists of two steps, a
learning step that uses Gaussian processes and a controller synthesis step that
is based on control barrier functions. In the learning step, we use a
data-driven approach utilizing Gaussian processes to learn the unknown control
affine nonlinear dynamics together with a statistical bound on the accuracy of
the learned model. In the second controller synthesis step, we develop a
systematic approach to compute control barrier functions that explicitly take
into consideration the uncertainty of the learned model. The control barrier
function not only results in a safe controller by construction but also
provides a rigorous lower bound on the probability of satisfaction of the
safety specification. Finally, we illustrate the effectiveness of the proposed
results by synthesizing a safety controller for a jet engine example.
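As a rough, hedged illustration of the two-step recipe above (Gaussian-process learning followed by an uncertainty-aware barrier condition), the sketch below fits a GP to sampled transitions of an unknown one-dimensional control-affine system and then filters a nominal input through a discrete-time barrier inequality tightened by the GP confidence bound. The toy dynamics, the barrier h(x) = 1 - x^2, the confidence scaling beta, and the grid search over inputs are assumptions made for this example, not the paper's construction or its jet engine case study.

```python
# Illustrative sketch only (assumed toy system, not the paper's construction):
# Step 1 learns unknown 1-D dynamics x_next = F(x, u) with a Gaussian process;
# Step 2 filters a nominal input through a discrete-time barrier condition
# tightened by the GP's beta-sigma confidence bound.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

def true_dynamics(x, u):
    # Unknown to the controller; used only to generate data and to simulate.
    return x + 0.1 * (-x + 0.5 * np.sin(x) + u)

# --- Step 1: data-driven learning of the unknown dynamics -------------------
X = rng.uniform(-1.0, 1.0, size=(200, 2))                 # columns: state x, input u
y = true_dynamics(X[:, 0], X[:, 1]) + 0.01 * rng.standard_normal(200)
gp = GaussianProcessRegressor(RBF(length_scale=1.0) + WhiteKernel(1e-4),
                              normalize_y=True).fit(X, y)

# --- Step 2: barrier-based filter that accounts for model uncertainty -------
def h(x):
    return 1.0 - x ** 2                                    # safe set: |x| <= 1

def safe_input(x, u_nom, beta=2.0, gamma=0.1):
    """Return the input closest to u_nom whose predicted successor state,
    in the worst case within a beta-sigma band, still satisfies the
    discrete-time barrier condition h(x_next) >= (1 - gamma) * h(x)."""
    candidates = np.linspace(-1.0, 1.0, 201)
    mean, std = gp.predict(np.column_stack([np.full_like(candidates, x),
                                            candidates]), return_std=True)
    # h is concave, so the worst case over the confidence band is at an endpoint.
    worst_h = np.minimum(h(mean - beta * std), h(mean + beta * std))
    feasible = worst_h >= (1.0 - gamma) * h(x)
    if not feasible.any():                                 # no certified input found
        return u_nom
    return candidates[feasible][np.argmin(np.abs(candidates[feasible] - u_nom))]

x = 0.9
for _ in range(50):
    u = safe_input(x, u_nom=0.8)                           # aggressive nominal input
    x = true_dynamics(x, u)
print("final state (stays inside |x| <= 1):", x)
```

Widening beta enlarges the certified margin at the cost of conservatism, loosely mirroring the role of the statistical accuracy bound on the learned model in the paper.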
Related papers
- Learning to Boost the Performance of Stable Nonlinear Systems [0.0]
We tackle the performance-boosting problem with closed-loop stability guarantees.
Our methods enable learning over arbitrarily deep neural network classes of performance-boosting controllers for stable nonlinear systems.
arXiv Detail & Related papers (2024-05-01T21:11:29Z)
- Real-Time Adaptive Safety-Critical Control with Gaussian Processes in High-Order Uncertain Models [14.790031018404942]
This paper presents an adaptive online learning framework for systems with uncertain parameters.
We first integrate a forgetting factor to refine a variational sparse GP algorithm.
In the second phase, we propose a safety filter based on high-order control barrier functions.
arXiv Detail & Related papers (2024-02-29T08:25:32Z)
- Recursively Feasible Probabilistic Safe Online Learning with Control Barrier Functions [60.26921219698514]
We introduce a model-uncertainty-aware reformulation of CBF-based safety-critical controllers.
We then present the pointwise feasibility conditions of the resulting safety controller.
We use these conditions to devise an event-triggered online data collection strategy.
arXiv Detail & Related papers (2022-08-23T05:02:09Z)
- Sample-efficient Safe Learning for Online Nonlinear Control with Control Barrier Functions [35.9713619595494]
Reinforcement Learning and continuous nonlinear control have been successfully deployed in multiple domains of complicated sequential decision-making tasks.
Given the exploratory nature of the learning process and the presence of model uncertainty, it is challenging to apply them to safety-critical control tasks.
We propose a provably efficient episodic safe learning framework for online control tasks.
arXiv Detail & Related papers (2022-07-29T00:54:35Z)
- Log Barriers for Safe Black-box Optimization with Application to Safe Reinforcement Learning [72.97229770329214]
We introduce a general approach to high-dimensional non-linear optimization problems in which maintaining safety during learning is crucial.
Our approach called LBSGD is based on applying a logarithmic barrier approximation with a carefully chosen step size.
We demonstrate the effectiveness of our approach on minimizing constraint violation in policy optimization tasks in safe reinforcement learning (a minimal log-barrier sketch appears after this list).
arXiv Detail & Related papers (2022-07-21T11:14:47Z)
- Verification of safety critical control policies using kernel methods [0.0]
We propose a framework for modeling the error of the value function inherent in Hamilton-Jacobi reachability using a Gaussian process.
The derived safety controller can be used in conjunction with arbitrary controllers to provide a safe hybrid control law.
arXiv Detail & Related papers (2022-03-23T13:33:02Z)
- Probabilistic robust linear quadratic regulators with Gaussian processes [73.0364959221845]
Probabilistic models such as Gaussian processes (GPs) are powerful tools to learn unknown dynamical systems from data for subsequent use in control design.
We present a novel controller synthesis for linearized GP dynamics that yields robust controllers with respect to a probabilistic stability margin.
arXiv Detail & Related papers (2021-05-17T08:36:18Z)
- Learning Hybrid Control Barrier Functions from Data [66.37785052099423]
Motivated by the lack of systematic tools to obtain safe control laws for hybrid systems, we propose an optimization-based framework for learning certifiably safe control laws from data.
In particular, we assume a setting in which the system dynamics are known and in which data exhibiting safe system behavior is available.
arXiv Detail & Related papers (2020-11-08T23:55:02Z)
- Chance-Constrained Trajectory Optimization for Safe Exploration and Learning of Nonlinear Systems [81.7983463275447]
Learning-based control algorithms require data collection with abundant supervision for training.
We present a new approach for optimal motion planning with safe exploration that integrates chance-constrained optimal control with dynamics learning and feedback control.
arXiv Detail & Related papers (2020-05-09T05:57:43Z)
- Learning Constrained Adaptive Differentiable Predictive Control Policies With Guarantees [1.1086440815804224]
We present differentiable predictive control (DPC), a method for learning constrained neural control policies for linear systems.
We employ automatic differentiation to obtain direct policy gradients by backpropagating the model predictive control (MPC) loss function and constraints penalties through a differentiable closed-loop system dynamics model.
arXiv Detail & Related papers (2020-04-23T14:24:44Z)
- Learning Control Barrier Functions from Expert Demonstrations [69.23675822701357]
We propose a learning-based approach to safe controller synthesis based on control barrier functions (CBFs).
We analyze an optimization-based approach to learning a CBF that enjoys provable safety guarantees under suitable Lipschitz assumptions on the underlying dynamical system.
To the best of our knowledge, these are the first results that learn provably safe control barrier functions from data.
arXiv Detail & Related papers (2020-04-07T12:29:06Z)
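The entries above compress each method into a line or two. As one hedged example, the log-barrier idea referenced in the "Log Barriers for Safe Black-box Optimization" entry can be sketched as a gradient step on a barrier-augmented objective with the step size capped by the remaining safety margin. The toy objective, constraint, and step-size rule below are illustrative assumptions, not the authors' LB-SGD algorithm.

```python
# Toy log-barrier descent sketch (illustrative assumptions, not the authors'
# LB-SGD algorithm): minimize f(x) subject to g(x) <= 0 by descending
# f(x) - eta * log(-g(x)) with a step size capped by the remaining margin,
# so every iterate stays strictly feasible.

def f(x):                       # assumed toy objective
    return (x - 2.0) ** 2

def g(x):                       # assumed safety constraint: stay in x <= 1
    return x - 1.0

def grad(fun, x, eps=1e-6):     # central finite difference (black-box setting)
    return (fun(x + eps) - fun(x - eps)) / (2.0 * eps)

def lb_step(x, eta=0.1, lip=10.0):
    """One step on the barrier surrogate; the step size shrinks near the
    boundary so the iterate cannot cross g(x) = 0."""
    d = grad(f, x) - eta * grad(g, x) / g(x)        # gradient of f - eta*log(-g)
    margin = -g(x)                                   # strictly positive while feasible
    step = min(1.0 / lip,
               0.5 * margin / ((abs(grad(g, x)) + 1e-12) * (abs(d) + 1e-12)))
    return x - step * d

x = 0.0                                              # strictly feasible start
for _ in range(200):
    x = lb_step(x)
print("iterate:", x, "constraint value:", g(x))      # stays feasible, near x = 1
```

Capping the step by the remaining margin is what keeps every iterate strictly feasible; the barrier weight eta controls how closely the iterates approach the constraint boundary.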