Learning Hybrid Control Barrier Functions from Data
- URL: http://arxiv.org/abs/2011.04112v1
- Date: Sun, 8 Nov 2020 23:55:02 GMT
- Title: Learning Hybrid Control Barrier Functions from Data
- Authors: Lars Lindemann, Haimin Hu, Alexander Robey, Hanwen Zhang, Dimos V.
Dimarogonas, Stephen Tu, and Nikolai Matni
- Abstract summary: Motivated by the lack of systematic tools to obtain safe control laws for hybrid systems, we propose an optimization-based framework for learning certifiably safe control laws from data.
In particular, we assume a setting in which the system dynamics are known and in which data exhibiting safe system behavior is available.
- Score: 66.37785052099423
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Motivated by the lack of systematic tools to obtain safe control laws for
hybrid systems, we propose an optimization-based framework for learning
certifiably safe control laws from data. In particular, we assume a setting in
which the system dynamics are known and in which data exhibiting safe system
behavior is available. We propose hybrid control barrier functions for hybrid
systems as a means to synthesize safe control inputs. Based on this notion, we
present an optimization-based framework to learn such hybrid control barrier
functions from data. Importantly, we identify sufficient conditions on the data
such that feasibility of the optimization problem ensures correctness of the
learned hybrid control barrier functions, and hence the safety of the system.
We illustrate our findings in two simulation studies, including a compass gait
walker.
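As a rough illustration of the setting the abstract describes (known control-affine dynamics plus data exhibiting safe behavior), the sketch below fits a candidate barrier function that is linear in a fixed feature map by solving a convex program over sampled safe and unsafe states. The toy dynamics, feature map, margins, gain, and synthetic data are assumptions made for illustration, not taken from the paper; a hybrid system would add analogous constraints on samples from the guard/jump sets.

```python
# Minimal sketch (not the authors' implementation): fit a candidate CBF
# h(x) = theta^T phi(x) from sampled safe states/inputs by solving a convex
# program.  Dynamics, features, margins, and data are illustrative assumptions.
import numpy as np
import cvxpy as cp

# Toy control-affine flow xdot = f(x) + g(x) u, assumed known as in the paper's setting.
f = lambda x: np.array([x[1], -x[0]])
g = lambda x: np.array([0.0, 1.0])

# Quadratic features phi(x) and their Jacobian dphi(x).
def phi(x):
    return np.array([1.0, x[0], x[1], x[0]**2, x[0]*x[1], x[1]**2])

def dphi(x):  # shape (6, 2)
    return np.array([[0, 0], [1, 0], [0, 1],
                     [2*x[0], 0], [x[1], x[0]], [0, 2*x[1]]])

# Synthetic "safe demonstration" data inside a small box, plus clearly unsafe states.
rng = np.random.default_rng(0)
X_safe = rng.uniform(-0.7, 0.7, size=(200, 2))
U_safe = rng.uniform(-1.0, 1.0, size=200)
X_unsafe = rng.uniform(1.5, 2.5, size=(50, 2)) * rng.choice([-1.0, 1.0], size=(50, 2))

theta = cp.Variable(6)
alpha, eps = 1.0, 0.05          # linear class-K gain and margin (assumptions)
constraints = []
for x, u in zip(X_safe, U_safe):
    h = phi(x) @ theta
    hdot = (dphi(x).T @ theta) @ (f(x) + g(x) * u)     # Lie derivative along the demo input
    constraints += [h >= eps, hdot + alpha * h >= eps]  # positivity + flow condition
for x in X_unsafe:
    constraints += [phi(x) @ theta <= -eps]             # negative outside the safe set
# A hybrid CBF would additionally require h to stay nonnegative across jumps,
# e.g. constraints involving h(Delta(x)) for sampled guard states x.

prob = cp.Problem(cp.Minimize(cp.sum_squares(theta)), constraints)
prob.solve()
print("status:", prob.status, "theta:", np.round(theta.value, 3))
```

Feasibility of such a program on a sufficiently rich sample is the kind of condition the paper ties to correctness of the learned barrier function, and hence to safety of the closed-loop system.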
Related papers
- Data-Driven Permissible Safe Control with Barrier Certificates [11.96747040086603]
This paper introduces a method of identifying a maximal set of safe strategies from data for systems with unknown dynamics.
Case studies show that increasing the size of the dataset used to learn the system enlarges the permissible strategy set.
arXiv Detail & Related papers (2024-04-30T18:32:24Z) - Learning Local Control Barrier Functions for Safety Control of Hybrid
Systems [11.57209279619218]
Safety is a primary concern for hybrid robotic systems.
Existing safety-critical control approaches for hybrid systems are either computationally inefficient, detrimental to system performance, or limited to small-scale systems.
We propose a learning-enabled approach to construct local Control Barrier Functions (CBFs) to guarantee the safety of a wide class of nonlinear hybrid dynamical systems.
arXiv Detail & Related papers (2024-01-26T14:38:43Z) - In-Distribution Barrier Functions: Self-Supervised Policy Filters that
Avoid Out-of-Distribution States [84.24300005271185]
We propose a control filter that wraps any reference policy and effectively encourages the system to stay in-distribution with respect to offline-collected safe demonstrations.
Our method is effective for two different visuomotor control tasks in simulation environments, including both top-down and egocentric view settings.
arXiv Detail & Related papers (2023-01-27T22:28:19Z) - Recursively Feasible Probabilistic Safe Online Learning with Control Barrier Functions [60.26921219698514]
We introduce a model-uncertainty-aware reformulation of CBF-based safety-critical controllers; a sketch of the pointwise quadratic-program filter that such controllers build on is given after this list.
We then present the pointwise feasibility conditions of the resulting safety controller.
We use these conditions to devise an event-triggered online data collection strategy.
arXiv Detail & Related papers (2022-08-23T05:02:09Z) - Learning Robust Output Control Barrier Functions from Safe Expert Demonstrations [50.37808220291108]
This paper addresses learning safe output feedback control laws from partial observations of expert demonstrations.
We first propose robust output control barrier functions (ROCBFs) as a means to guarantee safety.
We then formulate an optimization problem to learn ROCBFs from expert demonstrations that exhibit safe system behavior.
arXiv Detail & Related papers (2021-11-18T23:21:00Z) - Learning Robust Hybrid Control Barrier Functions for Uncertain Systems [68.30783663518821]
We propose robust hybrid control barrier functions as a means to synthesize control laws that ensure robust safety.
Based on this notion, we formulate an optimization problem for learning robust hybrid control barrier functions from data.
Our techniques allow us to safely expand the region of attraction of a compass gait walker that is subject to model uncertainty.
arXiv Detail & Related papers (2021-01-16T17:53:35Z) - Control Barrier Functions for Unknown Nonlinear Systems using Gaussian
Processes [17.870440210358847]
This paper focuses on the controller synthesis for unknown, nonlinear systems while ensuring safety constraints.
In the learning step, we use a data-driven approach to learn the unknown control affine nonlinear dynamics together with a statistical bound on the accuracy of the learned model.
In the second, controller-synthesis step, we develop a systematic approach to compute control barrier functions that explicitly take the uncertainty of the learned model into account.
arXiv Detail & Related papers (2020-10-12T16:12:52Z) - Chance-Constrained Trajectory Optimization for Safe Exploration and
Learning of Nonlinear Systems [81.7983463275447]
Learning-based control algorithms require data collection with abundant supervision for training.
We present a new approach for optimal motion planning with safe exploration that integrates chance-constrained optimal control with dynamics learning and feedback control.
arXiv Detail & Related papers (2020-05-09T05:57:43Z)