Learning Robust Hybrid Control Barrier Functions for Uncertain Systems
- URL: http://arxiv.org/abs/2101.06492v1
- Date: Sat, 16 Jan 2021 17:53:35 GMT
- Title: Learning Robust Hybrid Control Barrier Functions for Uncertain Systems
- Authors: Alexander Robey, Lars Lindemann, Stephen Tu, and Nikolai Matni
- Abstract summary: We propose robust hybrid control barrier functions as a means to synthesize control laws that ensure robust safety.
Based on this notion, we formulate an optimization problem for learning robust hybrid control barrier functions from data.
Our techniques allow us to safely expand the region of attraction of a compass gait walker that is subject to model uncertainty.
- Score: 68.30783663518821
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The need for robust control laws is especially important in safety-critical
applications. We propose robust hybrid control barrier functions as a means to
synthesize control laws that ensure robust safety. Based on this notion, we
formulate an optimization problem for learning robust hybrid control barrier
functions from data. We identify sufficient conditions on the data such that
feasibility of the optimization problem ensures correctness of the learned
robust hybrid control barrier functions. Our techniques allow us to safely
expand the region of attraction of a compass gait walker that is subject to
model uncertainty.
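To make the ingredients concrete, the sketch below poses a simplified version of such a learning problem as a convex program. It assumes control-affine dynamics xdot = f(x) + g(x)u + d with a norm-bounded disturbance ||d|| <= delta and a linear-in-features barrier h(x) = theta^T phi(x); the feature map, the sampled sets, and the worst-case term delta * ||grad h(x)|| are our illustrative choices, not the paper's construction.
```python
# Minimal sketch (not the authors' code) of learning a robust CBF from data.
import numpy as np
import cvxpy as cp

def phi(x):
    """Hypothetical feature map: quadratic monomials of a 2-D state."""
    x1, x2 = x
    return np.array([1.0, x1, x2, x1 * x1, x1 * x2, x2 * x2])

def grad_phi(x):
    """Jacobian of phi, shape (6, 2)."""
    x1, x2 = x
    return np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0],
                     [2 * x1, 0.0], [x2, x1], [0.0, 2 * x2]])

def learn_robust_cbf(X_safe, X_unsafe, XU_dyn, f, g,
                     alpha=1.0, delta=0.1, gamma=0.1):
    """Fit h(x) = theta @ phi(x) from labeled samples.

    X_safe / X_unsafe: states required to satisfy h >= gamma / h <= -gamma.
    XU_dyn: (state, input) pairs at which the decrease condition is enforced,
    robustified against the worst-case disturbance delta * ||grad h(x)||.
    """
    theta = cp.Variable(len(phi(X_safe[0])))
    cons = [theta @ phi(x) >= gamma for x in X_safe]
    cons += [theta @ phi(x) <= -gamma for x in X_unsafe]
    for x, u in XU_dyn:
        gh = grad_phi(x).T @ theta          # gradient of h at x, shape (2,)
        cons.append(gh @ (f(x) + g(x) @ u) - delta * cp.norm(gh, 2)
                    >= -alpha * (theta @ phi(x)))
    prob = cp.Problem(cp.Minimize(cp.norm(theta, 2)), cons)
    prob.solve()
    return theta.value
```
Feasibility of such a program does not by itself certify safety everywhere; the sufficient conditions on the data identified in the abstract are what bridge from these pointwise sample constraints to correctness of the learned function.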
Related papers
- A Physics-Informed Machine Learning Framework for Safe and Optimal Control of Autonomous Systems [8.347548017994178]
Safety and performance can be competing objectives, which makes their co-optimization difficult.
We propose a state-constrained optimal control problem, where performance objectives are encoded via a cost function and safety requirements are imposed as state constraints.
We demonstrate that the resultant value function satisfies a Hamilton-Jacobi-Bellman equation, which we approximate efficiently using a novel machine learning framework.
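For reference, a generic finite-horizon HJB equation of the kind such value functions satisfy (the cited paper's state-constrained variant may differ in form):
```latex
% Generic finite-horizon HJB equation (illustrative form only):
\frac{\partial V}{\partial t}(x,t)
  + \min_{u \in \mathcal{U}} \Bigl\{ \nabla_x V(x,t)^{\top} f(x,u) + \ell(x,u) \Bigr\} = 0,
\qquad V(x,T) = \ell_T(x).
```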
arXiv Detail & Related papers (2025-02-16T09:46:17Z)
- Learning-Enhanced Safeguard Control for High-Relative-Degree Systems: Robust Optimization under Disturbances and Faults [6.535600892275023]
This paper aims to enhance system performance with safety guarantee in reinforcement learning-based optimal control problems.
The concept of gradient similarity is proposed to quantify the relationship between the gradient of the safety objective and the gradient of the performance objective.
Gradient manipulation and adaptive mechanisms are then introduced into the safe RL framework to improve performance while preserving the safety guarantee.
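A minimal sketch of how such a similarity measure and gradient manipulation could look (our reading of the idea; a PCGrad-style projection stands in for the paper's mechanism):
```python
# Illustrative sketch (not the paper's code): measure agreement between the
# performance gradient g_perf and the safety gradient g_safe via cosine
# similarity, and strip the conflicting component of g_perf when they oppose.
import numpy as np

def gradient_similarity(g_perf, g_safe, eps=1e-12):
    """Cosine similarity between the performance and safety gradients."""
    return g_perf @ g_safe / (np.linalg.norm(g_perf) * np.linalg.norm(g_safe) + eps)

def manipulated_step(g_perf, g_safe):
    """Project g_perf off the safety gradient when they conflict, then
    combine the two (one simple combination; weights are a design choice)."""
    if gradient_similarity(g_perf, g_safe) < 0.0:
        g_perf = g_perf - (g_perf @ g_safe) / (g_safe @ g_safe) * g_safe
    return g_perf + g_safe
```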
arXiv Detail & Related papers (2025-01-26T03:03:02Z)
- Reinforcement Learning-based Receding Horizon Control using Adaptive Control Barrier Functions for Safety-Critical Systems [14.166970599802324]
Optimal control methods provide solutions to safety-critical problems but easily become intractable.
We propose a Reinforcement Learning-based Receding Horizon Control approach leveraging Model Predictive Control.
We validate our method by applying it to the challenging automated merging control problem for Connected and Automated Vehicles.
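As a rough illustration of the receding-horizon pattern (a plain scalar-input CBF filter stands in for the paper's combination of RL, MPC, and adaptive CBFs; all names below are hypothetical):
```python
# A learned policy proposes an action; a CBF condition filters it; only the
# first filtered action is applied before replanning.
import numpy as np

def cbf_filter(u_rl, x, f, g, h, grad_h, alpha=1.0):
    """Minimally modify u_rl so that dh/dt >= -alpha * h(x) (scalar input)."""
    lf = grad_h(x) @ f(x)          # drift term of dh/dt
    lg = grad_h(x) @ g(x)          # input term of dh/dt
    if abs(lg) < 1e-9:             # input cannot affect h here; pass through
        return u_rl
    bound = (-alpha * h(x) - lf) / lg
    return max(u_rl, bound) if lg > 0 else min(u_rl, bound)

def receding_horizon(x0, policy, step, f, g, h, grad_h, n_steps=100):
    """Apply the filtered first action at every step, then replan."""
    x = x0
    for _ in range(n_steps):
        u = cbf_filter(policy(x), x, f, g, h, grad_h)
        x = step(x, u)
    return x
```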
arXiv Detail & Related papers (2024-03-26T02:49:08Z)
- Meta-Learning Priors for Safe Bayesian Optimization [72.8349503901712]
We build on the meta-learning algorithm F-PACOH, which provides reliable uncertainty quantification in data-scarce settings.
As a core contribution, we develop a novel framework for choosing safety-compliant priors in a data-driven manner.
On benchmark functions and a high-precision motion system, we demonstrate that our meta-learned priors accelerate the convergence of safe BO approaches.
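A minimal sketch of the underlying recipe, assuming a meta-learned prior mean m(.) and kernel k(.,.) are given (this is not F-PACOH itself):
```python
# Plug a meta-learned prior into an exact GP, then restrict the optimizer to
# candidates whose pessimistic prediction still satisfies the safety constraint.
import numpy as np

def gp_posterior(X, y, Xq, m, k, noise=1e-3):
    """Exact GP posterior mean/std at query points Xq, with prior mean m."""
    K = k(X, X) + noise * np.eye(len(X))
    Kq = k(Xq, X)
    mu = m(Xq) + Kq @ np.linalg.solve(K, y - m(X))
    var = k(Xq, Xq).diagonal() - np.einsum('ij,ji->i', Kq, np.linalg.solve(K, Kq.T))
    return mu, np.sqrt(np.maximum(var, 0.0))

def safe_mask(mu, sigma, beta=2.0, threshold=0.0):
    """Keep candidates whose lower confidence bound clears the threshold."""
    return mu - beta * sigma >= threshold
```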
arXiv Detail & Related papers (2022-10-03T08:38:38Z)
- Enforcing Hard Constraints with Soft Barriers: Safe Reinforcement Learning in Unknown Stochastic Environments [84.3830478851369]
We propose a safe reinforcement learning approach that can jointly learn the environment and optimize the control policy.
Our approach can effectively enforce hard safety constraints and significantly outperform CMDP-based baseline methods in terms of the system safety rate measured in simulations.
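One simple way to realize a "soft barrier" for a hard constraint c(x) <= 0 is a smooth barrier penalty added to the policy loss; the sketch below (PyTorch, our illustrative choice, not the paper's construction) shows the shape of such a penalty:
```python
# Smooth log-barrier inside the feasible set, switched to a quadratic penalty
# outside it so gradients remain defined everywhere.
import torch

def soft_barrier(c_val, t=10.0):
    """Differentiable barrier penalty for the constraint c(x) <= 0."""
    inside = -torch.log(-c_val.clamp(max=-1e-3)) / t   # log-barrier, kept finite
    outside = t * c_val ** 2                           # smooth penalty if violated
    return torch.where(c_val < 0, inside, outside)

# Usage: total_loss = policy_loss + lam * soft_barrier(c_val).mean()
```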
arXiv Detail & Related papers (2022-09-29T20:49:25Z)
- Recursively Feasible Probabilistic Safe Online Learning with Control Barrier Functions [60.26921219698514]
We introduce a model-uncertainty-aware reformulation of CBF-based safety-critical controllers.
We then present the pointwise feasibility conditions of the resulting safety controller.
We use these conditions to devise an event-triggered online data collection strategy.
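The event-triggering logic can be stated compactly: collect new data exactly when the model-uncertainty bound consumes the safety constraint's slack. A hedged sketch, with all names hypothetical:
```python
def should_collect(slack, uncertainty_bound, margin=0.05):
    """Trigger data collection when the pessimistic constraint value is low.

    slack: nominal value of the CBF constraint, Lf_h + Lg_h * u + alpha * h
    uncertainty_bound: worst-case degradation of that value under model error
    """
    return slack - uncertainty_bound < margin
```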
arXiv Detail & Related papers (2022-08-23T05:02:09Z)
- Pointwise Feasibility of Gaussian Process-based Safety-Critical Control under Model Uncertainty [77.18483084440182]
Control Barrier Functions (CBFs) and Control Lyapunov Functions (CLFs) are popular tools for enforcing safety and stability of a controlled system, respectively.
We present a Gaussian Process (GP)-based approach to tackle the problem of model uncertainty in safety-critical controllers that use CBFs and CLFs.
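In outline, a GP models the residual between the true and nominal dynamics, and the CBF (or CLF) condition is tightened by a confidence bound. A schematic check, not the paper's exact construction:
```python
def robust_cbf_holds(lf_h, lg_h, u, h, mu_d, sigma_d, alpha=1.0, beta=2.0):
    """Check the CBF decrease condition under the GP's beta-sigma bound on the
    dynamics residual; all arguments are scalars at the current state."""
    nominal = lf_h + lg_h * u + alpha * h   # nominal-model CBF condition
    return nominal + mu_d - beta * sigma_d >= 0.0
```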
arXiv Detail & Related papers (2021-06-13T23:08:49Z)
- Learning Hybrid Control Barrier Functions from Data [66.37785052099423]
Motivated by the lack of systematic tools to obtain safe control laws for hybrid systems, we propose an optimization-based framework for learning certifiably safe control laws from data.
In particular, we assume a setting in which the system dynamics are known and in which data exhibiting safe system behavior is available.
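For context, a hybrid CBF must certify safety both along continuous flows and across discrete jumps; a generic (non-robust) form of the two conditions is:
```latex
% Generic hybrid CBF conditions (illustrative; the robust variant in the
% paper above adds disturbance terms). Along continuous flows on C:
\sup_{u \in \mathcal{U}} \nabla h(x)^{\top} \bigl( f(x) + g(x)u \bigr)
  \ge -\alpha\bigl( h(x) \bigr), \qquad x \in C,
% and across discrete jumps x^{+} = \Delta(x) on D:
h\bigl( \Delta(x) \bigr) \ge 0, \qquad x \in D, \; h(x) \ge 0.
```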
arXiv Detail & Related papers (2020-11-08T23:55:02Z)