Safe Control Under Input Limits with Neural Control Barrier Functions
- URL: http://arxiv.org/abs/2211.11056v1
- Date: Sun, 20 Nov 2022 19:01:37 GMT
- Title: Safe Control Under Input Limits with Neural Control Barrier Functions
- Authors: Simin Liu, Changliu Liu, and John Dolan
- Abstract summary: We propose new methods to synthesize control barrier function (CBF)-based safe controllers that avoid input saturation.
We leverage techniques from machine learning, like neural networks and deep learning, to simplify this challenging problem in nonlinear control design.
We provide empirical results on a 10D state, 4D input quadcopter-pendulum system.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose new methods to synthesize control barrier function (CBF)-based
safe controllers that avoid input saturation, which can cause safety
violations. In particular, our method is created for high-dimensional, general
nonlinear systems, for which such tools are scarce. We leverage techniques from
machine learning, like neural networks and deep learning, to simplify this
challenging problem in nonlinear control design. The method consists of a
learner-critic architecture, in which the critic gives counterexamples of input
saturation and the learner optimizes a neural CBF to eliminate those
counterexamples. We provide empirical results on a 10D state, 4D input
quadcopter-pendulum system. Our learned CBF avoids input saturation and
maintains safety over nearly 100% of trials.
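As a concrete illustration of the learner-critic architecture above, here is a minimal sketch assuming a control-affine system xdot = f(x) + g(x)u with box input limits |u_i| <= U_MAX. The placeholder dynamics, network, sampling-based critic, and hinge loss are illustrative assumptions, not the authors' implementation.
```python
# Hypothetical sketch of a learner-critic loop for a neural CBF h_theta on a
# control-affine system xdot = f(x) + g(x) u with box limits |u_i| <= U_MAX.
# Dynamics, network, sampling, and hyperparameters are placeholders.
import torch
import torch.nn as nn

X_DIM, U_DIM, U_MAX, ALPHA = 10, 4, 1.0, 1.0

h = nn.Sequential(nn.Linear(X_DIM, 64), nn.Tanh(), nn.Linear(64, 1))
opt = torch.optim.Adam(h.parameters(), lr=1e-3)

def f(x):                                   # placeholder drift dynamics
    return -x

def g(x):                                   # placeholder input matrix, (batch, X_DIM, U_DIM)
    return torch.ones(x.shape[0], X_DIM, U_DIM)

def cbf_margin(x):
    """Max over admissible u of (hdot + ALPHA * h); a negative value means even
    the best input within the box limits cannot satisfy the CBF condition."""
    x = x.detach().requires_grad_(True)
    hx = h(x)
    grad_h = torch.autograd.grad(hx.sum(), x, create_graph=True)[0]
    lf_h = (grad_h * f(x)).sum(dim=1, keepdim=True)        # L_f h
    lg_h = torch.einsum('bi,bij->bj', grad_h, g(x))        # L_g h
    # Under box limits, the maximizing input saturates at sign(L_g h) * U_MAX.
    return lf_h + U_MAX * lg_h.abs().sum(dim=1, keepdim=True) + ALPHA * hx

for step in range(1000):
    # Critic: sample states and keep counterexamples where input saturation
    # forces a CBF violation (a real critic might search for these instead).
    x = 4.0 * torch.rand(256, X_DIM) - 2.0
    mask = (cbf_margin(x).detach() < 0).squeeze(1)
    counterexamples = x[mask]
    if counterexamples.shape[0] == 0:
        continue
    # Learner: hinge loss pushes the margin positive on the counterexamples.
    loss = torch.relu(-cbf_margin(counterexamples)).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```
The closed-form margin exploits the fact that, under box limits, the input maximizing hdot saturates each channel, so no inner optimization is needed; a full method would also shape h to be positive on the safe set and negative on the unsafe set.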
Related papers
- Fault Tolerant Neural Control Barrier Functions for Robotic Systems under Sensor Faults and Attacks [6.314000948709254]
We study safety-critical control synthesis for robotic systems under sensor faults and attacks.
Our main contribution is the development and synthesis of a new class of CBFs that we term fault tolerant neural control barrier functions (FT-NCBFs).
arXiv Detail & Related papers (2024-02-28T19:44:19Z)
- Exact Verification of ReLU Neural Control Barrier Functions [25.44521208451216]
Control Barrier Functions (CBFs) are a popular approach for safe control of nonlinear systems.
Recent machine learning methods that represent CBFs as neural networks have shown great promise.
This paper presents novel exact conditions and algorithms for verifying safety of feedforward NCBFs with ReLU activation functions.
arXiv Detail & Related papers (2023-10-13T18:59:04Z)
- Safe Neural Control for Non-Affine Control Systems with Differentiable Control Barrier Functions [58.19198103790931]
This paper addresses the problem of safety-critical control for non-affine control systems.
It has been shown that optimizing quadratic costs subject to state and control constraints can be reduced, sub-optimally, to a sequence of quadratic programs (QPs) by using Control Barrier Functions (CBFs); a minimal CBF-QP sketch appears after this list.
We incorporate higher-order CBFs into neural ordinary differential equation-based learning models as differentiable CBFs to guarantee safety for non-affine control systems.
arXiv Detail & Related papers (2023-09-06T05:35:48Z)
- In-Distribution Barrier Functions: Self-Supervised Policy Filters that Avoid Out-of-Distribution States [84.24300005271185]
We propose a control filter that wraps any reference policy and effectively encourages the system to stay in-distribution with respect to offline-collected safe demonstrations.
Our method is effective for two different visuomotor control tasks in simulation environments, including both top-down and egocentric view settings.
arXiv Detail & Related papers (2023-01-27T22:28:19Z)
- Log Barriers for Safe Black-box Optimization with Application to Safe Reinforcement Learning [72.97229770329214]
We introduce a general approach for solving high-dimensional non-linear optimization problems in which maintaining safety during learning is crucial.
Our approach, called LBSGD, is based on applying a logarithmic barrier approximation with a carefully chosen step size; a minimal log-barrier sketch appears after this list.
We demonstrate the effectiveness of our approach on minimizing constraint violations in policy optimization tasks in safe reinforcement learning.
arXiv Detail & Related papers (2022-07-21T11:14:47Z)
- Learning Robust Output Control Barrier Functions from Safe Expert Demonstrations [50.37808220291108]
This paper addresses learning safe output feedback control laws from partial observations of expert demonstrations.
We first propose robust output control barrier functions (ROCBFs) as a means to guarantee safety.
We then formulate an optimization problem to learn ROCBFs from expert demonstrations that exhibit safe system behavior.
arXiv Detail & Related papers (2021-11-18T23:21:00Z)
- Enforcing robust control guarantees within neural network policies [76.00287474159973]
We propose a generic nonlinear control policy class, parameterized by neural networks, that enforces the same provable robustness criteria as robust control.
We demonstrate the power of this approach on several domains, improving in average-case performance over existing robust control methods and in worst-case stability over (non-robust) deep RL methods.
arXiv Detail & Related papers (2020-11-16T17:14:59Z)
- Control Barrier Functions for Unknown Nonlinear Systems using Gaussian Processes [17.870440210358847]
This paper focuses on the controller synthesis for unknown, nonlinear systems while ensuring safety constraints.
In the learning step, we use a data-driven approach to learn the unknown control-affine nonlinear dynamics together with a statistical bound on the accuracy of the learned model; a minimal GP sketch appears after this list.
In the second, controller synthesis step, we develop a systematic approach to compute control barrier functions that explicitly take into consideration the uncertainty of the learned model.
arXiv Detail & Related papers (2020-10-12T16:12:52Z)
- Learning Control Barrier Functions from Expert Demonstrations [69.23675822701357]
We propose a learning-based approach to safe controller synthesis based on control barrier functions (CBFs).
We analyze an optimization-based approach to learning a CBF that enjoys provable safety guarantees under suitable Lipschitz assumptions on the underlying dynamical system.
To the best of our knowledge, these are the first results that learn provably safe control barrier functions from data.
arXiv Detail & Related papers (2020-04-07T12:29:06Z)
- Training Neural Network Controllers Using Control Barrier Functions in the Presence of Disturbances [9.21721532941863]
We propose to use imitation learning to learn Neural Network-based feedback controllers.
We also develop a new class of High Order CBFs for systems under external disturbances.
arXiv Detail & Related papers (2020-01-18T18:43:10Z)
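The QP reduction cited in the "Safe Neural Control for Non-Affine Control Systems" entry builds on the standard control-affine CBF-QP safety filter, sketched below. The single-integrator dynamics, the barrier h(x) = 1 - ||x||^2, and the constants are illustrative assumptions; the paper itself targets non-affine systems with higher-order CBFs.
```python
# Hypothetical CBF-QP safety filter: minimally modify a reference input u_ref
# subject to the CBF decrease condition and box input limits. Dynamics and
# barrier are illustrative: single integrator xdot = u, kept inside the unit
# ball via h(x) = 1 - ||x||^2 >= 0.
import numpy as np
import cvxpy as cp

def cbf_qp_filter(x, u_ref, u_max=1.0, alpha=1.0):
    f = np.zeros_like(x)          # drift f(x) = 0 for the single integrator
    g = np.eye(len(x))            # input matrix g(x) = I
    grad_h = -2.0 * x             # gradient of h(x) = 1 - ||x||^2
    h = 1.0 - x @ x

    u = cp.Variable(len(x))
    objective = cp.Minimize(cp.sum_squares(u - u_ref))
    constraints = [
        grad_h @ f + grad_h @ g @ u >= -alpha * h,   # hdot >= -alpha * h
        cp.abs(u) <= u_max,                          # box input limits
    ]
    cp.Problem(objective, constraints).solve()
    return u.value  # None if infeasible, i.e., limits force a CBF violation

x = np.array([0.8, 0.0])
print(cbf_qp_filter(x, u_ref=np.array([1.0, 0.0])))
```
The infeasible case (u.value is None) is exactly the input-saturation failure mode the main paper's learned CBF is trained to rule out.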
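The log-barrier idea behind LBSGD (see the "Log Barriers for Safe Black-box Optimization" entry) can be illustrated with plain gradient descent on a barrier-augmented objective. The one-dimensional problem, the constants, and the crude step-size rule below are assumptions for illustration, not the paper's algorithm.
```python
# Hypothetical log-barrier descent sketch: minimize f(x) subject to g(x) <= 0
# by descending f(x) - eta * log(-g(x)); the barrier blows up at the boundary,
# so iterates that start feasible stay feasible.
def f(x):        return (x - 3.0) ** 2          # objective: pulls x toward 3
def grad_f(x):   return 2.0 * (x - 3.0)
def g(x):        return x - 1.0                 # constraint: x <= 1
def grad_g(x):   return 1.0

def log_barrier_descent(x, eta=0.1, iters=200):
    for _ in range(iters):
        # Gradient of the barrier-augmented objective f(x) - eta * log(-g(x)).
        grad = grad_f(x) + eta * grad_g(x) / (-g(x))
        # Conservative step size: shrink near the boundary so a step cannot
        # cross it (a crude stand-in for LBSGD's adaptive step-size rule).
        step = min(0.1, 0.5 * (-g(x)) / (abs(grad) + 1e-8))
        x = x - step * grad
    return x

print(log_barrier_descent(x=0.0))  # approaches the constrained optimum x ~ 1
```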
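The first, learning step of the Gaussian-process pipeline (see the "Control Barrier Functions for Unknown Nonlinear Systems using Gaussian Processes" entry) pairs a learned model with a statistical error bound. A minimal scalar sketch, where the true dynamics, the kernel, and the confidence multiplier beta are illustrative assumptions:
```python
# Hypothetical sketch of the GP step: fit unknown scalar dynamics from data and
# use mean +/- beta * std as a statistical bound on model error.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
X = rng.uniform(-2.0, 2.0, size=(50, 1))
y = np.sin(X).ravel() + 0.05 * rng.standard_normal(50)  # noisy samples of sin

gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(X, y)

x_test = np.linspace(-2.0, 2.0, 5).reshape(-1, 1)
mean, std = gp.predict(x_test, return_std=True)
beta = 2.0                                   # confidence multiplier (assumption)
print(mean - beta * std, mean + beta * std)  # high-probability envelope
```
The controller-synthesis step would then tighten the CBF condition by the width of this envelope so that safety holds for every model consistent with the data.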
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.