Training Neural Network Controllers Using Control Barrier Functions in the Presence of Disturbances
- URL: http://arxiv.org/abs/2001.08088v1
- Date: Sat, 18 Jan 2020 18:43:10 GMT
- Title: Training Neural Network Controllers Using Control Barrier Functions in the Presence of Disturbances
- Authors: Shakiba Yaghoubi, Georgios Fainekos, Sriram Sankaranarayanan
- Abstract summary: We propose to use imitation learning to learn Neural Network-based feedback controllers.
We also develop a new class of High Order CBFs for systems under external disturbances.
- Score: 9.21721532941863
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Control Barrier Functions (CBF) have recently been utilized in the design of provably safe feedback control laws for nonlinear systems. These feedback control methods typically compute the next control input by solving an online Quadratic Program (QP). Solving a QP in real time can be computationally expensive for resource-constrained systems. In this work, we propose to use imitation learning to learn Neural Network-based feedback controllers that satisfy the CBF constraints. In the process, we also develop a new class of High Order CBFs for systems under external disturbances. We demonstrate the framework on a unicycle model subject to external disturbances, e.g., wind or currents.
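To make the online step concrete, here is a minimal sketch of the kind of CBF-QP safety filter the abstract describes, written for a single-integrator robot with a bounded additive disturbance; the dynamics, barrier function, and numeric bounds are illustrative assumptions rather than the paper's exact unicycle setup.

```python
# Minimal CBF-QP safety filter sketch for x_dot = u + d, ||d|| <= d_max
# (assumed model; the paper's system is a disturbed unicycle).
import numpy as np
import cvxpy as cp

obstacle, radius, d_max, alpha = np.array([2.0, 0.0]), 0.5, 0.1, 1.0

def h(x):
    # Barrier: positive outside the disk of radius `radius` around `obstacle`.
    return float((x - obstacle) @ (x - obstacle)) - radius**2

def safe_control(x, u_ref):
    """Online QP: track u_ref while enforcing the robust CBF condition
    grad_h(x) . u - d_max * ||grad_h(x)|| >= -alpha * h(x)."""
    grad_h = 2.0 * (x - obstacle)
    u = cp.Variable(2)
    cbf = grad_h @ u - d_max * np.linalg.norm(grad_h) >= -alpha * h(x)
    cp.Problem(cp.Minimize(cp.sum_squares(u - u_ref)), [cbf]).solve()
    return u.value

x = np.array([1.4, 0.05])                     # state near the obstacle
print(safe_control(x, u_ref=np.array([1.0, 0.0])))
```

An imitation-learned controller would then be trained offline on (state, QP solution) pairs produced by `safe_control`, so the QP need not be solved in real time on the resource-constrained platform.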
Related papers
- Safe Neural Control for Non-Affine Control Systems with Differentiable Control Barrier Functions [58.19198103790931]
This paper addresses the problem of safety-critical control for non-affine control systems.
It has been shown that optimizing quadratic costs subject to state and control constraints can be sub-optimally reduced to a sequence of quadratic programs (QPs) by using Control Barrier Functions (CBFs).
We incorporate higher-order CBFs into neural ordinary differential equation-based learning models as differentiable CBFs to guarantee safety for non-affine control systems.
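As a rough illustration of the idea, the sketch below treats a second-order CBF for a double integrator as a differentiable penalty when training a neural controller; the system, class-K coefficients, and training loop are assumptions, not the paper's neural-ODE formulation.

```python
# Differentiable second-order CBF penalty for p_ddot = u, keeping p <= p_max.
# Coefficients a1, a2 and the toy task loss are illustrative assumptions.
import torch

p_max, a1, a2 = 1.0, 2.0, 2.0

def hocbf_residual(p, v, u):
    h = p_max - p                       # psi_0 (relative degree 2 w.r.t. u)
    psi1 = -v + a1 * h                  # psi_1 = h_dot + a1 * h
    psi2 = -u + a1 * (-v) + a2 * psi1   # psi_2 = psi1_dot + a2 * psi1
    return psi2                         # safety requires psi_2 >= 0

policy = torch.nn.Sequential(torch.nn.Linear(2, 16), torch.nn.Tanh(),
                             torch.nn.Linear(16, 1))
opt = torch.optim.Adam(policy.parameters(), lr=1e-2)
states = torch.rand(64, 2) * 2 - 1              # random (p, v) training batch
for _ in range(200):
    u = policy(states).squeeze(-1)
    track = (u - 0.5).pow(2).mean()             # stand-in task loss
    barrier = torch.relu(-hocbf_residual(states[:, 0], states[:, 1], u)).mean()
    loss = track + 10.0 * barrier               # penalize HOCBF violations
    opt.zero_grad(); loss.backward(); opt.step()
```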
arXiv Detail & Related papers (2023-09-06T05:35:48Z)
- Learning Robust and Correct Controllers from Signal Temporal Logic Specifications Using BarrierNet [5.809331819510702]
We exploit STL quantitative semantics to define a notion of robust satisfaction.
We construct a set of trainable High Order Control Barrier Functions (HOCBFs) enforcing the satisfaction of formulas in a fragment of STL.
We train the HOCBFs together with other neural network parameters to further improve the robustness of the controller.
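A minimal sketch of STL quantitative semantics, the notion of robust satisfaction the entry refers to: robustness above zero means the formula holds, and its magnitude is the margin training tries to maximize. The formulas and smooth surrogate below are illustrative assumptions.

```python
# STL robustness over a sampled signal (illustrative fragment).
import numpy as np

def always(pred_vals):      # rho(G phi) = min over time of rho(phi, t)
    return np.min(pred_vals)

def eventually(pred_vals):  # rho(F phi) = max over time of rho(phi, t)
    return np.max(pred_vals)

def soft_min(vals, k=10.0): # differentiable surrogate used when training
    return -np.log(np.sum(np.exp(-k * np.asarray(vals)))) / k

x = np.array([0.4, 0.6, 0.9, 1.2])    # sampled trajectory
rho = always(x - 0.3)                 # robustness of "always x > 0.3"
print(rho, soft_min(x - 0.3))         # exact vs. smooth robustness
```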
arXiv Detail & Related papers (2023-04-12T21:12:15Z)
- Learning Feasibility Constraints for Control Barrier Functions [8.264868845642843]
We employ machine learning techniques to ensure the feasibility of Quadratic Programs (QPs).
We propose a sampling-based learning approach to learn a new feasibility constraint for CBFs.
We demonstrate the advantages of the proposed learning approach to constrained optimal control problems.
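The sketch below illustrates the sampling-based flavor of this approach: label sampled states by QP feasibility, then fit a classifier whose decision boundary can serve as a new constraint. `qp_is_feasible` is a hypothetical stand-in for solving the actual CBF-QP and checking solver status.

```python
# Sampling-based feasibility learning sketch (illustrative assumptions).
import numpy as np
from sklearn.svm import SVC

def qp_is_feasible(x):
    # Stand-in for solving the CBF-QP at state x; feasibility is mocked
    # here by a simple region test instead of a solver status check.
    return float(np.linalg.norm(x) > 0.7)

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(500, 2))            # sampled states
y = np.array([qp_is_feasible(x) for x in X])     # feasibility labels
clf = SVC(kernel="rbf").fit(X, y)                # learned feasibility boundary
# clf.decision_function(x) >= 0 could then be imposed as a new constraint.
print(clf.decision_function([[1.0, 1.0], [0.1, 0.1]]))
```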
arXiv Detail & Related papers (2023-03-10T16:29:20Z)
- Improving the Performance of Robust Control through Event-Triggered Learning [74.57758188038375]
We propose an event-triggered learning algorithm that decides when to learn in the face of uncertainty in the LQR problem.
We demonstrate improved performance over a robust controller baseline in a numerical example.
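A minimal sketch of one plausible event trigger, assuming the decision is based on how far the empirically observed cost drifts from the cost predicted by the current model; the window and threshold are illustrative choices.

```python
# Event-triggered learning sketch: re-learn only when observed LQR cost
# leaves a confidence band around the model's prediction (assumed rule).
import numpy as np

def should_learn(observed_costs, predicted_cost, window=50, kappa=3.0):
    """Trigger when the windowed mean cost deviates from the prediction
    by more than kappa standard errors."""
    recent = np.asarray(observed_costs[-window:])
    stderr = recent.std(ddof=1) / np.sqrt(len(recent))
    return abs(recent.mean() - predicted_cost) > kappa * stderr

costs = list(np.random.normal(10.0, 0.5, 200))   # costs under true dynamics
costs += list(np.random.normal(12.0, 0.5, 60))   # costs after dynamics shift
print(should_learn(costs, predicted_cost=10.0))  # True: time to re-learn
```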
arXiv Detail & Related papers (2022-07-28T17:36:37Z)
- Learning Robust Output Control Barrier Functions from Safe Expert Demonstrations [50.37808220291108]
This paper addresses learning safe output feedback control laws from partial observations of expert demonstrations.
We first propose robust output control barrier functions (ROCBFs) as a means to guarantee safety.
We then formulate an optimization problem to learn ROCBFs from expert demonstrations that exhibit safe system behavior.
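The sketch below conveys the shape of such a learning problem under simplifying assumptions: fit a parametric barrier so that expert-visited states are certified safe and a discrete-time CBF condition holds along observed transitions. Margins, architecture, and the mocked demonstrations are illustrative.

```python
# Learning a barrier from safe demonstrations (simplified sketch).
import torch

h = torch.nn.Sequential(torch.nn.Linear(2, 32), torch.nn.Tanh(),
                        torch.nn.Linear(32, 1))
opt = torch.optim.Adam(h.parameters(), lr=1e-3)
demos = torch.rand(256, 2)                            # expert-visited states
demos_next = demos + 0.01 * torch.randn_like(demos)   # observed next states
alpha, gamma = 1.0, 0.1
for _ in range(300):
    hx, hxn = h(demos), h(demos_next)
    safe_loss = torch.relu(gamma - hx).mean()         # h(x) >= gamma on demos
    decrease_loss = torch.relu(-(hxn - hx) - alpha * hx).mean()  # discrete CBF condition
    loss = safe_loss + decrease_loss
    opt.zero_grad(); loss.backward(); opt.step()
```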
arXiv Detail & Related papers (2021-11-18T23:21:00Z)
- Pointwise Feasibility of Gaussian Process-based Safety-Critical Control under Model Uncertainty [77.18483084440182]
Control Barrier Functions (CBFs) and Control Lyapunov Functions (CLFs) are popular tools for enforcing safety and stability of a controlled system, respectively.
We present a Gaussian Process (GP)-based approach to tackle the problem of model uncertainty in safety-critical controllers that use CBFs and CLFs.
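As a rough sketch of the idea, a GP can be fit to the residual between the nominal and observed dynamics, and its predictive mean and standard deviation then tighten the CBF condition pointwise; the mocked data, default kernel, and confidence scaling below are assumptions.

```python
# GP-based tightening of a CBF condition (illustrative sketch).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

# Residuals r = f_true(x) - f_nominal(x) observed at visited states (mocked).
X = np.random.uniform(-1, 1, size=(40, 1))
r = 0.3 * np.sin(3 * X[:, 0]) + 0.02 * np.random.randn(40)
gp = GaussianProcessRegressor().fit(X, r)

def robust_cbf_rhs(x, h_x, alpha=1.0, beta=2.0):
    """Right-hand side of a tightened condition of the form
    Lf h + Lg h u >= -alpha*h + mu(x) + beta*sigma(x) (pointwise bound)."""
    mu, sigma = gp.predict(np.atleast_2d(x), return_std=True)
    return -alpha * h_x + mu[0] + beta * sigma[0]

print(robust_cbf_rhs(np.array([0.2]), h_x=0.5))
```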
arXiv Detail & Related papers (2021-06-13T23:08:49Z)
- Reach-SDP: Reachability Analysis of Closed-Loop Systems with Neural Network Controllers via Semidefinite Programming [19.51345816555571]
We propose a novel forward reachability analysis method for the safety verification of linear time-varying systems with neural networks in feedback.
We show that we can compute these approximate reachable sets using semidefinite programming.
We illustrate our method in a quadrotor example, in which we first approximate a nonlinear model predictive controller via a deep neural network and then apply our analysis tool to certify finite-time reachability and constraint satisfaction of the closed-loop system.
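Reach-SDP itself solves semidefinite programs; the sketch below instead uses interval bound propagation, a much coarser technique, purely to convey the forward-reachability idea of pushing a state set through the neural controller and the dynamics. The weights, dynamics, and initial box are illustrative assumptions.

```python
# Forward reachability of a closed loop with a ReLU-network controller,
# via interval bounds (coarse stand-in for the paper's SDP relaxation).
import numpy as np

def interval_linear(lo, hi, W, b):
    # Exact interval image of the affine map W x + b, splitting W by sign.
    Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b

rng = np.random.default_rng(1)
W1, b1 = rng.standard_normal((8, 2)), np.zeros(8)   # controller layer 1
W2, b2 = rng.standard_normal((1, 8)), np.zeros(1)   # controller output layer
A = np.array([[1.0, 0.1], [0.0, 1.0]])              # double-integrator step
B = np.array([[0.0], [0.1]])                        # nonnegative input matrix

lo, hi = np.array([-0.1, -0.1]), np.array([0.1, 0.1])  # initial state box
for _ in range(5):                                     # 5-step reachable boxes
    l1, h1 = interval_linear(lo, hi, W1, b1)
    l1, h1 = np.maximum(l1, 0.0), np.maximum(h1, 0.0)  # ReLU interval bounds
    ul, uh = interval_linear(l1, h1, W2, b2)           # bounds on u = NN(x)
    xl, xh = interval_linear(lo, hi, A, np.zeros(2))
    lo, hi = xl + B @ ul, xh + B @ uh                  # valid since B >= 0 here
print(lo, hi)                                          # over-approximate box
```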
arXiv Detail & Related papers (2020-04-16T18:48:25Z)
- Reinforcement Learning for Safety-Critical Control under Model Uncertainty, using Control Lyapunov Functions and Control Barrier Functions [96.63967125746747]
A reinforcement learning framework learns the model uncertainty present in the CBF and CLF constraints.
The resulting RL-CBF-CLF-QP controller addresses the problem of model uncertainty in the safety constraints.
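A hedged sketch of the QP shape such a framework corrects: a learned uncertainty estimate (the RL output in the paper) enters the Lyapunov decrease constraint. The system, CLF, and penalty weights below are assumptions.

```python
# CLF-QP with a learned uncertainty term `delta` (illustrative sketch).
import numpy as np
import cvxpy as cp

def clf_qp(x, delta, lam=1.0, rho=100.0):
    """min ||u||^2 + rho*s^2  s.t.  Vdot(x, u) + delta <= -lam*V(x) + s."""
    V = float(x @ x)                  # quadratic CLF for x_dot = u (assumed)
    grad_V = 2.0 * x
    u, s = cp.Variable(2), cp.Variable(nonneg=True)   # s relaxes the CLF
    con = [grad_V @ u + delta <= -lam * V + s]
    cp.Problem(cp.Minimize(cp.sum_squares(u) + rho * cp.square(s)), con).solve()
    return u.value

print(clf_qp(np.array([1.0, -0.5]), delta=0.2))
```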
arXiv Detail & Related papers (2020-04-16T10:51:33Z)
- Adaptive Control and Regret Minimization in Linear Quadratic Gaussian (LQG) Setting [91.43582419264763]
We propose LqgOpt, a novel reinforcement learning algorithm based on the principle of optimism in the face of uncertainty.
LqgOpt efficiently explores the system dynamics, estimates the model parameters up to their confidence interval, and deploys the controller of the most optimistic model.
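A toy sketch of optimism in the face of uncertainty for a scalar system: estimate the unknown parameter by least squares, build a confidence interval, and act as if the most favorable model in that interval were true. The confidence radius and cost proxy are crude illustrative choices, not LqgOpt's actual bounds.

```python
# Optimism-in-the-face-of-uncertainty sketch for x+ = a*x + u + noise.
import numpy as np

rng = np.random.default_rng(2)
xs, us = rng.standard_normal(100), rng.standard_normal(100)
a_true = 0.8
xn = a_true * xs + us + 0.05 * rng.standard_normal(100)  # observed transitions

a_hat = xs @ (xn - us) / (xs @ xs)         # least-squares estimate of a
width = 2.0 / np.sqrt(xs @ xs)             # crude confidence radius (assumed)
candidates = np.linspace(a_hat - width, a_hat + width, 50)
# Optimism: among plausible models, act on the one promising the lowest
# cost; for this toy problem, smaller |a| is cheaper to stabilize.
a_opt = candidates[np.argmin(np.abs(candidates))]
u_gain = -a_opt                            # deadbeat feedback u = -a_opt * x
print(a_hat, a_opt, u_gain)
```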
arXiv Detail & Related papers (2020-03-12T19:56:38Z)