Guaranteeing Safety of Learned Perception Modules via Measurement-Robust
Control Barrier Functions
- URL: http://arxiv.org/abs/2010.16001v1
- Date: Fri, 30 Oct 2020 00:19:01 GMT
- Title: Guaranteeing Safety of Learned Perception Modules via Measurement-Robust
Control Barrier Functions
- Authors: Sarah Dean, Andrew J. Taylor, Ryan K. Cosner, Benjamin Recht, Aaron D.
Ames
- Abstract summary: We seek to unify techniques from control theory and machine learning to synthesize controllers that achieve safety.
We define a Measurement-Robust Control Barrier Function (MR-CBF) as a tool for determining safe control inputs.
We demonstrate the efficacy of MR-CBFs in achieving safety with measurement model uncertainty on a simulated Segway system.
- Score: 43.4346415363429
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Modern nonlinear control theory seeks to develop feedback controllers that
endow systems with properties such as safety and stability. The guarantees
ensured by these controllers often rely on accurate estimates of the system
state for determining control actions. In practice, measurement model
uncertainty can lead to error in state estimates that degrades these
guarantees. In this paper, we seek to unify techniques from control theory and
machine learning to synthesize controllers that achieve safety in the presence
of measurement model uncertainty. We define the notion of a Measurement-Robust
Control Barrier Function (MR-CBF) as a tool for determining safe control inputs
when facing measurement model uncertainty. Furthermore, MR-CBFs are used to
inform sampling methodologies for learning-based perception systems and
quantify tolerable error in the resulting learned models. We demonstrate the
efficacy of MR-CBFs in achieving safety with measurement model uncertainty on a
simulated Segway system.
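For readers skimming this page, here is a hedged sketch of what such a robustified safety condition can look like; it assumes the standard control-affine setting (x_dot = f(x) + g(x)u with safe set C = {x : h(x) >= 0}) and paraphrases the abstract rather than quoting the paper, so the exact margin functions and constants should be checked against the paper itself.

```latex
% Standard CBF condition, evaluated at the true state x:
\sup_{u \in \mathcal{U}} \big[ L_f h(x) + L_g h(x)\, u \big] \;\geq\; -\alpha(h(x))

% Measurement-robust variant (sketch): the condition is enforced at the state
% estimate \hat{x}, with margin functions (a, b) that absorb the worst-case
% effect of bounded measurement error from the learned perception module:
\sup_{u \in \mathcal{U}} \big[ L_f h(\hat{x}) + L_g h(\hat{x})\, u
  - \big( a(\hat{x}) + b(\hat{x})\, \|u\|_2 \big) \big] \;\geq\; -\alpha(h(\hat{x}))
```

The margin functions are where the learned perception model enters: a tighter error bound on the learned measurement map permits smaller margins and therefore less conservative safe inputs, which is what motivates the sampling and error-quantification results mentioned in the abstract. A minimal code sketch of the corresponding QP-based safety filter appears after the related-papers list below.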
Related papers
- Learning-Based Shielding for Safe Autonomy under Unknown Dynamics [9.786577115501602]
Shielding is a method used to guarantee the safety of a system under a black-box controller.
This paper proposes a data-driven shielding methodology that guarantees safety for unknown systems.
arXiv Detail & Related papers (2024-10-07T16:10:15Z)
- Statistical Safety and Robustness Guarantees for Feedback Motion Planning of Unknown Underactuated Stochastic Systems [1.0323063834827415]
We propose a sampling-based planner that uses the mean dynamics model and simultaneously bounds the closed-loop tracking error via a learned disturbance bound.
We validate that our guarantees translate to empirical safety in simulation on a 10D quadrotor, and in the real world on a physical CrazyFlie quadrotor and Clearpath Jackal robot.
arXiv Detail & Related papers (2022-12-13T19:38:39Z)
- Recursively Feasible Probabilistic Safe Online Learning with Control Barrier Functions [60.26921219698514]
We introduce a model-uncertainty-aware reformulation of CBF-based safety-critical controllers.
We then present the pointwise feasibility conditions of the resulting safety controller.
We use these conditions to devise an event-triggered online data collection strategy.
arXiv Detail & Related papers (2022-08-23T05:02:09Z)
- Verification of safety critical control policies using kernel methods [0.0]
We propose a framework for modeling the error of the value function inherent in Hamilton-Jacobi reachability using a Gaussian process.
The derived safety controller can be used in conjunction with arbitrary controllers to provide a safe hybrid control law.
arXiv Detail & Related papers (2022-03-23T13:33:02Z)
- Learning Robust Output Control Barrier Functions from Safe Expert Demonstrations [50.37808220291108]
This paper addresses learning safe output feedback control laws from partial observations of expert demonstrations.
We first propose robust output control barrier functions (ROCBFs) as a means to guarantee safety.
We then formulate an optimization problem to learn ROCBFs from expert demonstrations that exhibit safe system behavior.
arXiv Detail & Related papers (2021-11-18T23:21:00Z)
- Pointwise Feasibility of Gaussian Process-based Safety-Critical Control under Model Uncertainty [77.18483084440182]
Control Barrier Functions (CBFs) and Control Lyapunov Functions (CLFs) are popular tools for enforcing safety and stability of a controlled system, respectively.
We present a Gaussian Process (GP)-based approach to tackle the problem of model uncertainty in safety-critical controllers that use CBFs and CLFs.
arXiv Detail & Related papers (2021-06-13T23:08:49Z)
- Evaluating the Safety of Deep Reinforcement Learning Models using Semi-Formal Verification [81.32981236437395]
We present a semi-formal verification approach for decision-making tasks based on interval analysis.
Our method obtains results comparable to formal verifiers on standard benchmarks.
It allows safety properties of decision-making models to be evaluated efficiently in practical applications.
arXiv Detail & Related papers (2020-10-19T11:18:06Z)
- Reinforcement Learning for Safety-Critical Control under Model Uncertainty, using Control Lyapunov Functions and Control Barrier Functions [96.63967125746747]
A reinforcement learning framework learns the model uncertainty present in the CBF and CLF constraints.
The resulting RL-CBF-CLF-QP controller addresses the problem of model uncertainty in the safety constraints.
arXiv Detail & Related papers (2020-04-16T10:51:33Z)
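As noted after the abstract above, CBF-type conditions are typically enforced online by filtering a nominal input through a small quadratic program. The following is a minimal, illustrative Python sketch of such a safety filter with a constant robustness margin standing in for MR-CBF-style margin functions; all names, dynamics, and constants are hypothetical and not taken from any of the papers listed on this page.

```python
# Minimal CBF-style safety filter for a control-affine system
# x_dot = f(x) + g(x) u, using the closed-form solution of the QP with a
# single affine CBF constraint. Illustrative only; all values are hypothetical.
import numpy as np

def safety_filter(u_nom, x_hat, h, grad_h, f, g, alpha=1.0, margin=0.0):
    """Project a nominal input onto the CBF constraint evaluated at the
    state estimate x_hat. The enforced constraint is
        grad_h(x_hat) @ (f(x_hat) + g(x_hat) @ u) + alpha * h(x_hat) - margin >= 0,
    where the constant `margin` is a crude stand-in for a measurement-robustness
    term (an MR-CBF margin would generally also scale with ||u||).
    """
    Lfh = grad_h(x_hat) @ f(x_hat)        # drift term L_f h(x_hat)
    Lgh = grad_h(x_hat) @ g(x_hat)        # input term L_g h(x_hat)
    slack = Lfh + Lgh @ u_nom + alpha * h(x_hat) - margin
    if slack >= 0.0:                      # nominal input already satisfies the constraint
        return u_nom
    # Closed-form QP solution: minimum-norm correction along Lgh.
    return u_nom - (slack / (Lgh @ Lgh)) * Lgh

# Hypothetical single-integrator example: keep the state inside a disk of radius r.
r = 1.0
h = lambda x: r**2 - x @ x                # safe set: h(x) >= 0
grad_h = lambda x: -2.0 * x
f = lambda x: np.zeros(2)                 # single integrator: x_dot = u
g = lambda x: np.eye(2)

x_hat = np.array([0.9, 0.0])              # state estimate near the boundary
u_nom = np.array([1.0, 0.0])              # nominal input pushing outward
u_safe = safety_filter(u_nom, x_hat, h, grad_h, f, g, alpha=2.0, margin=0.1)
print(u_safe)                             # outward push scaled back to respect the constraint
```

In this single-constraint, unconstrained-input case the QP has the closed-form projection used above; with input bounds or multiple barriers the QP would be solved numerically instead.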
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.