Geometry of Radial Basis Neural Networks for Safety Biased Approximation of Unsafe Regions
- URL: http://arxiv.org/abs/2210.05596v2
- Date: Tue, 28 Mar 2023 16:09:49 GMT
- Title: Geometry of Radial Basis Neural Networks for Safety Biased Approximation of Unsafe Regions
- Authors: Ahmad Abuaish, Mohit Srinivasan, Patricio A. Vela
- Abstract summary: This manuscript describes the specific geometry of the neural network used for zeroing barrier function synthesis.
It shows how the network provides the necessary representation for splitting the state space into safe and unsafe regions.
- Score: 15.933842803733244
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Barrier function-based inequality constraints are a means to enforce safety
specifications for control systems. When used in conjunction with a convex
optimization program, they provide a computationally efficient method to
enforce safety for the general class of control-affine systems. One of the main
assumptions in this approach is a priori knowledge of the barrier
function itself, i.e., knowledge of the safe set. In the context of navigation
through unknown environments where the locally safe set evolves with time, such
knowledge does not exist. This manuscript focuses on the synthesis of a zeroing
barrier function characterizing the safe set based on safe and unsafe sample
measurements, e.g., from perception data in navigation applications. Prior work
formulated a supervised machine learning algorithm whose solution guaranteed
the construction of a zeroing barrier function with specific level-set
properties. However, it did not explore the geometry of the neural network
design used for the synthesis process. This manuscript describes the specific
geometry of the neural network used for zeroing barrier function synthesis, and
shows how the network provides the necessary representation for splitting the
state space into safe and unsafe regions.
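The core idea, a zeroing barrier function h(x) whose zero level set separates safe from unsafe samples, with the fit biased toward labeling uncertain states unsafe, can be illustrated with a short sketch. This is not the authors' construction: the radial basis features, the 2D obstacle data, the margin, and the asymmetric penalty on unsafe-side violations below are all illustrative assumptions.

```python
# Minimal sketch (not the paper's implementation): fit h(x) = w^T phi(x) + b
# over Gaussian radial basis features from labeled safe/unsafe samples, with a
# safety bias realized as a heavier penalty on unsafe-side violations.
# All data, centers, widths, margins, and weights are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2D workspace: unsafe points inside a unit disk at the origin,
# safe points sampled away from it.
safe_pts = rng.uniform(-3.0, 3.0, size=(200, 2))
safe_pts = safe_pts[np.linalg.norm(safe_pts, axis=1) > 1.3]
unsafe_pts = rng.uniform(-1.0, 1.0, size=(200, 2))
unsafe_pts = unsafe_pts[np.linalg.norm(unsafe_pts, axis=1) < 1.0]

X = np.vstack([safe_pts, unsafe_pts])
y = np.hstack([np.ones(len(safe_pts)), -np.ones(len(unsafe_pts))])  # +1 safe, -1 unsafe

# Radial basis feature map: Gaussian bumps on a fixed grid of centers.
centers = np.stack(np.meshgrid(np.linspace(-3, 3, 10),
                               np.linspace(-3, 3, 10)), axis=-1).reshape(-1, 2)
sigma = 0.6

def rbf_features(x):
    d2 = np.sum((x[:, None, :] - centers[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

Phi = rbf_features(X)

# Safety-biased hinge objective: require h >= +m on safe samples and
# h <= -m on unsafe samples, weighting unsafe-side violations more heavily
# so the learned zero level set errs toward declaring states unsafe.
m, unsafe_weight, lam, lr = 0.1, 5.0, 1e-3, 0.05
w = np.zeros(Phi.shape[1])
b = 0.0

for _ in range(2000):
    h = Phi @ w + b
    viol = np.maximum(0.0, m - y * h)            # hinge violation per sample
    coef = np.where(y > 0, 1.0, unsafe_weight)   # heavier penalty on unsafe side
    active = (viol > 0).astype(float) * coef
    grad_w = -(Phi * (active * y)[:, None]).mean(axis=0) + lam * w
    grad_b = -(active * y).mean()
    w -= lr * grad_w
    b -= lr * grad_b

h_vals = Phi @ w + b
print("min h on safe samples:  ", h_vals[y > 0].min())
print("max h on unsafe samples:", h_vals[y < 0].max())
```

In a navigation pipeline, the learned h would then supply the inequality constraint of the convex program mentioned above, e.g., minimizing ||u - u_nom||^2 subject to ∇h(x)·(f(x) + g(x)u) + α(h(x)) ≥ 0 for a control-affine system dx/dt = f(x) + g(x)u.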
Related papers
- Neural Control Barrier Functions from Physics Informed Neural Networks [2.092779643426281]
This paper introduces a novel class of neural CBFs that leverages a physics-inspired neural network framework.
By utilizing reciprocal CBFs instead of zeroing CBFs, the proposed framework allows for the specification of flexible, user-defined safe regions.
arXiv Detail & Related papers (2025-04-15T10:13:30Z)
- Exact Verification of ReLU Neural Control Barrier Functions [25.44521208451216]
Control Barrier Functions (CBFs) are a popular approach for safe control of nonlinear systems.
Recent machine learning methods that represent CBFs as neural networks have shown great promise.
This paper presents novel exact conditions and algorithms for verifying safety of feedforward NCBFs with ReLU activation functions.
arXiv Detail & Related papers (2023-10-13T18:59:04Z)
- Approximate Shielding of Atari Agents for Safe Exploration [83.55437924143615]
We propose a principled algorithm for safe exploration based on the concept of shielding.
We present preliminary results that show our approximate shielding algorithm effectively reduces the rate of safety violations.
arXiv Detail & Related papers (2023-04-21T16:19:54Z)
- Evaluating Model-free Reinforcement Learning toward Safety-critical Tasks [70.76757529955577]
This paper revisits prior work in this area from the perspective of state-wise safe RL.
We propose Unrolling Safety Layer (USL), a joint method that combines safety optimization and safety projection.
To facilitate further research in this area, we reproduce related algorithms in a unified pipeline and incorporate them into SafeRL-Kit.
arXiv Detail & Related papers (2022-12-12T06:30:17Z)
- Recursively Feasible Probabilistic Safe Online Learning with Control Barrier Functions [60.26921219698514]
We introduce a model-uncertainty-aware reformulation of CBF-based safety-critical controllers.
We then present the pointwise feasibility conditions of the resulting safety controller.
We use these conditions to devise an event-triggered online data collection strategy.
arXiv Detail & Related papers (2022-08-23T05:02:09Z)
- Log Barriers for Safe Black-box Optimization with Application to Safe Reinforcement Learning [72.97229770329214]
We introduce a general approach for solving high-dimensional non-linear optimization problems in which maintaining safety during learning is crucial.
Our approach, called LBSGD, is based on applying a logarithmic barrier approximation with a carefully chosen step size.
We demonstrate the effectiveness of our approach on minimizing safety violations in policy tasks in safe reinforcement learning.
arXiv Detail & Related papers (2022-07-21T11:14:47Z)
- Safety Certification for Stochastic Systems via Neural Barrier Functions [3.7491936479803054]
Barrier functions can be used to provide non-trivial certificates of safety for non-linear systems.
We parameterize a barrier function as a neural network and show that robust training of neural networks can be successfully employed to find barrier functions.
We show that our approach outperforms existing methods in several case studies and often returns certificates of safety that are orders of magnitude larger.
arXiv Detail & Related papers (2022-06-03T09:06:02Z)
- Gaussian Control Barrier Functions: A Non-Parametric Paradigm to Safety [7.921648699199647]
We propose a non-parametric approach for online synthesis of CBFs using Gaussian Processes (GPs).
GPs have favorable properties, in addition to being non-parametric, such as analytical tractability and robust uncertainty estimation.
We validate our approach experimentally on a quadrotor by demonstrating safe control for fixed but arbitrary safe sets.
arXiv Detail & Related papers (2022-03-29T12:21:28Z)
- BarrierNet: A Safety-Guaranteed Layer for Neural Networks [50.86816322277293]
BarrierNet allows the safety constraints of a neural controller to adapt to changing environments.
We evaluate it on a series of control problems such as traffic merging and robot navigation in 2D and 3D space.
arXiv Detail & Related papers (2021-11-22T15:38:11Z)
- Constrained Feedforward Neural Network Training via Reachability Analysis [0.0]
It remains an open challenge to train a neural network to obey safety constraints.
This work proposes a constrained method to simultaneously train and verify a feedforward neural network with rectified linear unit (ReLU) nonlinearities.
arXiv Detail & Related papers (2021-07-16T04:03:01Z)
- Learning Control Barrier Functions from Expert Demonstrations [69.23675822701357]
We propose a learning-based approach to safe controller synthesis based on control barrier functions (CBFs).
We analyze an optimization-based approach to learning a CBF that enjoys provable safety guarantees under suitable Lipschitz assumptions on the underlying dynamical system.
To the best of our knowledge, these are the first results that learn provably safe control barrier functions from data.
arXiv Detail & Related papers (2020-04-07T12:29:06Z)
- Safety-Aware Hardening of 3D Object Detection Neural Network Systems [0.0]
We study how state-of-the-art neural networks for 3D object detection using a single-stage pipeline can be made safety-aware.
The concept is detailed by extending the state-of-the-art PIXOR detector, which creates object bounding boxes in bird's eye view with inputs from point clouds.
arXiv Detail & Related papers (2020-03-25T07:06:11Z)