Kernel-Based Learning of Safety Barriers
- URL: http://arxiv.org/abs/2601.12002v1
- Date: Sat, 17 Jan 2026 10:42:35 GMT
- Title: Kernel-Based Learning of Safety Barriers
- Authors: Oliver Schön, Zhengang Zhong, Sadegh Soudjani
- Abstract summary: Rapid integration of AI algorithms in safety-critical applications is raising concerns about the ability to meet stringent safety standards. Traditional tools for formal safety verification struggle with the black-box nature of AI-driven systems. We present a data-driven approach for safety verification and synthesis of black-box systems with discrete-time dynamics.
- Score: 0.9367224590861915
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The rapid integration of AI algorithms in safety-critical applications such as autonomous driving and healthcare is raising significant concerns about the ability to meet stringent safety standards. Traditional tools for formal safety verification struggle with the black-box nature of AI-driven systems and lack the flexibility needed to scale to the complexity of real-world applications. In this paper, we present a data-driven approach for safety verification and synthesis of black-box systems with discrete-time stochastic dynamics. We employ the concept of control barrier certificates, which can guarantee safety of the system, and learn the certificate directly from a set of system trajectories. We use conditional mean embeddings to embed data from the system into a reproducing kernel Hilbert space (RKHS) and construct an RKHS ambiguity set that can be inflated to robustify the result to out-of-distribution behavior. We provide theoretical results showing how to apply the approach to general classes of temporal logic specifications beyond safety. For the data-driven computation of safety barriers, we leverage a finite Fourier expansion to cast a typically intractable semi-infinite optimization problem as a linear program. The resulting spectral barrier allows us to leverage the fast Fourier transform to generate the relaxed problem efficiently, offering a scalable yet distributionally robust framework for verifying safety. Our work moves beyond restrictive assumptions on system dynamics and uncertainty, as demonstrated on two case studies including a black-box system with a neural network controller.
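To make the pipeline concrete, here is a minimal Python sketch (our illustration, not the authors' code) of the first ingredient: estimating the conditional mean embedding from one-step trajectory data and using it to check the expected-decrease condition of a candidate barrier at a query state. The kernel choice, regularization `lam`, and helper names (`rbf`, `cme_weights`) are assumptions; the paper additionally inflates an RKHS ambiguity set and solves a Fourier-relaxed linear program, which this sketch omits.

```python
# Minimal sketch (not the authors' code): empirical conditional mean
# embedding (CME) used to estimate E[B(x_{t+1}) | x_t = x] from one-step
# trajectory data. All names (rbf, cme_weights, ...) are illustrative.
import numpy as np

def rbf(A, B, lengthscale=1.0):
    """Gaussian kernel Gram matrix between row-stacked points A and B."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2.0 * lengthscale ** 2))

def cme_weights(X, x_query, lam=1e-3):
    """Weights beta(x) = (K + n*lam*I)^{-1} k(X, x) of the empirical CME."""
    n = X.shape[0]
    K = rbf(X, X)
    return np.linalg.solve(K + n * lam * np.eye(n), rbf(X, x_query[None, :])).ravel()

# One-step data (x_t, x_{t+1}) sampled from an unknown stochastic system.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 1))
Y = 0.9 * X + 0.05 * rng.normal(size=X.shape)   # black-box dynamics, sampled

B = lambda s: (s ** 2).sum(-1)                  # candidate barrier B(x) = ||x||^2

x = np.array([0.5])
beta = cme_weights(X, x)
exp_B_next = beta @ B(Y)                        # CME estimate of E[B(x+)|x]
print("E[B(x+)|x] - B(x) ≈", exp_B_next - B(x)) # <= 0 suggests the barrier
                                                # condition holds at x (up to
                                                # the RKHS ambiguity radius)
```

In the paper, the candidate barrier would itself be parameterized by a finite Fourier expansion, turning the search over barriers into a linear program whose constraints can be generated efficiently via the FFT.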
Related papers
- BarrierSteer: LLM Safety via Learning Barrier Steering [83.12893815611052]
BarrierSteer is a novel framework that formalizes safety by embedding learned non-linear safety constraints directly into the model's latent representation space.
We show that BarrierSteer substantially reduces adversarial success rates, decreases unsafe generations, and outperforms existing methods.
arXiv Detail & Related papers (2026-02-23T18:19:46Z)
- Safe Reinforcement Learning via Recovery-based Shielding with Gaussian Process Dynamics Models [57.006252510102506]
Reinforcement learning (RL) is a powerful framework for optimal decision-making and control but often lacks provable guarantees for safety-critical applications.
We introduce a novel recovery-based shielding framework that enables safe RL with a provable safety lower bound for unknown and non-linear continuous dynamical systems.
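A hedged sketch of the shielding idea as we read it from this summary (not the paper's algorithm; the dynamics, safe set, and confidence scaling `beta` are assumptions): a GP fitted to (state, action, next-state) data certifies an action only if its prediction plus an uncertainty margin stays inside the safe set, and otherwise a recovery action is applied.

```python
# Hedged sketch of a recovery-based shield (our illustration, not the
# paper's code): a GP model of the unknown dynamics predicts the next
# state; if the prediction plus a confidence margin leaves the safe set,
# the shield overrides the RL action with a recovery action.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

SAFE_MAX = 1.0                                     # safe set: |x| <= SAFE_MAX

rng = np.random.default_rng(1)
XU = rng.uniform(-1, 1, size=(100, 2))             # (state, action) pairs
Xn = 0.8 * XU[:, 0] + 0.3 * XU[:, 1]               # observed next states
gp = GaussianProcessRegressor(RBF(1.0), alpha=1e-4).fit(XU, Xn)

def shielded_action(x, rl_action, recovery_action=-0.5, beta=2.0):
    """Return rl_action if the GP certifies safety, else the recovery action."""
    mean, std = gp.predict(np.array([[x, rl_action]]), return_std=True)
    if abs(mean[0]) + beta * std[0] <= SAFE_MAX:   # high-confidence safe step
        return rl_action
    return recovery_action

print(shielded_action(0.9, rl_action=0.8))
```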
arXiv Detail & Related papers (2026-02-12T22:03:35Z)
- LUCID: Learning-Enabled Uncertainty-Aware Certification of Stochastic Dynamical Systems [0.8574682463936006]
Traditional formal verification tools fall short when faced with systems that embed both opaque black-box AI components and complex dynamics.
We introduce LUCID, a verification engine for certifying safety of dynamical systems that embed black-box components.
LUCID is the first known tool capable of establishing quantified safety guarantees for such systems.
arXiv Detail & Related papers (2025-12-12T17:46:50Z)
- Towards provable probabilistic safety for scalable embodied AI systems [79.31011047593492]
Embodied AI systems are increasingly prevalent across various applications.
Ensuring their safety in complex operating environments remains a major challenge.
This Perspective offers a pathway toward safer, large-scale adoption of embodied AI systems in safety-critical applications.
arXiv Detail & Related papers (2025-06-05T15:46:25Z)
- Safely Learning Controlled Stochastic Dynamics [61.82896036131116]
We introduce a method that ensures safe exploration and efficient estimation of system dynamics.
After training, the learned model enables predictions of the system's dynamics and permits safety verification of any given control.
We provide theoretical guarantees for safety and derive adaptive learning rates that improve with increasing Sobolev regularity of the true dynamics.
arXiv Detail & Related papers (2025-06-03T11:17:07Z)
- Data-Driven Distributionally Robust Safety Verification Using Barrier Certificates and Conditional Mean Embeddings [0.24578723416255752]
In pursuit of scalable formal verification algorithms that do not shift the problem onto unrealistic assumptions, we employ the concept of barrier certificates.
We show how to solve the resulting program efficiently using sum-of-squares optimization and a Gaussian process envelope.
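For readers unfamiliar with sum-of-squares optimization, a minimal stand-alone sketch (generic SOS feasibility, not the paper's program with its Gaussian process envelope): certify that a univariate quartic is a sum of squares by finding a positive semidefinite Gram matrix with cvxpy.

```python
# Minimal sum-of-squares (SOS) sketch with cvxpy (an illustrative stand-in
# for the paper's SOS program, not its actual formulation): certify
# p(x) = z(x)^T Q z(x) with Q >= 0 and monomial basis z(x) = [1, x, x^2].
import cvxpy as cp

c = [4.0, -4.0, 7.0, -4.0, 4.0]       # p(x) = 4 - 4x + 7x^2 - 4x^3 + 4x^4
Q = cp.Variable((3, 3), PSD=True)

constraints = [
    Q[0, 0] == c[0],                  # constant term
    2 * Q[0, 1] == c[1],              # coefficient of x
    2 * Q[0, 2] + Q[1, 1] == c[2],    # coefficient of x^2
    2 * Q[1, 2] == c[3],              # coefficient of x^3
    Q[2, 2] == c[4],                  # coefficient of x^4
]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()
print("SOS certificate found:", prob.status == cp.OPTIMAL)
```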
arXiv Detail & Related papers (2024-03-15T17:32:02Z)
- Recursively Feasible Probabilistic Safe Online Learning with Control Barrier Functions [60.26921219698514]
We introduce a model-uncertainty-aware reformulation of CBF-based safety-critical controllers.
We then present the pointwise feasibility conditions of the resulting safety controller.
We use these conditions to devise an event-triggered online data collection strategy.
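A minimal sketch of the standard CBF safety-filter construction this line of work builds on (our illustration; the single-integrator dynamics, safe set, and class-K gain `alpha` are assumptions, and the paper's model-uncertainty-aware reformulation adds terms this omits):

```python
# Hedged sketch of a CBF quadratic-program safety filter (generic
# construction, not the paper's exact controller): minimally modify a
# nominal input u_nom so the barrier condition h_dot >= -alpha*h holds
# for single-integrator dynamics x_dot = u.
import numpy as np
import cvxpy as cp

alpha = 1.0
x = np.array([0.8, 0.0])
h = 1.0 - x @ x                        # CBF: safe set {x : ||x||^2 <= 1}
grad_h = -2.0 * x

u = cp.Variable(2)
u_nom = np.array([1.0, 0.0])           # nominal controller pushes outward
prob = cp.Problem(cp.Minimize(cp.sum_squares(u - u_nom)),
                  [grad_h @ u >= -alpha * h])   # pointwise CBF condition
prob.solve()
print("filtered input:", u.value)      # feasible whenever grad_h != 0
```

The pointwise feasibility conditions studied in the paper characterize exactly when such a QP admits a solution, which is what makes an event-triggered data collection strategy possible.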
arXiv Detail & Related papers (2022-08-23T05:02:09Z)
- Log Barriers for Safe Black-box Optimization with Application to Safe Reinforcement Learning [72.97229770329214]
We introduce a general approach for solving high-dimensional non-linear optimization problems in which maintaining safety during learning is crucial.
Our approach, called LBSGD, is based on applying a logarithmic barrier approximation with a carefully chosen step size.
We demonstrate the effectiveness of our approach on minimizing safety violations in policy optimization tasks in safe reinforcement learning.
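A toy log-barrier descent loop in the spirit of LBSGD (our sketch; the paper's step-size rule and analysis are more careful, and `eta`, the cap 0.05, and the feasibility margin are assumptions): minimize f(x) subject to g(x) <= 0 by descending f(x) - eta*log(-g(x)) with steps that keep the iterate strictly feasible.

```python
# Illustrative log-barrier gradient descent (our sketch, not LBSGD itself):
# the barrier term blows up near the constraint boundary, and the step size
# is capped so each iterate stays strictly inside the feasible set.
import numpy as np

f  = lambda x: (x - 2.0) ** 2          # objective
df = lambda x: 2.0 * (x - 2.0)
g  = lambda x: x - 1.0                 # safety constraint: x <= 1
dg = lambda x: 1.0

x, eta = 0.0, 0.1
for _ in range(200):
    grad = df(x) - eta * dg(x) / g(x)  # gradient of f(x) - eta*log(-g(x))
    step = min(0.05, 0.5 * (-g(x)) / (abs(grad) + 1e-9))  # stay feasible
    x -= step * grad
print("x ≈", x, "(constraint x <= 1 respected throughout)")
```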
arXiv Detail & Related papers (2022-07-21T11:14:47Z)
- Safety Certification for Stochastic Systems via Neural Barrier Functions [3.7491936479803054]
Barrier functions can be used to provide non-trivial certificates of safety for non-linear systems.
We parameterize a barrier function as a neural network and show that robust training of neural networks can be successfully employed to find barrier functions.
We show that our approach outperforms existing methods in several case studies and often returns certificates of safety that are orders of magnitude larger.
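A hedged sketch of the generic neural-barrier training recipe (not the paper's certified procedure, which additionally verifies the trained network robustly; the sets, dynamics, and loss weights are assumptions): fit a small network so hinge losses enforce B <= 0 on initial states, B >= 1 on unsafe states, and non-increase along sampled transitions.

```python
# Hedged sketch of neural barrier training (a generic recipe, not the
# paper's certified procedure): hinge losses push B_theta toward the
# three standard barrier conditions on sampled data.
import torch

net = torch.nn.Sequential(torch.nn.Linear(1, 16), torch.nn.Tanh(),
                          torch.nn.Linear(16, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)

x_init   = torch.rand(64, 1) * 0.2           # initial set [0, 0.2]
x_unsafe = torch.rand(64, 1) * 0.2 + 1.8     # unsafe set [1.8, 2.0]
x_t      = torch.rand(256, 1) * 2.0
x_next   = 0.9 * x_t                         # sampled (stable) dynamics

for _ in range(500):
    loss = (torch.relu(net(x_init)).mean()                # B <= 0 on init
            + torch.relu(1.0 - net(x_unsafe)).mean()      # B >= 1 on unsafe
            + torch.relu(net(x_next) - net(x_t)).mean())  # decrease condition
    opt.zero_grad(); loss.backward(); opt.step()
print("final loss:", loss.item())
```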
arXiv Detail & Related papers (2022-06-03T09:06:02Z)