Formal Verification of Neural Certificates Done Dynamically
- URL: http://arxiv.org/abs/2507.11987v1
- Date: Wed, 16 Jul 2025 07:37:23 GMT
- Title: Formal Verification of Neural Certificates Done Dynamically
- Authors: Thomas A. Henzinger, Konstantin Kueffner, Emily Yu
- Abstract summary: We propose a lightweight runtime monitoring framework that integrates real-time verification and does not require access to the underlying control policy. Our approach enables timely detection of safety violations and incorrect certificates with minimal overhead.
- Score: 7.146556437126553
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neural certificates have emerged as a powerful tool in cyber-physical systems control, providing witnesses of correctness. These certificates, such as barrier functions, are often learned alongside control policies; once verified, they serve as mathematical proofs of system safety. However, traditional formal verification of their defining conditions typically faces scalability challenges due to exhaustive state-space exploration. To address this challenge, we propose a lightweight runtime monitoring framework that integrates real-time verification and does not require access to the underlying control policy. Our monitor observes the system during deployment and performs on-the-fly verification of the certificate over a lookahead region to ensure safety within a finite prediction horizon. We instantiate this framework for ReLU-based control barrier functions and demonstrate its practical effectiveness in a case study. Our approach enables timely detection of safety violations and incorrect certificates with minimal overhead, providing an effective but lightweight alternative to the static verification of the certificates.
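The monitoring idea in the abstract can be illustrated with a minimal sketch: a barrier function expressed with ReLU units is checked over a finite lookahead of predicted states, and the monitor raises an alert if the certificate fails anywhere on that horizon. Everything below (the 1-D dynamics, the hand-coded barrier, the function names) is an illustrative assumption, not the paper's actual construction.

```python
def relu(z):
    """Rectified linear unit, the activation assumed by the certificate."""
    return max(0.0, z)

def barrier(x):
    # A tiny hand-coded ReLU barrier: B(x) = 1 - relu(x) - relu(-x),
    # i.e. B(x) >= 0 exactly on the safe set [-1, 1].
    return 1.0 - relu(x) - relu(-x)

def monitor_step(x, drift, horizon):
    """On-the-fly check of the certificate over a finite lookahead:
    B must stay nonnegative along the predicted trajectory."""
    for _ in range(horizon):
        if barrier(x) < 0.0:
            return False  # violation detected within the prediction horizon
        x = x + drift  # simple assumed prediction of the next state
    return True

# Well inside the safe set: the certificate holds over the horizon.
assert monitor_step(0.0, 0.05, 5)
# Near the boundary with outward drift: the monitor flags a violation.
assert not monitor_step(0.9, 0.05, 5)
```

Note that, as in the paper's setting, the monitor never inspects the control policy itself; it only observes states and evaluates the certificate on a lookahead region.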
Related papers
- Verification of Visual Controllers via Compositional Geometric Transformations [49.81690518952909]
We introduce a novel verification framework for perception-based controllers that can generate outer-approximations of reachable sets. We provide theoretical guarantees on the soundness of our method and demonstrate its effectiveness across benchmark control environments.
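The paper above composes geometric transformations; a much simpler member of the same family of sound outer-approximation techniques is interval bound propagation through a network's layers. The snippet below is a generic illustration of that idea, not the paper's method; the functions and constants are hypothetical.

```python
def affine_interval(lo, hi, w, b):
    # Sound interval image of [lo, hi] under y = w*x + b (monotone in x,
    # direction depending on the sign of w).
    a, c = w * lo + b, w * hi + b
    return (min(a, c), max(a, c))

def relu_interval(lo, hi):
    # relu is monotone, so clamping the endpoints is sound and exact.
    return (max(0.0, lo), max(0.0, hi))

# Outer-approximate the image of the input set [-1, 1] under relu(2x - 1).
lo, hi = affine_interval(-1.0, 1.0, 2.0, -1.0)  # pre-activation bounds
lo, hi = relu_interval(lo, hi)                  # post-activation bounds
```

Composing such per-layer bounds yields a sound (if conservative) over-approximation of a network's reachable outputs.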
arXiv Detail & Related papers (2025-07-06T20:22:58Z) - Learning Verifiable Control Policies Using Relaxed Verification [49.81690518952909]
This work proposes to perform verification throughout training to aim for policies whose properties can be evaluated at runtime. The approach is to use differentiable reachability analysis and incorporate new components into the loss function.
arXiv Detail & Related papers (2025-04-23T16:54:35Z) - Formal Verification of Permission Voucher [1.4732811715354452]
The Permission Voucher Protocol is a system designed for secure and authenticated access control in distributed environments. The analysis employs the Tamarin Prover, a state-of-the-art tool for symbolic verification, to evaluate key security properties. Results confirm the protocol's robustness against common attacks such as message tampering, impersonation, and replay.
arXiv Detail & Related papers (2024-12-18T14:11:50Z) - Formally Verifying Deep Reinforcement Learning Controllers with Lyapunov Barrier Certificates [2.0611129932396164]
We present a novel method for training and verifying NLB-based certificates for discrete-time systems.
Specifically, we introduce a technique for certificate composition, which simplifies the verification of highly-complex systems.
We demonstrate the merits of our approach with a case study on providing safety and liveness guarantees for a DRL-controlled spacecraft.
arXiv Detail & Related papers (2024-05-22T23:06:34Z) - Transfer of Safety Controllers Through Learning Deep Inverse Dynamics Model [4.7962647777554634]
Control barrier certificates have proven effective in formally guaranteeing the safety of the control systems.
Design of a control barrier certificate is a time-consuming and computationally expensive endeavor.
We propose a validity condition that, when met, guarantees correctness of the controller.
arXiv Detail & Related papers (2024-05-22T15:28:43Z) - Safe Online Dynamics Learning with Initially Unknown Models and Infeasible Safety Certificates [45.72598064481916]
This paper considers a learning-based setting with a robust safety certificate based on a control barrier function (CBF) second-order cone program.
If the control barrier function certificate is feasible, our approach leverages it to guarantee safety. Otherwise, our method explores the system dynamics to collect data and recover the feasibility of the control barrier function constraint.
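The feasibility check described above can be sketched for the simplest case: with control-affine dynamics and a scalar, box-bounded input, the standard CBF condition dB/dt >= -alpha * B(x) is affine in u, so feasibility reduces to checking the constraint at the best endpoint. This is a textbook simplification of the paper's second-order cone program; all names and parameters are illustrative.

```python
def cbf_feasible(dBdx, f, g, B, alpha, u_min, u_max):
    """Pointwise feasibility of the CBF constraint
        dBdx * (f + g*u) >= -alpha * B
    over a scalar input u in [u_min, u_max]. Since the left-hand side
    is affine in u, it suffices to test the maximizing endpoint."""
    best_u = u_max if dBdx * g >= 0 else u_min
    return dBdx * (f + g * best_u) >= -alpha * B

# Feasible: enough control authority to keep B nonincreasing too fast.
assert cbf_feasible(dBdx=1.0, f=-0.5, g=1.0, B=0.2, alpha=1.0,
                    u_min=-1.0, u_max=1.0)
# Infeasible: strong outward drift exceeds the admissible input range.
assert not cbf_feasible(dBdx=1.0, f=-5.0, g=1.0, B=0.2, alpha=1.0,
                        u_min=-1.0, u_max=1.0)
```

In the infeasible case, the approach described above would switch to collecting data to reduce model uncertainty and recover feasibility, rather than simply halting.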
arXiv Detail & Related papers (2023-11-03T14:23:57Z) - A General Framework for Verification and Control of Dynamical Models via Certificate Synthesis [54.959571890098786]
We provide a framework to encode system specifications and define corresponding certificates.
We present an automated approach to formally synthesise controllers and certificates.
Our approach contributes to the broad field of safe learning for control, exploiting the flexibility of neural networks.
arXiv Detail & Related papers (2023-09-12T09:37:26Z) - Value Functions are Control Barrier Functions: Verification of Safe Policies using Control Theory [46.85103495283037]
We propose a new approach to apply verification methods from control theory to learned value functions.
We formalize original theorems that establish links between value functions and control barrier functions.
Our work marks a significant step towards a formal framework for the general, scalable, and verifiable design of RL-based control systems.
arXiv Detail & Related papers (2023-06-06T21:41:31Z) - FI-ODE: Certifiably Robust Forward Invariance in Neural ODEs [34.762005448725226]
We propose a general framework for training and provably certifying robust forward invariance in Neural ODEs.
We apply this framework to provide certified safety in robust continuous control.
In addition, we explore the generality of our framework by using it to certify adversarial robustness for image classification.
arXiv Detail & Related papers (2022-10-30T20:30:19Z) - Recursively Feasible Probabilistic Safe Online Learning with Control Barrier Functions [60.26921219698514]
We introduce a model-uncertainty-aware reformulation of CBF-based safety-critical controllers.
We then present the pointwise feasibility conditions of the resulting safety controller.
We use these conditions to devise an event-triggered online data collection strategy.
arXiv Detail & Related papers (2022-08-23T05:02:09Z) - Joint Differentiable Optimization and Verification for Certified Reinforcement Learning [91.93635157885055]
In model-based reinforcement learning for safety-critical control systems, it is important to formally certify system properties.
We propose a framework that jointly conducts reinforcement learning and formal verification.
arXiv Detail & Related papers (2022-01-28T16:53:56Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.