A General Framework for Verification and Control of Dynamical Models via Certificate Synthesis
- URL: http://arxiv.org/abs/2309.06090v3
- Date: Mon, 28 Oct 2024 13:50:50 GMT
- Title: A General Framework for Verification and Control of Dynamical Models via Certificate Synthesis
- Authors: Alec Edwards, Andrea Peruffo, Alessandro Abate
- Abstract summary: We provide a framework to encode system specifications and define corresponding certificates.
We present an automated approach to formally synthesise controllers and certificates.
Our approach contributes to the broad field of safe learning for control, exploiting the flexibility of neural networks.
- Abstract: An emerging branch of control theory specialises in certificate learning, concerning the specification of a desired (possibly complex) system behaviour for an autonomous or control model, which is then analytically verified by means of a function-based proof. However, the synthesis of controllers abiding by these complex requirements is in general a non-trivial task and may elude the most expert control engineers. This results in a need for automatic techniques that are able to design controllers and to analyse a wide range of elaborate specifications. In this paper, we provide a general framework to encode system specifications and define corresponding certificates, and we present an automated approach to formally synthesise controllers and certificates. Our approach contributes to the broad field of safe learning for control, exploiting the flexibility of neural networks to provide candidate control and certificate functions, whilst using SMT-solvers to offer a formal guarantee of correctness. We test our framework by developing a prototype software tool, and assess its efficacy at verification via control and certificate synthesis over a large and varied suite of benchmarks.
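The abstract describes a counterexample-guided loop: a learner proposes candidate control and certificate functions, and a verifier (an SMT solver in the paper) either confirms the certificate conditions or returns a counterexample that refines the candidate. A minimal sketch of that loop, using a toy 2-D stable system, a quadratic Lyapunov-style candidate, and a sampling-based verifier as a stand-in for the SMT check (all names and the update rule here are illustrative assumptions, not the paper's implementation):

```python
import random

# Toy autonomous system: x' = -x, y' = -2*y (globally asymptotically stable).
def f(x, y):
    return (-x, -2.0 * y)

# Candidate Lyapunov certificate V(x, y) = c1*x^2 + c2*y^2 with learnable c1, c2.
def V(c, x, y):
    return c[0] * x * x + c[1] * y * y

def Vdot(c, x, y):
    # dV/dt = (dV/dx) * x' + (dV/dy) * y'
    fx, fy = f(x, y)
    return 2.0 * c[0] * x * fx + 2.0 * c[1] * y * fy

def find_counterexample(c, n=2000, seed=0):
    """Sampling stand-in for the SMT verifier: search the domain for a state
    violating V > 0 or dV/dt < 0 (away from the origin)."""
    rng = random.Random(seed)
    for _ in range(n):
        x = rng.uniform(-1.0, 1.0)
        y = rng.uniform(-1.0, 1.0)
        if x * x + y * y < 1e-3:
            continue  # exclude a small ball around the equilibrium
        if V(c, x, y) <= 0 or Vdot(c, x, y) >= 0:
            return (x, y)
    return None  # no violation found: accept the candidate

def synthesise(c=(1.0, -0.5), lr=0.5, max_iters=100):
    """CEGIS-style loop: the 'learner' nudges coefficients to repair each
    counterexample the 'verifier' returns."""
    c = list(c)
    for i in range(max_iters):
        cex = find_counterexample(c, seed=i)
        if cex is None:
            return c  # candidate passed the check: certificate found
        x, y = cex
        if V(c, x, y) <= 0:
            # Positivity violated: grow the coefficients along the violation.
            c[0] += lr * x * x
            c[1] += lr * y * y
        else:
            # Decrease condition violated: stiffen the candidate uniformly.
            c[0] += lr
            c[1] += lr
    raise RuntimeError("no certificate found within iteration budget")

coeffs = synthesise()
print(coeffs)  # both coefficients end up positive for this stable system
```

In the paper's actual tool the candidate functions are neural networks and the verifier is an SMT solver giving a formal (not sampled) guarantee; the loop structure above is the shared skeleton.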
Related papers
- Using Formal Models, Safety Shields and Certified Control to Validate AI-Based Train Systems [0.5249805590164903]
The KI-LOK project explores new methods for certifying and safely integrating AI components into autonomous trains.
We pursue a two-layered approach: (1) ensuring the safety of the steering system by formal analysis using the B method, and (2) improving the reliability of the perception system with a runtime certificate checker.
This work links both strategies within a demonstrator that runs simulations on the formal model, controlled by the real AI output and the real certificate checker.
arXiv Detail & Related papers (2024-11-21T18:09:04Z)
- Transfer of Safety Controllers Through Learning Deep Inverse Dynamics Model [4.7962647777554634]
Control barrier certificates have proven effective in formally guaranteeing the safety of the control systems.
Design of a control barrier certificate is a time-consuming and computationally expensive endeavor.
We propose a validity condition that, when met, guarantees correctness of the controller.
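A control barrier certificate guarantees safety by keeping a function h non-negative along trajectories, typically via a condition of the form h'(x) >= -alpha * h(x). A hypothetical sampled check of that condition (a sketch only; the paper's validity condition for transferred controllers is not reproduced here):

```python
# Check the barrier condition h'(x) >= -alpha * h(x) on sampled states.
# This is an illustrative sanity check, not a formal proof.
def barrier_ok(h, hdot, states, alpha=1.0):
    return all(hdot(x) >= -alpha * h(x) for x in states)

# Example: 1-D system x' = -x with safe set {x : h(x) >= 0}, h(x) = 1 - x^2.
h = lambda x: 1.0 - x * x
hdot = lambda x: -2.0 * x * (-x)  # dh/dx * f(x) = 2*x^2, always >= 0
states = [i / 10.0 for i in range(-9, 10)]
print(barrier_ok(h, hdot, states))  # True: the safe set is invariant
```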
arXiv Detail & Related papers (2024-05-22T15:28:43Z)
- Fine-Tuning Language Models Using Formal Methods Feedback [53.24085794087253]
We present a fully automated approach to fine-tune pre-trained language models for applications in autonomous systems.
The method synthesizes automaton-based controllers from pre-trained models guided by natural language task descriptions.
The results indicate an improvement in percentage of specifications satisfied by the controller from 60% to 90%.
arXiv Detail & Related papers (2023-10-27T16:24:24Z)
- Value Functions are Control Barrier Functions: Verification of Safe Policies using Control Theory [46.85103495283037]
We propose a new approach to apply verification methods from control theory to learned value functions.
We formalize original theorems that establish links between value functions and control barrier functions.
Our work marks a significant step towards a formal framework for the general, scalable, and verifiable design of RL-based control systems.
arXiv Detail & Related papers (2023-06-06T21:41:31Z)
- Joint Differentiable Optimization and Verification for Certified Reinforcement Learning [91.93635157885055]
In model-based reinforcement learning for safety-critical control systems, it is important to formally certify system properties.
We propose a framework that jointly conducts reinforcement learning and formal verification.
arXiv Detail & Related papers (2022-01-28T16:53:56Z)
- Sparsity in Partially Controllable Linear Systems [56.142264865866636]
We study partially controllable linear dynamical systems specified by an underlying sparsity pattern.
Our results characterize those state variables which are irrelevant for optimal control.
arXiv Detail & Related papers (2021-10-12T16:41:47Z)
- Safety Verification of Model Based Reinforcement Learning Controllers [7.407039316561176]
We present a novel safety verification framework for model-based RL controllers using reachable set analysis.
The proposed framework can efficiently handle models and controllers which are represented using neural networks.
arXiv Detail & Related papers (2020-10-21T03:35:28Z)
- Learning Stability Certificates from Data [19.381365606166725]
We develop algorithms for learning certificate functions only from trajectory data.
We convert such generalization error bounds into global stability guarantees.
We demonstrate empirically that certificates for complex dynamics can be efficiently learned.
arXiv Detail & Related papers (2020-08-13T14:58:42Z)
- Certified Reinforcement Learning with Logic Guidance [78.2286146954051]
We propose a model-free RL algorithm that enables the use of Linear Temporal Logic (LTL) to formulate a goal for unknown continuous-state/action Markov Decision Processes (MDPs).
The algorithm is guaranteed to synthesise a control policy whose traces satisfy the specification with maximal probability.
arXiv Detail & Related papers (2019-02-02T20:09:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.