Testing learning-enabled cyber-physical systems with Large-Language Models: A Formal Approach
- URL: http://arxiv.org/abs/2311.07377v3
- Date: Thu, 16 May 2024 04:25:13 GMT
- Title: Testing learning-enabled cyber-physical systems with Large-Language Models: A Formal Approach
- Authors: Xi Zheng, Aloysius K. Mok, Ruzica Piskac, Yong Jae Lee, Bhaskar Krishnamachari, Dakai Zhu, Oleg Sokolsky, Insup Lee
- Abstract summary: The integration of machine learning (ML) into cyber-physical systems (CPS) offers significant benefits.
Existing verification and validation techniques are often inadequate for these new paradigms.
We propose a roadmap to transition from foundational probabilistic testing to a more rigorous approach capable of delivering formal assurance.
- Score: 32.15663640443728
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The integration of machine learning (ML) into cyber-physical systems (CPS) offers significant benefits, including enhanced efficiency, predictive capabilities, real-time responsiveness, and the enabling of autonomous operations. This convergence has accelerated the development and deployment of a range of real-world applications, such as autonomous vehicles, delivery drones, service robots, and telemedicine procedures. However, the software development life cycle (SDLC) for AI-infused CPS diverges significantly from traditional approaches, featuring data and learning as two critical components. Existing verification and validation techniques are often inadequate for these new paradigms. In this study, we pinpoint the main challenges in ensuring formal safety for learning-enabled CPS. We begin by examining testing as the most pragmatic method for verification and validation, summarizing the current state-of-the-art methodologies. Recognizing the limitations in current testing approaches to provide formal safety guarantees, we propose a roadmap to transition from foundational probabilistic testing to a more rigorous approach capable of delivering formal assurance.
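As a concrete illustration of the "foundational probabilistic testing" the abstract starts from, below is a minimal sketch (not from the paper): a Monte Carlo harness that estimates the failure rate of a closed-loop simulation and attaches a Hoeffding-style confidence bound. The run_episode function is a hypothetical placeholder for a real CPS simulation.

```python
import math
import random

def run_episode(seed: int) -> bool:
    """Hypothetical placeholder for one closed-loop simulation of the
    learning-enabled CPS; returns True iff no safety violation occurs."""
    rng = random.Random(seed)
    return rng.random() > 0.01  # stand-in for real dynamics + ML stack

def estimate_failure_rate(n_trials: int = 10_000, delta: float = 0.01):
    """Monte Carlo failure-rate estimate with a Hoeffding bound:
    with probability >= 1 - delta the true rate lies within
    eps = sqrt(ln(2/delta) / (2 * n_trials)) of the estimate."""
    failures = sum(not run_episode(seed) for seed in range(n_trials))
    p_hat = failures / n_trials
    eps = math.sqrt(math.log(2.0 / delta) / (2.0 * n_trials))
    return p_hat, min(p_hat + eps, 1.0)

p_hat, p_upper = estimate_failure_rate()
print(f"estimated failure rate {p_hat:.4f}, upper bound {p_upper:.4f}")
```

Such bounds are exactly what the roadmap treats as the starting point: probabilistic evidence rather than formal assurance.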
Related papers
- Using Formal Models, Safety Shields and Certified Control to Validate AI-Based Train Systems [0.5249805590164903]
The KI-LOK project explores new methods for certifying and safely integrating AI components into autonomous trains.
We pursue a two-layered approach: (1) ensuring the safety of the steering system by formal analysis using the B method, and (2) improving the reliability of the perception system with a runtime certificate checker.
This work links both strategies within a demonstrator that runs simulations on the formal model, controlled by the real AI output and the real certificate checker.
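As a loose, hypothetical illustration of the runtime-checking pattern in layer (2), here is a sketch in which an independently checkable certificate gates the AI output and a shield falls back to a safe action. All names (Detection, certificate_check, "BRAKE") are invented for illustration; the project's actual checker is more sophisticated.

```python
from dataclasses import dataclass

@dataclass
class Detection:          # hypothetical perception output
    label: str
    confidence: float
    bbox: tuple           # (x, y, w, h) in image coordinates

def certificate_check(det: Detection, image_shape: tuple) -> bool:
    """Accept the AI output only if it passes simple, independently
    checkable well-formedness conditions (a stand-in certificate)."""
    h, w = image_shape
    x, y, bw, bh = det.bbox
    in_frame = x >= 0 and y >= 0 and x + bw <= w and y + bh <= h
    return in_frame and 0.5 <= det.confidence <= 1.0

def shielded_command(det: Detection, image_shape: tuple, nominal: str) -> str:
    """Safety shield: forward the nominal command only when the
    perception output is certified; otherwise fall back to braking."""
    return nominal if certificate_check(det, image_shape) else "BRAKE"

det = Detection("obstacle", 0.3, (10, 10, 50, 50))
print(shielded_command(det, (480, 640), "PROCEED"))  # -> "BRAKE"
```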
arXiv Detail & Related papers (2024-11-21T18:09:04Z)
- Self-consistent Validation for Machine Learning Electronic Structure [81.54661501506185]
The method integrates machine learning with self-consistent field methods to achieve both low validation cost and interpretability.
This, in turn, enables exploration of the model's ability with active learning and instills confidence in its integration into real-world studies.
arXiv Detail & Related papers (2024-02-15T18:41:35Z)
- Simulation-based Safety Assurance for an AVP System incorporating Learning-Enabled Components [0.6526824510982802]
Testing, verification, and validation of AD/ADAS safety-critical applications remain among the main challenges.
We explain the simulation-based development platform that is designed to verify and validate safety-critical learning-enabled systems.
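A minimal sketch of what a simulation-based V&V loop can look like, assuming a hypothetical simulate_avp model and a plain grid sweep over scenario parameters (the project's actual platform is far richer):

```python
import itertools

def simulate_avp(speed: float, pedestrian_gap: float) -> float:
    """Hypothetical closed-loop simulation of an automated valet parking
    run; returns the minimum clearance to any obstacle over the episode."""
    return pedestrian_gap - 0.1 * speed ** 2  # placeholder braking model

def sweep_scenarios(speeds, gaps, safe_margin: float = 0.5):
    """Grid sweep over scenario parameters; collect violating scenarios."""
    return [(v, g) for v, g in itertools.product(speeds, gaps)
            if simulate_avp(v, g) < safe_margin]

bad = sweep_scenarios(speeds=[2, 4, 6, 8], gaps=[1.0, 3.0, 5.0])
print(f"{len(bad)} violating scenarios: {bad}")
```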
arXiv Detail & Related papers (2023-09-28T09:00:31Z)
- Recursively Feasible Probabilistic Safe Online Learning with Control Barrier Functions [60.26921219698514]
We introduce a model-uncertainty-aware reformulation of CBF-based safety-critical controllers.
We then present the pointwise feasibility conditions of the resulting safety controller.
We use these conditions to devise an event-triggered online data collection strategy.
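For context, the basic (uncertainty-free) CBF safety filter that such reformulations build on has a closed form for single-integrator dynamics; a sketch under those simplifying assumptions (the paper's model-uncertainty-aware version is considerably more involved):

```python
import numpy as np

def cbf_safety_filter(x, u_nom, radius: float = 1.0, alpha: float = 1.0):
    """Basic CBF safety filter for single-integrator dynamics x' = u with
    barrier h(x) = radius**2 - ||x||**2 (the safe set is the disk).
    Solves  min ||u - u_nom||^2  s.t.  grad_h(x) . u + alpha*h(x) >= 0
    in closed form by projecting u_nom onto the constraint half-space."""
    x, u_nom = np.asarray(x, float), np.asarray(u_nom, float)
    h = radius ** 2 - x @ x
    grad_h = -2.0 * x
    slack = grad_h @ u_nom + alpha * h
    if slack >= 0.0:
        return u_nom                      # nominal input is already safe
    return u_nom - (slack / (grad_h @ grad_h)) * grad_h

# Near the boundary, an outward-pushing nominal command gets bent back.
print(cbf_safety_filter(x=[0.99, 0.0], u_nom=[1.0, 0.0]))  # ~[0.01, 0.0]
```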
arXiv Detail & Related papers (2022-08-23T05:02:09Z)
- Constrained Reinforcement Learning for Robotics via Scenario-Based Programming [64.07167316957533]
It is crucial to optimize the performance of DRL-based agents while providing guarantees about their behavior.
This paper presents a novel technique for incorporating domain-expert knowledge into a constrained DRL training loop.
Our experiments demonstrate that using our approach to leverage expert knowledge dramatically improves the safety and the performance of the agent.
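One common way to inject domain-expert knowledge into a DRL loop is to let scenario rules mask unsafe actions; a toy sketch of that pattern (the rule, states, and Q-values below are invented, and the paper's scenario-based programming is richer):

```python
def expert_rule_mask(state: dict, actions: list) -> list:
    """Hypothetical domain-expert rule: forbid moving forward when an
    obstacle is already inside the safety distance."""
    return [a for a in actions
            if not (state["obstacle_dist"] < 0.5 and a == "forward")]

def constrained_action(state: dict, actions: list, q_values: dict) -> str:
    """One decision in a constrained DRL loop: the agent may only choose
    among the actions the expert rules allow (a simple safety shield)."""
    allowed = expert_rule_mask(state, actions)
    return max(allowed, key=lambda a: q_values[a])

q = {"forward": 1.0, "left": 0.3, "right": 0.2, "stop": 0.1}
print(constrained_action({"obstacle_dist": 0.3}, list(q), q))  # -> "left"
```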
arXiv Detail & Related papers (2022-06-20T07:19:38Z)
- Joint Differentiable Optimization and Verification for Certified Reinforcement Learning [91.93635157885055]
In model-based reinforcement learning for safety-critical control systems, it is important to formally certify system properties.
We propose a framework that jointly conducts reinforcement learning and formal verification.
arXiv Detail & Related papers (2022-01-28T16:53:56Z)
- Assurance Monitoring of Learning Enabled Cyber-Physical Systems Using Inductive Conformal Prediction based on Distance Learning [2.66512000865131]
We propose an approach for assurance monitoring of learning-enabled Cyber-Physical Systems.
To allow real-time assurance monitoring, the approach employs distance learning to transform high-dimensional inputs into lower-dimensional embedding representations.
We demonstrate the approach on three data sets: a mobile robot following a wall, speaker recognition, and traffic sign recognition.
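A minimal sketch of the inductive-conformal-prediction machinery such monitoring relies on, using a distance-to-nearest-neighbour nonconformity score in a (here synthetic) embedding space; the learned embedding network itself is assumed to already exist:

```python
import numpy as np

def nonconformity(embedding, calib_embeddings, calib_labels, label):
    """Distance-based nonconformity: distance from the input's embedding
    to its nearest calibration neighbour with the candidate label."""
    same = calib_embeddings[calib_labels == label]
    return np.min(np.linalg.norm(same - embedding, axis=1))

def icp_p_value(embedding, label, calib_embeddings, calib_labels, calib_scores):
    """Inductive conformal p-value: fraction of calibration nonconformity
    scores at least as large as the test score (plus the test point).
    A small p-value flags an input the monitor should not trust."""
    score = nonconformity(embedding, calib_embeddings, calib_labels, label)
    return (np.sum(calib_scores >= score) + 1) / (len(calib_scores) + 1)

# Tiny synthetic calibration set in a 2-D embedding space.
rng = np.random.default_rng(0)
calib_emb = rng.normal(size=(100, 2)) + np.array([[2.0, 0.0]])
calib_lab = np.zeros(100, dtype=int)
calib_scores = np.array([
    nonconformity(e, np.delete(calib_emb, i, 0), np.delete(calib_lab, i), 0)
    for i, e in enumerate(calib_emb)])
print(icp_p_value(np.array([2.1, 0.1]), 0, calib_emb, calib_lab, calib_scores))
```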
arXiv Detail & Related papers (2021-10-07T00:21:45Z)
- Multi Agent System for Machine Learning Under Uncertainty in Cyber Physical Manufacturing System [78.60415450507706]
Recent advancements in predictive machine learning have led to its application in various manufacturing use cases.
Most research has focused on maximising predictive accuracy without addressing the associated uncertainty.
In this paper, we determine the sources of uncertainty in machine learning and establish the success criteria of a machine learning system to function well under uncertainty.
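One standard way to surface the epistemic uncertainty such a system must handle is ensemble disagreement; a toy sketch (the models and threshold below are invented, and the paper's multi-agent treatment goes much further):

```python
import numpy as np

def ensemble_predict(models, x):
    """Epistemic-uncertainty estimate from a model ensemble: the spread
    of member predictions signals when the ML output should be distrusted."""
    preds = np.array([m(x) for m in models])
    return preds.mean(), preds.std()

# Hypothetical ensemble of regressors for a manufacturing quality metric.
models = [lambda x, w=w: w * x for w in (0.9, 1.0, 1.1)]
mean, std = ensemble_predict(models, 10.0)
if std > 0.5:
    print(f"prediction {mean:.2f} rejected: uncertainty {std:.2f} too high")
else:
    print(f"prediction {mean:.2f} accepted (std {std:.2f})")
```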
arXiv Detail & Related papers (2021-07-28T10:28:05Z)
- Efficient falsification approach for autonomous vehicle validation using a parameter optimisation technique based on reinforcement learning [6.198523595657983]
The widescale deployment of Autonomous Vehicles (AV) appears to be imminent despite many safety challenges that are yet to be resolved.
Uncertainties in the behaviour of traffic participants and in the dynamic world trigger reactions in advanced autonomous systems.
This paper presents an efficient falsification method to evaluate the System Under Test.
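Falsification of this kind is typically framed as minimising a robustness measure of the safety specification over scenario parameters; a sketch using plain stochastic hill climbing in place of the paper's reinforcement-learning-based optimiser (the robustness surrogate is a made-up placeholder):

```python
import random

def robustness(params):
    """Hypothetical robustness of the safety spec for one simulated run:
    positive = safe, negative = violation (a falsifying input was found)."""
    cut_in_gap, braking_delay = params
    return cut_in_gap - 2.0 * braking_delay  # placeholder surrogate

def falsify(n_iters: int = 1000, step: float = 0.5, seed: int = 0):
    """Stochastic hill climbing over scenario parameters, minimising
    robustness until it goes negative."""
    rng = random.Random(seed)
    params = [rng.uniform(0, 10), rng.uniform(0, 2)]
    best = robustness(params)
    for _ in range(n_iters):
        cand = [max(p + rng.uniform(-step, step), 0.0) for p in params]
        r = robustness(cand)
        if r < best:
            params, best = cand, r
        if best < 0:
            break
    return params, best

print(falsify())  # falsifying scenario parameters and their robustness
```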
arXiv Detail & Related papers (2020-11-16T02:56:13Z)
- Evaluating the Safety of Deep Reinforcement Learning Models using Semi-Formal Verification [81.32981236437395]
We present a semi-formal verification approach for decision-making tasks based on interval analysis.
Our method obtains comparable results over standard benchmarks with respect to formal verifiers.
Our approach enables efficient evaluation of safety properties for decision-making models in practical applications.
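The core of interval-analysis verification is propagating an input box through the network to obtain sound output bounds; a minimal sketch for a toy ReLU network (weights invented for illustration):

```python
import numpy as np

def interval_forward(lo, hi, weights, biases):
    """Interval analysis for a ReLU network: propagate the input box
    [lo, hi] layer by layer, returning sound bounds on every output."""
    for i, (W, b) in enumerate(zip(weights, biases)):
        Wp, Wn = np.maximum(W, 0), np.minimum(W, 0)
        lo, hi = Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b
        if i < len(weights) - 1:           # ReLU on hidden layers only
            lo, hi = np.maximum(lo, 0), np.maximum(hi, 0)
    return lo, hi

# Toy 2-layer network: can the output ever drop below 0 on this input box?
W1, b1 = np.array([[1.0, -1.0], [0.5, 0.5]]), np.zeros(2)
W2, b2 = np.array([[1.0, 1.0]]), np.array([-0.2])
lo, hi = interval_forward(np.array([0.0, 0.0]), np.array([1.0, 1.0]),
                          [W1, W2], [b1, b2])
print(f"output bounds: [{lo[0]:.2f}, {hi[0]:.2f}]")  # [-0.20, 1.80]
```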
arXiv Detail & Related papers (2020-10-19T11:18:06Z)
- A Survey of Algorithms for Black-Box Safety Validation of Cyber-Physical Systems [30.638615396429536]
Motivated by the prevalence of safety-critical artificial intelligence, this work provides a survey of state-of-the-art safety validation techniques for CPS.
We present and discuss algorithms in the domains of optimization, path planning, reinforcement learning, and importance sampling.
A brief overview of safety-critical applications is given, including autonomous vehicles and aircraft collision avoidance systems.
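Of the surveyed algorithm families, importance sampling is the easiest to show in a few lines: sample from a proposal that makes failures common, then reweight by the likelihood ratio; a sketch with a made-up one-dimensional failure region:

```python
import math
import random

def is_failure(x: float) -> bool:
    """Hypothetical safety check: failure when the disturbance exceeds 4."""
    return x > 4.0

def importance_sampling_estimate(n: int = 100_000, shift: float = 4.0,
                                 seed: int = 0) -> float:
    """Estimate the rare failure probability P[X > 4] for X ~ N(0,1) by
    sampling from a shifted proposal N(shift, 1) and reweighting each
    failure by the density ratio p(x)/q(x); naive Monte Carlo with the
    same budget would see almost no failures."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(shift, 1.0)
        if is_failure(x):
            total += math.exp(-0.5 * x * x + 0.5 * (x - shift) ** 2)
    return total / n

print(f"failure probability ~= {importance_sampling_estimate():.2e}")
```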
arXiv Detail & Related papers (2020-05-06T17:31:51Z)