Formal Verification of End-to-End Learning in Cyber-Physical Systems:
Progress and Challenges
- URL: http://arxiv.org/abs/2006.09181v1
- Date: Mon, 15 Jun 2020 16:50:47 GMT
- Title: Formal Verification of End-to-End Learning in Cyber-Physical Systems:
Progress and Challenges
- Authors: Nathan Fulton, Nathan Hunt, Nghia Hoang, Subhro Das
- Abstract summary: It might be possible to ensure the safety of autonomous systems by constructing formal, computer-checked correctness proofs.
This paper identifies three assumptions underlying existing formal verification techniques.
It summarizes preliminary work toward improving the strength of evidence provided by formal verification.
- Score: 7.955313479061443
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Autonomous systems -- such as self-driving cars, autonomous drones, and
automated trains -- must come with strong safety guarantees. Over the past
decade, techniques based on formal methods have enjoyed some success in
providing strong correctness guarantees for large software systems including
operating system kernels, cryptographic protocols, and control software for
drones. These successes suggest it might be possible to ensure the safety of
autonomous systems by constructing formal, computer-checked correctness proofs.
This paper identifies three assumptions underlying existing formal verification
techniques, explains how each of these assumptions limits the applicability of
verification in autonomous systems, and summarizes preliminary work toward
improving the strength of evidence provided by formal verification.
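To make the verification target concrete, here is a minimal sketch of a Simplex-style runtime sandbox, one standard way to combine a proved fallback with an unverified learned controller. The 1-D vehicle model, constants, and stand-in policy below are illustrative assumptions, not the paper's artifact.

```python
# Simplex-style sandbox sketch: a learned controller is monitored against a
# safety check that an offline proof would justify; if the proposed action
# could violate the invariant, a verified fallback takes over.
# Toy 1-D model (an assumption): position p, velocity v, max braking B,
# control period T. Invariant: the car can always stop before the obstacle.

B, T = 5.0, 0.1            # braking limit and control period (assumed)
P_OBSTACLE = 100.0

def stopping_distance(v: float) -> float:
    return v * v / (2 * B)

def safe_after(p: float, v: float, a: float) -> bool:
    """Safety condition for one control period: after applying acceleration
    a for T seconds, the car can still brake to a stop before the obstacle."""
    v1 = max(0.0, v + a * T)
    p1 = p + v * T + 0.5 * a * T * T
    return p1 + stopping_distance(v1) < P_OBSTACLE

def learned_controller(p: float, v: float) -> float:
    return 2.0             # stand-in for an unverified neural policy

def sandboxed_control(p: float, v: float) -> float:
    a = learned_controller(p, v)
    return a if safe_after(p, v, a) else -B   # verified fallback: full brake

print(sandboxed_control(50.0, 10.0))  # safe: learned action 2.0 passes
print(sandboxed_control(90.0, 10.0))  # unsafe: fallback brakes at -5.0
```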
Related papers
- Towards Formal Fault Injection for Safety Assessment of Automated Systems [0.0]
This paper introduces formal fault injection, a fusion of formal methods and fault injection throughout the development lifecycle.
We advocate for a more cohesive approach by identifying five areas of mutual support between formal methods and fault injection.
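As a hedged illustration of the fault-injection half of that fusion, the sketch below injects two simple fault models into a sensed distance and checks a monitored safety property over a trace. The fault catalogue, controller, and thresholds are assumptions, not the paper's.

```python
import random

# Sketch of fault injection on a sensed distance: the controller sets speed
# from the *sensed* distance, but safety is judged against the *true* one.

def stuck_at(v):   return lambda r: v          # sensor freezes at v
def offset(bias):  return lambda r: r + bias   # systematic bias

def run_trace(fault, steps=100, seed=0):
    rng = random.Random(seed)
    violations = 0
    for _ in range(steps):
        true_dist = rng.uniform(5.0, 50.0)     # metres to obstacle
        sensed = fault(true_dist)
        speed = 10.0 if sensed > 20.0 else 2.0 # controller trusts the sensor
        if speed > 5.0 and true_dist < 20.0:   # monitored safety property
            violations += 1
    return violations

for name, f in [("none", lambda r: r), ("offset+15", offset(15.0)),
                ("stuck@40", stuck_at(40.0))]:
    print(name, run_trace(f))  # faulty sensors expose property violations
```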
arXiv Detail & Related papers (2023-11-16T11:34:18Z) - Testing learning-enabled cyber-physical systems with Large-Language Models: A Formal Approach [32.15663640443728]
The integration of machine learning (ML) into cyber-physical systems (CPS) offers significant benefits.
Existing verification and validation techniques are often inadequate for these new paradigms.
We propose a roadmap to transition from foundational probabilistic testing to a more rigorous approach capable of delivering formal assurance.
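A minimal sketch of the "foundational probabilistic testing" end of that roadmap, assuming a placeholder simulator: estimate a failure rate by Monte Carlo simulation and attach a Hoeffding-style confidence bound.

```python
import math, random

# Sketch: estimate a CPS failure probability by Monte Carlo simulation and
# bound it with Hoeffding's inequality. `simulate_once` is a stand-in for a
# real (e.g. learning-enabled) system under test.

def simulate_once(rng):
    return rng.random() < 0.02   # placeholder: fails on ~2% of scenarios

def failure_bound(n=10_000, delta=0.01, seed=0):
    rng = random.Random(seed)
    failures = sum(simulate_once(rng) for _ in range(n))
    p_hat = failures / n
    eps = math.sqrt(math.log(2 / delta) / (2 * n))  # Hoeffding half-width
    return p_hat, p_hat + eps    # point estimate, (1 - delta) upper bound

print(failure_bound())  # formal assurance would replace this statistical
                        # bound with a proof that holds for all scenarios
```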
arXiv Detail & Related papers (2023-11-13T14:56:14Z) - Formal Methods for Autonomous Systems [21.989467515686858]
Formal methods have played a key role in establishing the correctness of safety-critical systems.
The main building blocks of formal methods are models and specifications.
We consider correct-by-construction synthesis under various formulations.
arXiv Detail & Related papers (2023-11-02T14:18:43Z) - Simulation-based Safety Assurance for an AVP System incorporating
Learning-Enabled Components [0.6526824510982802]
Testing, verification, and validation of safety-critical AD/ADAS applications remain among the main challenges.
We explain the simulation-based development platform that is designed to verify and validate safety-critical learning-enabled systems.
arXiv Detail & Related papers (2023-09-28T09:00:31Z) - A General Framework for Verification and Control of Dynamical Models via Certificate Synthesis [54.959571890098786]
We provide a framework to encode system specifications and define corresponding certificates.
We present an automated approach to formally synthesise controllers and certificates.
Our approach contributes to the broad field of safe learning for control, exploiting the flexibility of neural networks.
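To illustrate what such a certificate encodes, the sketch below sample-checks barrier-certificate conditions for a toy linear system with a hand-picked candidate. Synthesis approaches like the paper's would instead learn the certificate (e.g. as a neural network) and discharge the conditions with a solver rather than by sampling; the system, sets, and candidate here are assumptions.

```python
import numpy as np

# Sketch: sample-check barrier-certificate conditions for a stable linear map
# x' = A x. Candidate B(x) = x^T P x - c with hand-picked P, c (assumptions).

A = np.array([[0.9, 0.1], [0.0, 0.8]])
P = np.eye(2)
c = 1.0
B = lambda x: x @ P @ x - c          # B <= 0: inside the certified safe set

rng = np.random.default_rng(0)
init   = rng.uniform(-0.5, 0.5, (1000, 2))   # initial set (box)
unsafe = rng.uniform(2.0, 3.0, (1000, 2))    # unsafe set (box)

ok_init   = all(B(x) <= 0 for x in init)          # init inside level set
ok_unsafe = all(B(x) > 0 for x in unsafe)         # unsafe outside level set
ok_step   = all(B(A @ x) <= B(x) for x in init)   # B non-increasing on step

print(ok_init, ok_unsafe, ok_step)   # all True: conditions hold on samples
```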
arXiv Detail & Related papers (2023-09-12T09:37:26Z) - When Authentication Is Not Enough: On the Security of Behavioral-Based Driver Authentication Systems [53.2306792009435]
We develop two lightweight driver authentication systems based on Random Forest and Recurrent Neural Network architectures.
We are the first to propose attacks against these systems by developing two novel evasion attacks, SMARTCAN and GANCAN.
Through our contributions, we aid practitioners in safely adopting these systems, help reduce car thefts, and enhance driver security.
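For orientation, a minimal sketch of a behavioural driver-authentication pipeline in this spirit, using synthetic placeholder features; the paper's dataset and its SMARTCAN/GANCAN attacks are not reproduced here.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Sketch: classify drivers from windows of driving features. The synthetic
# CAN-style features below are placeholders, not the paper's dataset.

rng = np.random.default_rng(0)
drivers, windows, n_feat = 3, 200, 8
X = np.vstack([rng.normal(loc=d, scale=1.0, size=(windows, n_feat))
               for d in range(drivers)])        # per-driver feature shift
y = np.repeat(np.arange(drivers), windows)

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(Xtr, ytr)
print("clean accuracy:", clf.score(Xte, yte))
# An evasion attack would perturb test windows so a thief is classified as
# the legitimate driver; that is the threat model the paper evaluates.
```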
arXiv Detail & Related papers (2023-06-09T14:33:26Z) - Lessons from Formally Verified Deployed Software Systems (Extended version) [65.69802414600832]
This article examines a range of projects, in various application areas, that have produced formally verified systems and deployed them for actual use.
It considers the technologies used, the form of verification applied, the results obtained, and the lessons that the software industry should draw regarding its ability to benefit from formal verification techniques and tools.
arXiv Detail & Related papers (2023-01-05T18:18:46Z) - Differentiable Control Barrier Functions for Vision-based End-to-End
Autonomous Driving [100.57791628642624]
We introduce a safety guaranteed learning framework for vision-based end-to-end autonomous driving.
We design a learning system equipped with differentiable control barrier functions (dCBFs) that is trained end-to-end by gradient descent.
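A minimal sketch of the dCBF training idea on a toy 1-D system (an assumption; the paper's vision-based architecture is not reproduced): the barrier condition enters the loss as a differentiable penalty, so safety shapes the policy through gradient descent.

```python
import torch

# Sketch: train a policy with a differentiable control-barrier-function (CBF)
# penalty. Toy dynamics x' = x + u*dt; barrier h(x) = 1 - x^2 >= 0 keeps
# |x| <= 1. Discrete CBF condition: h(x_next) - h(x) + alpha * h(x) >= 0.

dt, alpha = 0.1, 0.5
policy = torch.nn.Sequential(torch.nn.Linear(1, 16), torch.nn.Tanh(),
                             torch.nn.Linear(16, 1))
opt = torch.optim.Adam(policy.parameters(), lr=1e-2)

h = lambda x: 1.0 - x ** 2
for step in range(500):
    x = torch.rand(64, 1) * 1.8 - 0.9          # states sampled in (-0.9, 0.9)
    u = policy(x)
    x_next = x + u * dt
    task_loss = ((x_next - 0.5) ** 2).mean()   # toy objective: reach x = 0.5
    cbf_residual = h(x_next) - h(x) + alpha * h(x)
    barrier_loss = torch.relu(-cbf_residual).mean()  # penalise violations
    loss = task_loss + 10.0 * barrier_loss
    opt.zero_grad(); loss.backward(); opt.step()
```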
arXiv Detail & Related papers (2022-03-04T16:14:33Z) - A Review of Formal Methods applied to Machine Learning [0.6853165736531939]
We review state-of-the-art formal methods applied to the emerging field of the verification of machine learning systems.
We first recall established formal methods and their current use in an exemplar safety-critical field, avionic software.
We then provide a comprehensive and detailed review of the formal methods developed so far for machine learning, highlighting their strengths and limitations.
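One family of formal methods such reviews cover is shown below as a minimal sketch with illustrative weights: interval bound propagation (IBP), which computes sound output bounds for a box of perturbed inputs to a network.

```python
import numpy as np

# Sketch of interval bound propagation (IBP): push an input box through an
# affine layer and a ReLU to get sound output bounds. Weights are assumed.

def affine_bounds(lo, hi, W, b):
    mid, rad = (lo + hi) / 2, (hi - lo) / 2
    mid_out = W @ mid + b
    rad_out = np.abs(W) @ rad          # sound radius via |W|
    return mid_out - rad_out, mid_out + rad_out

def relu_bounds(lo, hi):
    return np.maximum(lo, 0), np.maximum(hi, 0)

W1, b1 = np.array([[1.0, -1.0], [0.5, 0.5]]), np.zeros(2)
lo, hi = np.array([-0.1, -0.1]), np.array([0.1, 0.1])  # perturbation box
lo, hi = relu_bounds(*affine_bounds(lo, hi, W1, b1))
print(lo, hi)   # every input in the box maps inside these output bounds
```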
arXiv Detail & Related papers (2021-04-06T12:48:17Z) - Trustworthy AI [75.99046162669997]
Brittleness to minor adversarial changes in the input data, limited ability to explain decisions, and bias in training data are some of the most prominent limitations.
We propose the tutorial on Trustworthy AI to address six critical issues in enhancing user and public trust in AI systems.
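A minimal sketch of the brittleness issue on an illustrative linear classifier: a fast-gradient-sign (FGSM-style) perturbation flips the decision with a small input change. Model and data are assumptions for illustration.

```python
import numpy as np

# Sketch of adversarial brittleness: for a linear score w@x + b, moving each
# input coordinate by eps against sign(w) shifts the score maximally.

w, b = np.array([1.0, -2.0]), 0.1
x = np.array([0.4, 0.1])
print("clean score:", w @ x + b)            # 0.3  -> class +1

eps = 0.2
x_adv = x - eps * np.sign(w)                # FGSM-style step against w
print("adversarial score:", w @ x_adv + b)  # -0.3 -> class flips to -1
```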
arXiv Detail & Related papers (2020-11-02T20:04:18Z) - Dos and Don'ts of Machine Learning in Computer Security [74.1816306998445]
Despite great potential, machine learning in security is prone to subtle pitfalls that undermine its performance.
We identify common pitfalls in the design, implementation, and evaluation of learning-based security systems.
We propose actionable recommendations to support researchers in avoiding or mitigating the pitfalls where possible.
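As one concrete example of such a pitfall and its mitigation, sketched with synthetic placeholder data: splitting security data temporally rather than randomly keeps the model from training on samples newer than its test set (data snooping).

```python
import numpy as np

# Sketch: temporal train/test split for security data. Timestamps and
# features are synthetic placeholders.

rng = np.random.default_rng(0)
n = 1000
t = np.sort(rng.uniform(0, 365, n))        # collection day of each sample
X = rng.normal(size=(n, 4))
y = rng.integers(0, 2, n)

cutoff = np.quantile(t, 0.7)               # train on first 70% of timeline
train, test = t <= cutoff, t > cutoff
Xtr, ytr, Xte, yte = X[train], y[train], X[test], y[test]
# A random split here would leak future samples (e.g. newer malware
# families) into training and inflate reported accuracy.
```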
arXiv Detail & Related papers (2020-10-19T13:09:31Z)
This list is automatically generated from the titles and abstracts of the papers listed on this site.