Towards Probability-based Safety Verification of Systems with Components from Machine Learning
- URL: http://arxiv.org/abs/2003.01155v2
- Date: Mon, 29 Jun 2020 18:45:48 GMT
- Title: Towards Probability-based Safety Verification of Systems with Components from Machine Learning
- Authors: Hermann Kaindl and Stefan Kramer
- Abstract summary: Safety verification of machine learning systems is currently thought to be infeasible or, at least, very hard.
We think that it requires taking into account specific properties of ML technology such as: (i) Most ML approaches are inductive, which is both their power and their source of error.
We propose verification based on probabilities of errors both estimated by controlled experiments and output by the inductively learned classifier itself.
- Score: 8.75682288556859
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine learning (ML) has recently created many new success stories. Hence,
there is a strong motivation to use ML technology in software-intensive
systems, including safety-critical systems. This raises the issue of safety
verification of ML-based systems, which is currently thought to be infeasible
or, at least, very hard. We think that it requires taking into account specific
properties of ML technology such as: (i) Most ML approaches are inductive,
which is both their power and their source of error. (ii) Neural networks (NN)
resulting from deep learning are at the current state of the art not
transparent. Consequently, there will always be errors remaining and, at least
for deep NNs (DNNs), verification of their internal structure is extremely
hard. In general, safety engineering cannot provide full guarantees that no
harm will ever occur. That is why probabilities are used, e.g., for specifying
a risk or a Tolerable Hazard Rate (THR). In this vision paper, we propose
verification based on probabilities of errors both estimated by controlled
experiments and output by the inductively learned classifier itself.
Generalization error bounds may propagate to the probabilities of a hazard,
which must not exceed a THR. As a result, the quantitatively determined bound
on the probability of a classification error of an ML component in a
safety-critical system contributes in a well-defined way to the latter's
overall safety verification.
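As a rough illustration of this idea (not taken from the paper), the following Python sketch derives an upper confidence bound on the misclassification probability from a controlled experiment via a one-sided Hoeffding bound, propagates it to a hazard rate under an assumed independence between errors and hazardous operating conditions, and compares the result with a THR. All function names, numeric values, and the conditional probability p_hazard_given_error are hypothetical.

```python
import math

def error_upper_bound(errors: int, n_trials: int, delta: float) -> float:
    """One-sided Hoeffding bound: with probability >= 1 - delta, the true
    misclassification probability is at most p_hat + sqrt(ln(1/delta)/(2n))."""
    p_hat = errors / n_trials
    return min(1.0, p_hat + math.sqrt(math.log(1.0 / delta) / (2.0 * n_trials)))

def hazard_rate_per_hour(p_error: float, p_hazard_given_error: float,
                         demands_per_hour: float) -> float:
    """Propagate the error bound to a hazard rate, assuming each demand is an
    independent classification and that a hazard requires both an error and
    a hazardous operating condition (illustrative independence assumption)."""
    return p_error * p_hazard_given_error * demands_per_hour

if __name__ == "__main__":
    # Hypothetical numbers: 12 errors observed in 100,000 controlled trials.
    p_err = error_upper_bound(errors=12, n_trials=100_000, delta=1e-3)
    rate = hazard_rate_per_hour(p_err, p_hazard_given_error=1e-4,
                                demands_per_hour=3600)
    THR = 1e-7  # tolerable hazard rate per hour (example value, not normative)
    print(f"error bound: {p_err:.2e}  hazard rate: {rate:.2e}/h  "
          f"THR met: {rate <= THR}")
```

With these illustrative figures the Hoeffding term dominates the empirical error rate, which hints at the sample sizes such experiment-based arguments would require; the paper leaves open which generalization bound and which propagation model are used in practice.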
Related papers
- SafeBench: A Safety Evaluation Framework for Multimodal Large Language Models [75.67623347512368]
We propose SafeBench, a comprehensive framework designed for conducting safety evaluations of MLLMs.
Our framework consists of a comprehensive harmful query dataset and an automated evaluation protocol.
Based on our framework, we conducted large-scale experiments on 15 widely-used open-source MLLMs and 6 commercial MLLMs.
arXiv Detail & Related papers (2024-10-24T17:14:40Z)
- Enhancing Trustworthiness in ML-Based Network Intrusion Detection with Uncertainty Quantification [0.0]
Intrusion Detection Systems (IDSs) are security devices designed to identify and mitigate attacks on modern networks.
Data-driven approaches based on Machine Learning (ML) have become increasingly popular for these classification tasks.
However, the ML models typically adopted for this purpose do not properly account for the uncertainty associated with their predictions.
arXiv Detail & Related papers (2023-09-05T13:52:41Z)
- Vulnerability of Machine Learning Approaches Applied in IoT-based Smart Grid: A Review [51.31851488650698]
Machine learning (ML) is increasingly used in the Internet-of-Things (IoT)-based smart grid.
Adversarial distortion injected into the power signal can greatly affect the system's normal control and operation.
It is therefore imperative to assess the vulnerability of ML-based smart grid applications (MLsgAPPs) deployed in safety-critical power systems.
arXiv Detail & Related papers (2023-08-30T03:29:26Z)
- Ensembling Uncertainty Measures to Improve Safety of Black-Box Classifiers [3.130722489512822]
SPROUT is a Safety wraPper thROugh ensembles of UncertainTy measures.
It suspects misclassifications by computing uncertainty measures on the inputs and outputs of a black-box classifier.
The resulting impact on safety is that SPROUT transforms erratic outputs (misclassifications) into data omission failures.
arXiv Detail & Related papers (2023-08-23T11:24:28Z)
- Identifying the Hazard Boundary of ML-enabled Autonomous Systems Using Cooperative Co-Evolutionary Search [9.511076358998073]
It is essential to identify the hazard boundary of ML Components (MLCs) in Machine-Learning-enabled autonomous systems under analysis.
We propose MLCSHE, a novel method based on a Cooperative Co-Evolutionary Algorithm (CCEA).
We evaluate the effectiveness and efficiency of MLCSHE on a complex Autonomous Vehicle (AV) case study.
arXiv Detail & Related papers (2023-01-31T17:50:52Z)
- The #DNN-Verification Problem: Counting Unsafe Inputs for Deep Neural Networks [94.63547069706459]
The #DNN-Verification problem involves counting the number of input configurations of a DNN that result in a violation of a safety property.
We propose a novel approach that returns the exact count of violations.
We present experimental results on a set of safety-critical benchmarks.
arXiv Detail & Related papers (2023-01-17T18:32:01Z)
- Recursively Feasible Probabilistic Safe Online Learning with Control Barrier Functions [60.26921219698514]
We introduce a model-uncertainty-aware reformulation of CBF-based safety-critical controllers.
We then present the pointwise feasibility conditions of the resulting safety controller.
We use these conditions to devise an event-triggered online data collection strategy.
arXiv Detail & Related papers (2022-08-23T05:02:09Z)
- The Role of Explainability in Assuring Safety of Machine Learning in Healthcare [9.462125772941347]
This paper identifies ways in which explainable AI methods can contribute to safety assurance of ML-based systems.
The results are also represented in a safety argument to show where, and in what way, explainable AI methods can contribute to a safety case.
arXiv Detail & Related papers (2021-09-01T09:32:14Z)
- Inspect, Understand, Overcome: A Survey of Practical Methods for AI Safety [54.478842696269304]
The use of deep neural networks (DNNs) in safety-critical applications is challenging due to numerous model-inherent shortcomings.
In recent years, a zoo of state-of-the-art techniques aiming to address these safety concerns has emerged.
Our paper addresses both machine learning experts and safety engineers.
arXiv Detail & Related papers (2021-04-29T09:54:54Z)
- Robust Machine Learning Systems: Challenges, Current Trends, Perspectives, and the Road Ahead [24.60052335548398]
Machine Learning (ML) techniques have been rapidly adopted by smart Cyber-Physical Systems (CPS) and the Internet-of-Things (IoT).
These systems are vulnerable to various security and reliability threats, at both the hardware and software levels, that compromise their accuracy.
This paper summarizes the prominent vulnerabilities of modern ML systems and highlights successful defense and mitigation techniques against them.
arXiv Detail & Related papers (2021-01-04T20:06:56Z)
- Dos and Don'ts of Machine Learning in Computer Security [74.1816306998445]
Despite great potential, machine learning in security is prone to subtle pitfalls that undermine its performance.
We identify common pitfalls in the design, implementation, and evaluation of learning-based security systems.
We propose actionable recommendations to support researchers in avoiding or mitigating the pitfalls where possible.
arXiv Detail & Related papers (2020-10-19T13:09:31Z)