A Safety Framework for Critical Systems Utilising Deep Neural Networks
- URL: http://arxiv.org/abs/2003.05311v3
- Date: Sat, 6 Jun 2020 10:49:23 GMT
- Title: A Safety Framework for Critical Systems Utilising Deep Neural Networks
- Authors: Xingyu Zhao, Alec Banks, James Sharp, Valentin Robu, David Flynn,
Michael Fisher, Xiaowei Huang
- Abstract summary: This paper presents a principled novel safety argument framework for critical systems that utilise deep neural networks.
The approach allows various forms of predictions, e.g., future reliability of passing some demands, or confidence on a required reliability level.
It is supported by a Bayesian analysis using operational data and the recent verification and validation techniques for deep learning.
- Score: 13.763070043077633
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Increasingly sophisticated mathematical modelling processes from Machine
Learning are being used to analyse complex data. However, the performance and
explainability of these models within practical critical systems requires a
rigorous and continuous verification of their safe utilisation. Working towards
addressing this challenge, this paper presents a principled novel safety
argument framework for critical systems that utilise deep neural networks. The
approach allows various forms of predictions, e.g., future reliability of
passing some demands, or confidence on a required reliability level. It is
supported by a Bayesian analysis using operational data and the recent
verification and validation techniques for deep learning. The prediction is
conservative -- it starts with partial prior knowledge obtained from lifecycle
activities and then determines the worst-case prediction. Open challenges are
also identified.
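As an illustration of the conservative, worst-case character of the prediction, the sketch below assumes a simplified setting: lifecycle evidence yields only a partial prior constraint Pr(pfd <= p0) >= theta on the probability of failure on demand (pfd), n failure-free demands are then observed in operation, and the posterior confidence that pfd <= p_req is minimised over all two-point priors consistent with the constraint. This is a hypothetical numerical approximation of the worst-case idea, not the paper's actual derivation; all parameter names and numbers are illustrative.
```python
# Hypothetical numerical sketch of a conservative ("worst-case") Bayesian
# reliability prediction under partial prior knowledge. It is NOT the paper's
# derivation: it approximates the worst case by a grid search over two-point
# priors, and all names/numbers below are illustrative assumptions.
import numpy as np


def conservative_confidence(p0: float, theta: float, p_req: float, n: int,
                            grid_size: int = 2000) -> float:
    """Worst-case posterior Pr(pfd <= p_req | n failure-free demands),
    minimised over two-point priors satisfying Pr(pfd <= p0) >= theta."""
    # Candidate pfd values (log-spaced grid, plus the end points 0 and 1).
    grid = np.concatenate(([0.0], np.geomspace(1e-9, 1.0, grid_size)))
    lik = (1.0 - grid) ** n              # likelihood of n failure-free demands
    worst = 1.0
    # Prior mass theta at a point a <= p0 (this enforces the partial constraint),
    # and mass (1 - theta) at a second point b, swept over the whole grid.
    for a, la in zip(grid[grid <= p0], lik[grid <= p0]):
        num = theta * la * (a <= p_req) + (1.0 - theta) * lik * (grid <= p_req)
        den = theta * la + (1.0 - theta) * lik
        worst = min(worst, float(np.min(num / den)))
    return worst


if __name__ == "__main__":
    # Illustrative numbers: prior evidence gives 90% confidence that pfd <= 1e-4;
    # after 1,000 failure-free demands, how confident can we conservatively be
    # that pfd <= 1e-3?
    print(conservative_confidence(p0=1e-4, theta=0.90, p_req=1e-3, n=1_000))
```
In this toy setting the grid search typically places the worst-case prior mass at a = p0 and just above p_req, i.e. where the operational evidence supports the reliability claim the least, which is what makes the resulting prediction conservative.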
Related papers
- Towards a Framework for Deep Learning Certification in Safety-Critical Applications Using Inherently Safe Design and Run-Time Error Detection [0.0]
We consider real-world problems arising in aviation and other safety-critical areas, and investigate their requirements for a certified model.
We establish a new framework towards deep learning certification based on (i) inherently safe design, and (ii) run-time error detection.
arXiv Detail & Related papers (2024-03-12T11:38:45Z)
- NeuralSentinel: Safeguarding Neural Network Reliability and Trustworthiness [0.0]
We present NeuralSentinel (NS), a tool able to validate the reliability and trustworthiness of AI models.
NS helps non-expert staff increase their confidence in this new system by understanding the model's decisions.
This tool was deployed and used in a Hackathon event to evaluate the reliability of a skin cancer image detector.
arXiv Detail & Related papers (2024-02-12T09:24:34Z)
- Surrogate Neural Networks Local Stability for Aircraft Predictive Maintenance [1.6703148532130556]
Surrogate Neural Networks are routinely used in industry as substitutes for computationally demanding engineering simulations.
Due to their performance and time-efficiency, these surrogate models are now being developed for use in safety-critical applications.
arXiv Detail & Related papers (2024-01-11T21:04:28Z)
- Building Safe and Reliable AI systems for Safety Critical Tasks with Vision-Language Processing [1.2183405753834557]
Current AI algorithms are unable to identify common causes of failure.
Additional techniques are required to quantify the quality of predictions.
This thesis will focus on vision-language data processing for tasks like classification, image captioning, and vision question answering.
arXiv Detail & Related papers (2023-08-06T18:05:59Z)
- Adversarial Robustness of Deep Neural Networks: A Survey from a Formal Verification Perspective [7.821877331499578]
Adversarial robustness, which concerns the reliability of a neural network when dealing with maliciously manipulated inputs, is one of the hottest topics in security and machine learning.
We survey existing literature in adversarial robustness verification for neural networks and collect 39 diversified research works across machine learning, security, and software engineering domains.
We provide a taxonomy from a formal verification perspective for a comprehensive understanding of this topic.
arXiv Detail & Related papers (2022-06-24T11:53:12Z)
- The Unreasonable Effectiveness of Deep Evidential Regression [72.30888739450343]
A new approach with uncertainty-aware regression-based neural networks (NNs) shows promise over traditional deterministic methods and typical Bayesian NNs.
We detail the theoretical shortcomings and analyze the performance on synthetic and real-world data sets, showing that Deep Evidential Regression is a heuristic rather than an exact uncertainty quantification.
arXiv Detail & Related papers (2022-05-20T10:10:32Z)
- Leveraging the structure of dynamical systems for data-driven modeling [111.45324708884813]
We consider the impact of the training set and its structure on the quality of the long-term prediction.
We show how an informed design of the training set, based on invariants of the system and the structure of the underlying attractor, significantly improves the resulting models.
arXiv Detail & Related papers (2021-12-15T20:09:20Z)
- Multi Agent System for Machine Learning Under Uncertainty in Cyber Physical Manufacturing System [78.60415450507706]
Recent advancements in predictive machine learning have led to its application in various use cases in manufacturing.
Most research has focused on maximising predictive accuracy without addressing the uncertainty associated with it.
In this paper, we determine the sources of uncertainty in machine learning and establish the success criteria of a machine learning system to function well under uncertainty.
arXiv Detail & Related papers (2021-07-28T10:28:05Z)
- Increasing the Confidence of Deep Neural Networks by Coverage Analysis [71.57324258813674]
This paper presents a lightweight monitoring architecture based on coverage paradigms to enhance the model against different unsafe inputs.
Experimental results show that the proposed approach is effective in detecting both powerful adversarial examples and out-of-distribution inputs.
arXiv Detail & Related papers (2021-01-28T16:38:26Z)
- Dos and Don'ts of Machine Learning in Computer Security [74.1816306998445]
Despite great potential, machine learning in security is prone to subtle pitfalls that undermine its performance.
We identify common pitfalls in the design, implementation, and evaluation of learning-based security systems.
We propose actionable recommendations to support researchers in avoiding or mitigating the pitfalls where possible.
arXiv Detail & Related papers (2020-10-19T13:09:31Z)
- Evaluating the Safety of Deep Reinforcement Learning Models using Semi-Formal Verification [81.32981236437395]
We present a semi-formal verification approach for decision-making tasks based on interval analysis.
Our method obtains results comparable to formal verifiers on standard benchmarks.
Our approach allows us to efficiently evaluate safety properties of decision-making models in practical applications.
arXiv Detail & Related papers (2020-10-19T11:18:06Z)
This list is automatically generated from the titles and abstracts of the papers on this site.