Is the Rush to Machine Learning Jeopardizing Safety? Results of a Survey
- URL: http://arxiv.org/abs/2111.14324v1
- Date: Mon, 29 Nov 2021 04:53:39 GMT
- Title: Is the Rush to Machine Learning Jeopardizing Safety? Results of a Survey
- Authors: Mehrnoosh Askarpour, Alan Wassyng, Mark Lawford, Richard Paige, Zinovy Diskin
- Abstract summary: Machine learning (ML) is finding its way into safety-critical systems (SCS).
Current safety standards and practice were not designed to cope with ML techniques.
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Machine learning (ML) is finding its way into safety-critical systems (SCS).
Current safety standards and practice were not designed to cope with ML
techniques, and it is difficult to be confident that SCSs that contain ML
components are safe. Our hypothesis was that there has been a rush to deploy ML
techniques at the expense of a thorough examination as to whether the use of ML
techniques introduces safety problems that we are not yet adequately able to
detect and mitigate. We thus conducted a targeted literature survey to
determine the research effort that has been expended in applying ML to SCS
compared with that spent on evaluating the safety of SCSs that deploy ML
components. This paper presents the (surprising) results of the survey.
Related papers
- Nothing in Excess: Mitigating the Exaggerated Safety for LLMs via Safety-Conscious Activation Steering [56.92068213969036]
Safety alignment is indispensable for large language models (LLMs) to defend against threats from malicious instructions.
Recent research reveals that safety-aligned LLMs are prone to rejecting benign queries due to exaggerated safety.
We propose a Safety-Conscious Activation Steering (SCANS) method to mitigate these exaggerated safety concerns.
arXiv Detail & Related papers (2024-08-21T10:01:34Z)
- Eyes Closed, Safety On: Protecting Multimodal LLMs via Image-to-Text Transformation [98.02846901473697]
We propose ECSO (Eyes Closed, Safety On), a training-free protecting approach that exploits the inherent safety awareness of MLLMs.
ECSO generates safer responses via adaptively transforming unsafe images into texts to activate the intrinsic safety mechanism of pre-aligned LLMs.
arXiv Detail & Related papers (2024-03-14T17:03:04Z)
- Vulnerability of Machine Learning Approaches Applied in IoT-based Smart Grid: A Review [51.31851488650698]
Machine learning (ML) is increasingly used in internet-of-things (IoT)-based smart grids.
Adversarial distortion injected into the power signal can greatly affect the system's normal control and operation.
It is imperative to conduct vulnerability assessments of MLsgAPPs applied in the context of safety-critical power systems.
arXiv Detail & Related papers (2023-08-30T03:29:26Z)
- Unsolved Problems in ML Safety [45.82027272958549]
We present four problems ready for research, namely withstanding hazards, identifying hazards, steering ML systems, and reducing risks in how ML systems are deployed and handled.
We clarify each problem's motivation and provide concrete research directions.
arXiv Detail & Related papers (2021-09-28T17:59:36Z)
- The Role of Explainability in Assuring Safety of Machine Learning in Healthcare [9.462125772941347]
This paper identifies ways in which explainable AI methods can contribute to safety assurance of ML-based systems.
The results are also represented in a safety argument to show where, and in what way, explainable AI methods can contribute to a safety case.
arXiv Detail & Related papers (2021-09-01T09:32:14Z)
- Practical Machine Learning Safety: A Survey and Primer [81.73857913779534]
Open-world deployment of Machine Learning algorithms in safety-critical applications such as autonomous vehicles needs to address a variety of ML vulnerabilities.
We review new models and training techniques that reduce generalization error, achieve domain adaptation, and detect outlier examples and adversarial attacks.
Our organization maps state-of-the-art ML techniques to safety strategies in order to enhance the dependability of ML algorithms from different aspects.
arXiv Detail & Related papers (2021-06-09T05:56:42Z)
- White Paper Machine Learning in Certified Systems [70.24215483154184]
The ML Certification 3 Workgroup (WG) was set up by the Institut de Recherche Technologique Saint Exupéry de Toulouse (IRT) as part of the DEEL Project.
arXiv Detail & Related papers (2021-03-18T21:14:30Z)
- Robust Machine Learning Systems: Challenges, Current Trends, Perspectives, and the Road Ahead [24.60052335548398]
Machine Learning (ML) techniques have been rapidly adopted by smart Cyber-Physical Systems (CPS) and the Internet-of-Things (IoT).
These systems are vulnerable to various security and reliability threats, at both the hardware and software levels, that compromise their accuracy.
This paper summarizes the prominent vulnerabilities of modern ML systems and highlights successful defenses and mitigation techniques against these vulnerabilities.
arXiv Detail & Related papers (2021-01-04T20:06:56Z)
- Safety design concepts for statistical machine learning components toward accordance with functional safety standards [0.38073142980732994]
In recent years, crucial incidents and accidents have been reported due to misjudgments by statistical machine learning.
In this paper, we organize five kinds of technical safety concepts (TSCs) for ML components to bring them into accordance with functional safety standards.
arXiv Detail & Related papers (2020-08-04T01:01:00Z)