Machine Learning for Reliability Engineering and Safety Applications:
Review of Current Status and Future Opportunities
- URL: http://arxiv.org/abs/2008.08221v1
- Date: Wed, 19 Aug 2020 02:08:56 GMT
- Title: Machine Learning for Reliability Engineering and Safety Applications:
Review of Current Status and Future Opportunities
- Authors: Zhaoyi Xu, Joseph Homer Saleh
- Abstract summary: Machine learning (ML) pervades an increasing number of academic disciplines and industries.
There is already a large but fragmented literature on ML for reliability and safety applications.
We argue that ML is capable of providing novel insights and opportunities to solve important challenges in reliability and safety applications.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine learning (ML) pervades an increasing number of academic disciplines
and industries. Its impact is profound, and several fields, such as autonomy and
computer vision, have been fundamentally altered by it; reliability engineering
and safety will undoubtedly follow suit. There is
already a large but fragmented literature on ML for reliability and safety
applications, and it can be overwhelming to navigate and integrate into a
coherent whole. In this work, we facilitate this task by providing a synthesis
of, and a roadmap to, this ever-expanding analytical landscape, highlighting
its major landmarks and pathways. We first provide an overview of the different
ML categories and sub-categories or tasks, and we note several of the
corresponding models and algorithms. We then look back and review the use of ML
in reliability and safety applications. We examine several publications in each
category/sub-category, and we include a short discussion on the use of Deep
Learning to highlight its growing popularity and distinctive advantages.
Finally, we look ahead and outline several promising future opportunities for
leveraging ML in service of advancing reliability and safety considerations.
Overall, we argue that ML is capable of providing novel insights and
opportunities to solve important challenges in reliability and safety
applications. It is also capable of teasing out more accurate insights from
accident datasets than traditional analysis tools can, which in turn can lead to
better-informed decision-making and more effective accident prevention.
Related papers
- Multimodal Situational Safety [73.63981779844916]
We present the first evaluation and analysis of a novel safety challenge termed Multimodal Situational Safety.
For an MLLM to respond safely, whether through language or action, it often needs to assess the safety implications of a language query within its corresponding visual context.
We develop the Multimodal Situational Safety benchmark (MSSBench) to assess the situational safety performance of current MLLMs.
arXiv Detail & Related papers (2024-10-08T16:16:07Z)
- Unbridled Icarus: A Survey of the Potential Perils of Image Inputs in Multimodal Large Language Model Security [5.077261736366414]
The pursuit of reliable AI systems like powerful MLLMs has emerged as a pivotal area of contemporary research.
In this paper, we endeavor to demonstrate the multifaceted risks associated with the incorporation of image modalities into MLLMs.
arXiv Detail & Related papers (2024-04-08T07:54:18Z)
- Safety of Multimodal Large Language Models on Images and Texts [33.97489213223888]
In this paper, we systematically survey current efforts on the evaluation, attack, and defense of MLLMs' safety on images and text.
We review the evaluation datasets and metrics for measuring the safety of MLLMs.
Next, we comprehensively present attack and defense techniques related to MLLMs' safety.
arXiv Detail & Related papers (2024-02-01T05:57:10Z)
- ChatSOS: LLM-based knowledge Q&A system for safety engineering [0.0]
This study introduces an LLM-based Q&A system for safety engineering, enhancing the comprehension and response accuracy of the model.
We employ prompt engineering to incorporate external knowledge databases, thus enriching the LLM with up-to-date and reliable information.
Our findings indicate that the integration of external knowledge significantly augments the capabilities of LLM for in-depth problem analysis and autonomous task assignment.
arXiv Detail & Related papers (2023-12-14T03:25:23Z)
- Empowering Autonomous Driving with Large Language Models: A Safety Perspective [82.90376711290808]
This paper explores the integration of Large Language Models (LLMs) into Autonomous Driving systems.
LLMs are intelligent decision-makers in behavioral planning, augmented with a safety verifier shield for contextual safety learning.
We present two key studies in a simulated environment: an adaptive LLM-conditioned Model Predictive Control (MPC) and an LLM-enabled interactive behavior planning scheme with a state machine.
arXiv Detail & Related papers (2023-11-28T03:13:09Z)
- The Dark Side: Security Concerns in Machine Learning for EDA [29.20366952640125]
Many unprecedented efficient EDA methods have been enabled by machine learning (ML) techniques.
While ML demonstrates great potential in circuit design, the dark side of its security problems is seldom discussed.
This paper gives a comprehensive and impartial summary of all security concerns we have observed in ML for EDA.
arXiv Detail & Related papers (2022-03-20T16:44:25Z)
- Practical Machine Learning Safety: A Survey and Primer [81.73857913779534]
Open-world deployment of Machine Learning algorithms in safety-critical applications such as autonomous vehicles needs to address a variety of ML vulnerabilities.
New models and training techniques are needed to reduce generalization error, achieve domain adaptation, and detect outlier examples and adversarial attacks.
Our organization maps state-of-the-art ML techniques to safety strategies in order to enhance the dependability of the ML algorithm from different aspects.
arXiv Detail & Related papers (2021-06-09T05:56:42Z)
- Inspect, Understand, Overcome: A Survey of Practical Methods for AI Safety [54.478842696269304]
The use of deep neural networks (DNNs) in safety-critical applications is challenging due to numerous model-inherent shortcomings.
In recent years, a zoo of state-of-the-art techniques aiming to address these safety concerns has emerged.
Our paper addresses both machine learning experts and safety engineers.
arXiv Detail & Related papers (2021-04-29T09:54:54Z)
- Understanding the Usability Challenges of Machine Learning In High-Stakes Decision Making [67.72855777115772]
Machine learning (ML) is being applied to a diverse and ever-growing set of domains.
In many cases, domain experts -- who often have no expertise in ML or data science -- are asked to use ML predictions to make high-stakes decisions.
We investigate the ML usability challenges present in the domain of child welfare screening through a series of collaborations with child welfare screeners.
arXiv Detail & Related papers (2021-03-02T22:50:45Z)
- Dos and Don'ts of Machine Learning in Computer Security [74.1816306998445]
Despite great potential, machine learning in security is prone to subtle pitfalls that undermine its performance.
We identify common pitfalls in the design, implementation, and evaluation of learning-based security systems.
We propose actionable recommendations to support researchers in avoiding or mitigating the pitfalls where possible.
arXiv Detail & Related papers (2020-10-19T13:09:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.