Welcome Your New AI Teammate: On Safety Analysis by Leashing Large Language Models
- URL: http://arxiv.org/abs/2403.09565v1
- Date: Thu, 14 Mar 2024 16:56:52 GMT
- Title: Welcome Your New AI Teammate: On Safety Analysis by Leashing Large Language Models
- Authors: Ali Nouri, Beatriz Cabrero-Daniel, Fredrik Törner, Håkan Sivencrona, Christian Berger
- Abstract summary: "Hazard Analysis & Risk Assessment" (HARA) is an essential step to start the safety requirements specification.
We propose a framework to support a higher degree of automation of HARA with Large Language Models (LLMs).
- Score: 0.6699222582814232
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: DevOps is a necessity in many industries, including the development of Autonomous Vehicles. In those settings, there are iterative activities that reduce the speed of SafetyOps cycles. One of these activities is "Hazard Analysis & Risk Assessment" (HARA), which is an essential step to start the safety requirements specification. As a potential approach to increase the speed of this step in SafetyOps, we have delved into the capabilities of Large Language Models (LLMs). Our objective is to systematically assess their potential for application in the field of safety engineering. To that end, we propose a framework to support a higher degree of automation of HARA with LLMs. Despite our endeavors to automate as much of the process as possible, expert review remains crucial to ensure the validity and correctness of the analysis results, with necessary modifications made accordingly.
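The abstract describes a framework in which an LLM drafts parts of the HARA and a human expert validates the result. The paper itself does not publish code, so the following is only a minimal illustrative sketch of that draft-then-review loop; the `HazardEntry` fields, the `query_llm` stub (which returns a canned answer instead of calling a real model API), and all function names are assumptions for demonstration, not the authors' implementation.

```python
from dataclasses import dataclass

@dataclass
class HazardEntry:
    """One simplified row of a HARA worksheet. A real HARA per ISO 26262
    would additionally assign severity, exposure, and controllability."""
    function: str
    malfunction: str
    driving_scenario: str
    hazard: str
    reviewed_by_expert: bool = False

def query_llm(prompt: str) -> str:
    """Placeholder for an LLM call; a real pipeline would invoke a model API.

    Returns a canned answer here so the sketch is self-contained."""
    return "Unintended braking at high speed on a highway"

def draft_hazard(function: str, malfunction: str, scenario: str) -> HazardEntry:
    """Automated step: prompt the LLM to draft a hazardous-event description."""
    prompt = (
        f"Vehicle function: {function}\n"
        f"Malfunction: {malfunction}\n"
        f"Driving scenario: {scenario}\n"
        "Describe the resulting hazardous event in one sentence."
    )
    return HazardEntry(function, malfunction, scenario, query_llm(prompt))

def expert_review(entry: HazardEntry, approved: bool) -> HazardEntry:
    """Manual step: a human safety expert validates (or rejects) the draft,
    mirroring the paper's point that expert review remains crucial."""
    entry.reviewed_by_expert = approved
    return entry

entry = draft_hazard("Automatic Emergency Braking",
                     "activates without cause",
                     "highway driving")
entry = expert_review(entry, approved=True)
print(entry.hazard, entry.reviewed_by_expert)
```

The key design point is that the LLM output is never final: every drafted entry carries an explicit review flag that only a human can set.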
Related papers
- AI for DevSecOps: A Landscape and Future Opportunities [6.513361705307775]
We analyzed 99 research papers spanning from 2017 to 2023.
We identified 12 tasks associated with the DevOps process and reviewed existing AI-driven security approaches.
We discovered 15 challenges encountered by existing AI-driven security approaches.
arXiv Detail & Related papers (2024-04-07T07:24:58Z)
- Engineering Safety Requirements for Autonomous Driving with Large Language Models [0.6699222582814232]
Large Language Models (LLMs) can play a key role in automatically refining and decomposing requirements after each update.
This study proposes a prototype of a pipeline of prompts and LLMs that receives an item definition and outputs solutions in the form of safety requirements.
arXiv Detail & Related papers (2024-03-24T20:40:51Z)
- Highlighting the Safety Concerns of Deploying LLMs/VLMs in Robotics [54.57914943017522]
We highlight the critical issues of robustness and safety associated with integrating large language models (LLMs) and vision-language models (VLMs) into robotics applications.
arXiv Detail & Related papers (2024-02-15T22:01:45Z)
- Safeguarded Progress in Reinforcement Learning: Safe Bayesian Exploration for Control Policy Synthesis [63.532413807686524]
This paper addresses the problem of maintaining safety during training in Reinforcement Learning (RL)
We propose a new architecture that handles the trade-off between efficient progress and safety during exploration.
arXiv Detail & Related papers (2023-12-18T16:09:43Z)
- Empowering Autonomous Driving with Large Language Models: A Safety Perspective [82.90376711290808]
This paper explores the integration of Large Language Models (LLMs) into Autonomous Driving systems.
LLMs are intelligent decision-makers in behavioral planning, augmented with a safety verifier shield for contextual safety learning.
We present two key studies in a simulated environment: an adaptive LLM-conditioned Model Predictive Control (MPC) and an LLM-enabled interactive behavior planning scheme with a state machine.
arXiv Detail & Related papers (2023-11-28T03:13:09Z)
- Safety-Gymnasium: A Unified Safe Reinforcement Learning Benchmark [13.082034905010286]
We present an environment suite called Safety-Gymnasium, which encompasses safety-critical tasks in both single and multi-agent scenarios.
We offer a library of algorithms named Safe Policy Optimization (SafePO), comprising 16 state-of-the-art SafeRL algorithms.
arXiv Detail & Related papers (2023-10-19T08:19:28Z)
- Safety Assessment of Chinese Large Language Models [51.83369778259149]
Large language models (LLMs) may generate insulting and discriminatory content, reflect incorrect social values, and may be used for malicious purposes.
To promote the deployment of safe, responsible, and ethical AI, we release SafetyPrompts including 100k augmented prompts and responses by LLMs.
arXiv Detail & Related papers (2023-04-20T16:27:35Z)
- Towards Safer Generative Language Models: A Survey on Safety Risks, Evaluations, and Improvements [76.80453043969209]
This survey presents a framework for safety research pertaining to large models.
We begin by introducing safety issues of wide concern, then delve into safety evaluation methods for large models.
We explore the strategies for enhancing large model safety from training to deployment.
arXiv Detail & Related papers (2023-02-18T09:32:55Z)
- Evaluating Model-free Reinforcement Learning toward Safety-critical Tasks [70.76757529955577]
This paper revisits prior work in this scope from the perspective of state-wise safe RL.
We propose Unrolling Safety Layer (USL), a joint method that combines safety optimization and safety projection.
To facilitate further research in this area, we reproduce related algorithms in a unified pipeline and incorporate them into SafeRL-Kit.
arXiv Detail & Related papers (2022-12-12T06:30:17Z)
- Sustainability Through Cognition Aware Safety Systems -- Next Level Human-Machine-Interaction [1.847374743273972]
Industrial Safety deals with the physical integrity of humans, machines and the environment when they interact during production scenarios.
The concept of a Cognition Aware Safety System (CASS) is to integrate AI based reasoning about human load, stress, and attention with AI based selection of actions to avoid the triggering of safety stops.
arXiv Detail & Related papers (2021-10-13T19:36:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.