Sustainability Through Cognition Aware Safety Systems -- Next Level Human-Machine-Interaction
- URL: http://arxiv.org/abs/2110.07003v1
- Date: Wed, 13 Oct 2021 19:36:06 GMT
- Title: Sustainability Through Cognition Aware Safety Systems -- Next Level Human-Machine-Interaction
- Authors: Juergen Mangler, Konrad Diwol, Dieter Etz, Stefanie Rinderle-Ma, Alois
Ferscha, Gerald Reiner, Wolfgang Kastner, Sebastien Bougain, Christoph
Pollak, Michael Haslgrübler
- Abstract summary: Industrial Safety deals with the physical integrity of humans, machines and the environment when they interact during production scenarios.
The concept of a Cognition Aware Safety System (CASS) is to integrate AI-based reasoning about human load, stress, and attention with AI-based selection of actions that avoid triggering safety stops.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Industrial Safety deals with the physical integrity of humans, machines and
the environment when they interact during production scenarios. Industrial
Safety is subject to a rigorous certification process that leads to inflexible
settings, in which all changes are forbidden. With the progressing introduction
of smart robotics and smart machinery to the factory floor, combined with an
increasing shortage of skilled workers, it becomes imperative that safety
scenarios incorporate a flexible handling of the boundary between humans,
machines and the environment. In order to increase the well-being of workers,
reduce accidents, and compensate for different skill sets, the configuration of
machines and the factory floor should be dynamically adapted, while still
enforcing functional safety requirements. The contribution of this paper is as
follows: (1) We present a set of three scenarios, and discuss how industrial
safety mechanisms could be augmented through dynamic changes to the work
environment in order to decrease potential accidents, and thus increase
productivity. (2) We introduce the concept of a Cognition Aware Safety System
(CASS) and its architecture. The idea behind CASS is to integrate AI-based
reasoning about human load, stress, and attention with AI-based selection of
actions that avoid triggering safety stops. (3) Finally, we describe the
required performance measurement dimensions for a quantitative performance
measurement model that enables a comprehensive (triple bottom line) impact
assessment of CASS. Additionally, we introduce a detailed guideline for expert
interviews to explore the feasibility of the approach for the given scenarios.
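As a purely illustrative sketch of the CASS idea (not code from the paper), a minimal decision loop might fuse normalized operator load, stress, and attention signals into a risk score and pick a mitigating action before a hard safety stop is triggered; all function names, weights, and thresholds below are assumptions:

```python
# Hypothetical sketch of a Cognition Aware Safety System (CASS) decision loop.
# All names, weights, and thresholds are illustrative assumptions, not from the paper.

def cognitive_risk(load: float, stress: float, attention: float) -> float:
    """Combine normalized [0, 1] signals into a single risk score."""
    # Low attention raises risk, so attention enters inverted.
    return 0.4 * load + 0.4 * stress + 0.2 * (1.0 - attention)

def select_action(risk: float) -> str:
    """Pick a mitigating action instead of immediately triggering a safety stop."""
    if risk < 0.3:
        return "continue"            # normal operation
    if risk < 0.6:
        return "slow_machine"        # reduce machine speed near the worker
    if risk < 0.8:
        return "increase_distance"   # adapt layout / enlarge the safety zone
    return "safety_stop"             # last resort: conventional functional-safety stop

print(select_action(cognitive_risk(load=0.6, stress=0.5, attention=0.6)))  # → slow_machine
```

In a real CASS deployment the score would come from AI models over sensor data, and only actions pre-certified as functionally safe would be selectable; the graded responses here merely illustrate how dynamic adaptation could precede a stop.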
Related papers
- Safety Control of Service Robots with LLMs and Embodied Knowledge Graphs [12.787160626087744]
We propose a novel integration of Large Language Models with Embodied Robotic Control Prompts (ERCPs) and Embodied Knowledge Graphs (EKGs).
ERCPs are designed as predefined instructions that ensure LLMs generate safe and precise responses.
EKGs provide a comprehensive knowledge base ensuring that the actions of the robot are continuously aligned with safety protocols.
arXiv Detail & Related papers (2024-05-28T05:50:25Z)
- Welcome Your New AI Teammate: On Safety Analysis by Leashing Large Language Models [0.6699222582814232]
"Hazard Analysis & Risk Assessment" (HARA) is an essential step to start the safety requirements specification.
We propose a framework to support a higher degree of automation of HARA with Large Language Models (LLMs)
arXiv Detail & Related papers (2024-03-14T16:56:52Z)
- Safeguarded Progress in Reinforcement Learning: Safe Bayesian Exploration for Control Policy Synthesis [63.532413807686524]
This paper addresses the problem of maintaining safety during training in Reinforcement Learning (RL).
We propose a new architecture that handles the trade-off between efficient progress and safety during exploration.
arXiv Detail & Related papers (2023-12-18T16:09:43Z)
- ASSERT: Automated Safety Scenario Red Teaming for Evaluating the Robustness of Large Language Models [65.79770974145983]
ASSERT, Automated Safety Scenario Red Teaming, consists of three methods -- semantically aligned augmentation, target bootstrapping, and adversarial knowledge injection.
We partition our prompts into four safety domains for a fine-grained analysis of how the domain affects model performance.
We find statistically significant performance differences of up to 11% in absolute classification accuracy among semantically related scenarios and error rates of up to 19% absolute error in zero-shot adversarial settings.
arXiv Detail & Related papers (2023-10-14T17:10:28Z)
- Safety Margins for Reinforcement Learning [74.13100479426424]
We show how to leverage proxy criticality metrics to generate safety margins.
We evaluate our approach on learned policies from APE-X and A3C within an Atari environment.
arXiv Detail & Related papers (2023-07-25T16:49:54Z)
- Leveraging Traceability to Integrate Safety Analysis Artifacts into the Software Development Process [51.42800587382228]
Safety assurance cases (SACs) can be challenging to maintain during system evolution.
We propose a solution that leverages software traceability to connect relevant system artifacts to safety analysis models.
We elicit design rationales for system changes to help safety stakeholders analyze the impact of system changes on safety.
arXiv Detail & Related papers (2023-07-14T16:03:27Z)
- A Model Based Framework for Testing Safety and Security in Operational Technology Environments [0.46040036610482665]
We propose a model-based testing approach which we consider a promising way to analyze the safety and security behavior of a system under test.
The structure of the underlying framework is divided into four parts, according to the critical factors in testing of operational technology environments.
arXiv Detail & Related papers (2023-06-22T05:37:09Z)
- Towards Safer Generative Language Models: A Survey on Safety Risks, Evaluations, and Improvements [76.80453043969209]
This survey presents a framework for safety research pertaining to large models.
We begin by introducing safety issues of wide concern, then delve into safety evaluation methods for large models.
We explore the strategies for enhancing large model safety from training to deployment.
arXiv Detail & Related papers (2023-02-18T09:32:55Z)
- Assurance Cases as Foundation Stone for Auditing AI-enabled and Autonomous Systems: Workshop Results and Political Recommendations for Action from the ExamAI Project [2.741266294612776]
We investigate the way safety standards define safety measures to be implemented against software faults.
Functional safety standards use Safety Integrity Levels (SILs) to define which safety measures shall be implemented.
We propose the use of assurance cases to argue that the individually selected and applied measures are sufficient.
arXiv Detail & Related papers (2022-08-17T10:05:07Z)
- Modeling and mitigation of occupational safety risks in dynamic industrial environments [0.0]
This article proposes a method to enable continuous and quantitative assessment of safety risks in a data-driven manner.
A fully Bayesian approach is developed to calibrate this model from safety data in an online fashion.
The proposed model can be leveraged for automated decision making.
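As a hypothetical illustration of such online Bayesian calibration (not the article's actual model), a conjugate Beta-Bernoulli update can fold streaming safety observations into a posterior incident-probability estimate; the prior and data below are assumptions:

```python
# Hypothetical Beta-Bernoulli online update for an occupational incident probability.
# The prior and the observation stream are illustrative assumptions, not from the article.

class BetaBernoulli:
    def __init__(self, alpha: float = 1.0, beta: float = 1.0):
        self.alpha = alpha  # pseudo-count of observed incidents (uniform prior: 1)
        self.beta = beta    # pseudo-count of incident-free observations (uniform prior: 1)

    def update(self, incident: bool) -> None:
        """Fold one new safety observation into the posterior (conjugate update)."""
        if incident:
            self.alpha += 1
        else:
            self.beta += 1

    def mean(self) -> float:
        """Posterior mean estimate of the incident probability."""
        return self.alpha / (self.alpha + self.beta)

model = BetaBernoulli()
for obs in [False, False, True, False]:  # streaming safety observations
    model.update(obs)
print(round(model.mean(), 3))  # → 0.333
```

The posterior mean after each observation could feed a threshold-based mitigation rule, which is the sense in which such a model supports automated decision making.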
arXiv Detail & Related papers (2022-05-02T13:04:25Z)
- Cautious Adaptation For Reinforcement Learning in Safety-Critical Settings [129.80279257258098]
Reinforcement learning (RL) in real-world safety-critical target settings like urban driving is hazardous.
We propose a "safety-critical adaptation" task setting: an agent first trains in non-safety-critical "source" environments.
We propose a solution approach, CARL, that builds on the intuition that prior experience in diverse environments equips an agent to estimate risk.
arXiv Detail & Related papers (2020-08-15T01:40:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.