Sustainability Through Cognition Aware Safety Systems -- Next Level
Human-Machine-Interaction
- URL: http://arxiv.org/abs/2110.07003v1
- Date: Wed, 13 Oct 2021 19:36:06 GMT
- Title: Sustainability Through Cognition Aware Safety Systems -- Next Level
Human-Machine-Interaction
- Authors: Juergen Mangler, Konrad Diwol, Dieter Etz, Stefanie Rinderle-Ma, Alois
Ferscha, Gerald Reiner, Wolfgang Kastner, Sebastien Bougain, Christoph
Pollak, Michael Haslgrübler
- Abstract summary: Industrial Safety deals with the physical integrity of humans, machines and the environment when they interact during production scenarios.
The concept of a Cognition Aware Safety System (CASS) is to integrate AI-based reasoning about human load, stress, and attention with AI-based selection of actions to avoid the triggering of safety stops.
- Score: 1.847374743273972
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Industrial Safety deals with the physical integrity of humans, machines and
the environment when they interact during production scenarios. Industrial
Safety is subject to a rigorous certification process that leads to inflexible
settings, in which all changes are forbidden. With the progressing introduction
of smart robotics and smart machinery to the factory floor, combined with an
increasing shortage of skilled workers, it becomes imperative that safety
scenarios incorporate a flexible handling of the boundary between humans,
machines and the environment. In order to increase the well-being of workers,
reduce accidents, and compensate for different skill sets, the configuration of
machines and the factory floor should be dynamically adapted, while still
enforcing functional safety requirements. The contribution of this paper is as
follows: (1) We present a set of three scenarios, and discuss how industrial
safety mechanisms could be augmented through dynamic changes to the work
environment in order to decrease potential accidents, and thus increase
productivity. (2) We introduce the concept of a Cognition Aware Safety System
(CASS) and its architecture. The idea behind CASS is to integrate AI-based
reasoning about human load, stress, and attention with AI-based selection of
actions to avoid the triggering of safety stops. (3) Finally, we describe the
required performance measurement dimensions for a quantitative performance
measurement model to enable a comprehensive (triple bottom line) impact
assessment of CASS. Additionally, we introduce a detailed guideline for expert
interviews to explore the feasibility of the approach for the given scenarios.
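To make the CASS architecture more tangible, the following minimal sketch shows one way such a decision loop could be organized: AI-derived estimates of the worker's load, stress, and attention are fused into a risk score, and a mitigating adaptation of the machine or work environment is selected before the certified safety stop has to trigger. All names, thresholds, and the fusion heuristic are illustrative assumptions; the paper defines CASS only at the conceptual level.
```python
# Minimal, illustrative sketch of a CASS-style decision loop.
# All class names, thresholds, and the risk heuristic are hypothetical;
# the paper specifies CASS only at the architectural/conceptual level.
from dataclasses import dataclass


@dataclass
class OperatorState:
    load: float       # estimated cognitive load, normalized to 0..1
    stress: float     # estimated stress level, normalized to 0..1
    attention: float  # estimated attention on the task, normalized to 0..1


def assess_risk(state: OperatorState) -> float:
    """Fuse the three estimates into a single risk score (toy heuristic)."""
    return max(state.load, state.stress, 1.0 - state.attention)


def select_action(risk: float) -> str:
    """Choose a mitigating adaptation before a hard safety stop is needed."""
    if risk < 0.4:
        return "continue"          # normal operation
    if risk < 0.7:
        return "slow_robot"        # reduce speed / increase separation
    if risk < 0.9:
        return "pause_and_notify"  # hold the process, prompt the worker
    return "safety_stop"           # fall back to the certified stop function


if __name__ == "__main__":
    state = OperatorState(load=0.8, stress=0.5, attention=0.6)
    print(select_action(assess_risk(state)))  # -> pause_and_notify
```
Note that the certified safety functions themselves stay untouched in this sketch; the dynamic adaptations are only ordered ahead of them, in line with the paper's requirement that functional safety remains enforced.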
Related papers
- A Verification Methodology for Safety Assurance of Robotic Autonomous Systems [0.44241702149260353]
This paper presents a verification workflow for the safety assurance of an autonomous agricultural robot.
It covers the entire development life-cycle, from concept study and design to runtime verification.
Results show that the methodology can be effectively used to verify safety-critical properties and facilitate the early identification of design issues.
arXiv Detail & Related papers (2025-06-24T13:39:51Z) - Probabilistic modelling and safety assurance of an agriculture robot providing light-treatment [0.0]
Continued adoption of agricultural robots postulates the farmer's trust in the reliability, robustness and safety of the new technology.
This paper considers a probabilistic modelling and risk analysis framework for use in the early development phases.
arXiv Detail & Related papers (2025-06-24T13:39:32Z) - Towards provable probabilistic safety for scalable embodied AI systems [79.31011047593492]
Embodied AI systems are increasingly prevalent across various applications.
Ensuring their safety in complex operating environments remains a major challenge.
This Perspective offers a pathway toward safer, large-scale adoption of embodied AI systems in safety-critical applications.
arXiv Detail & Related papers (2025-06-05T15:46:25Z) - SafeAgent: Safeguarding LLM Agents via an Automated Risk Simulator [77.86600052899156]
Large Language Model (LLM)-based agents are increasingly deployed in real-world applications.
We propose AutoSafe, the first framework that systematically enhances agent safety through fully automated synthetic data generation.
We show that AutoSafe boosts safety scores by 45% on average and achieves a 28.91% improvement on real-world tasks.
arXiv Detail & Related papers (2025-05-23T10:56:06Z) - Engineering Risk-Aware, Security-by-Design Frameworks for Assurance of Large-Scale Autonomous AI Models [0.0]
This paper presents an enterprise-level, risk-aware, security-by-design approach for large-scale autonomous AI systems.
We detail a unified pipeline that delivers provable guarantees of model behavior under adversarial and operational stress.
Case studies in national security, open-source model governance, and industrial automation demonstrate measurable reductions in vulnerability and compliance overhead.
arXiv Detail & Related papers (2025-05-09T20:14:53Z) - Concept Enhancement Engineering: A Lightweight and Efficient Robust Defense Against Jailbreak Attacks in Embodied AI [19.094809384824064]
Embodied Intelligence (EI) systems integrated with large language models (LLMs) face significant security risks.
Traditional defense strategies, such as input filtering and output monitoring, often introduce high computational overhead.
We propose Concept Enhancement Engineering (CEE) to enhance the safety of embodied LLMs by dynamically steering their internal activations.
arXiv Detail & Related papers (2025-04-15T03:50:04Z) - Safe Explicable Policy Search [3.3869539907606603]
We present Safe Explicable Policy Search (SEPS), which aims to provide a learning approach to explicable behavior generation while minimizing the safety risk.
We formulate SEPS as a constrained optimization problem where the agent aims to maximize an explicability score subject to constraints on safety.
We evaluate SEPS in safety-gym environments and with a physical robot experiment to show that it can learn explicable behaviors that adhere to the agent's safety requirements and are efficient.
arXiv Detail & Related papers (2025-03-10T20:52:41Z) - SafeVLA: Towards Safety Alignment of Vision-Language-Action Model via Safe Reinforcement Learning [10.844235123282056]
We propose SafeVLA, a novel algorithm designed to integrate safety into vision-language-action models (VLAs).
SafeVLA balances safety and task performance by employing large-scale constrained learning within simulated environments.
We demonstrate that SafeVLA outperforms the current state-of-the-art method in both safety and task performance.
arXiv Detail & Related papers (2025-03-05T13:16:55Z) - Don't Let Your Robot be Harmful: Responsible Robotic Manipulation [57.70648477564976]
Unthinking execution of human instructions in robotic manipulation can lead to severe safety risks.
We present Safety-as-policy, which includes (i) a world model to automatically generate scenarios containing safety risks and conduct virtual interactions, and (ii) a mental model to infer consequences with reflections.
We show that Safety-as-policy can avoid risks and efficiently complete tasks in both synthetic dataset and real-world experiments.
arXiv Detail & Related papers (2024-11-27T12:27:50Z) - From Silos to Systems: Process-Oriented Hazard Analysis for AI Systems [2.226040060318401]
We translate System Theoretic Process Analysis (STPA) for analyzing AI operation and development processes.
We focus on systems that rely on machine learning algorithms and conducted STPA on three case studies.
We find that key concepts and steps of conducting an STPA readily apply, albeit with a few adaptations tailored for AI systems.
arXiv Detail & Related papers (2024-10-29T20:43:18Z) - SafetyAnalyst: Interpretable, transparent, and steerable safety moderation for AI behavior [56.10557932893919]
We present SafetyAnalyst, a novel AI safety moderation framework.
Given an AI behavior, SafetyAnalyst uses chain-of-thought reasoning to analyze its potential consequences.
It aggregates all harmful and beneficial effects into a harmfulness score using fully interpretable weight parameters.
arXiv Detail & Related papers (2024-10-22T03:38:37Z) - SafeEmbodAI: a Safety Framework for Mobile Robots in Embodied AI Systems [5.055705635181593]
Embodied AI systems, including AI-powered robots that autonomously interact with the physical world, stand to be significantly advanced.
Improper safety management can lead to failures in complex environments and make the system vulnerable to malicious command injections.
We propose SafeEmbodAI, a safety framework for integrating mobile robots into embodied AI systems.
arXiv Detail & Related papers (2024-09-03T05:56:50Z) - EAIRiskBench: Towards Evaluating Physical Risk Awareness for Task Planning of Foundation Model-based Embodied AI Agents [47.69642609574771]
Embodied artificial intelligence (EAI) integrates advanced AI models into physical entities for real-world interaction.
Foundation models as the "brain" of EAI agents for high-level task planning have shown promising results.
However, the deployment of these agents in physical environments presents significant safety challenges.
This study introduces EAIRiskBench, a novel framework for automated physical risk assessment in EAI scenarios.
arXiv Detail & Related papers (2024-08-08T13:19:37Z) - Safetywashing: Do AI Safety Benchmarks Actually Measure Safety Progress? [59.96471873997733]
We propose an empirical foundation for developing more meaningful safety metrics and define AI safety in a machine learning research context.
We aim to provide a more rigorous framework for AI safety research, advancing the science of safety evaluations and clarifying the path towards measurable progress.
arXiv Detail & Related papers (2024-07-31T17:59:24Z) - Safety Control of Service Robots with LLMs and Embodied Knowledge Graphs [12.787160626087744]
We propose a novel integration of Large Language Models with Embodied Robotic Control Prompts (ERCPs) and Embodied Knowledge Graphs (EKGs).
ERCPs are designed as predefined instructions that ensure LLMs generate safe and precise responses.
EKGs provide a comprehensive knowledge base ensuring that the actions of the robot are continuously aligned with safety protocols.
arXiv Detail & Related papers (2024-05-28T05:50:25Z) - Welcome Your New AI Teammate: On Safety Analysis by Leashing Large Language Models [0.6699222582814232]
"Hazard Analysis & Risk Assessment" (HARA) is an essential step to start the safety requirements specification.
We propose a framework to support a higher degree of automation of HARA with Large Language Models (LLMs).
arXiv Detail & Related papers (2024-03-14T16:56:52Z) - Safeguarded Progress in Reinforcement Learning: Safe Bayesian
Exploration for Control Policy Synthesis [63.532413807686524]
This paper addresses the problem of maintaining safety during training in Reinforcement Learning (RL).
We propose a new architecture that handles the trade-off between efficient progress and safety during exploration.
arXiv Detail & Related papers (2023-12-18T16:09:43Z) - Safety Margins for Reinforcement Learning [53.10194953873209]
We show how to leverage proxy criticality metrics to generate safety margins.
We evaluate our approach on learned policies from APE-X and A3C within an Atari environment.
arXiv Detail & Related papers (2023-07-25T16:49:54Z) - Leveraging Traceability to Integrate Safety Analysis Artifacts into the
Software Development Process [51.42800587382228]
Safety assurance cases (SACs) can be challenging to maintain during system evolution.
We propose a solution that leverages software traceability to connect relevant system artifacts to safety analysis models.
We elicit design rationales for system changes to help safety stakeholders analyze the impact of system changes on safety.
arXiv Detail & Related papers (2023-07-14T16:03:27Z) - Towards Safer Generative Language Models: A Survey on Safety Risks,
Evaluations, and Improvements [76.80453043969209]
This survey presents a framework for safety research pertaining to large models.
We begin by introducing safety issues of wide concern, then delve into safety evaluation methods for large models.
We explore the strategies for enhancing large model safety from training to deployment.
arXiv Detail & Related papers (2023-02-18T09:32:55Z) - Cautious Adaptation For Reinforcement Learning in Safety-Critical
Settings [129.80279257258098]
Reinforcement learning (RL) in real-world safety-critical target settings like urban driving is hazardous.
We propose a "safety-critical adaptation" task setting: an agent first trains in non-safety-critical "source" environments.
We propose a solution approach, CARL, that builds on the intuition that prior experience in diverse environments equips an agent to estimate risk.
arXiv Detail & Related papers (2020-08-15T01:40:59Z)