Towards Risk Modeling for Collaborative AI
- URL: http://arxiv.org/abs/2103.07460v1
- Date: Fri, 12 Mar 2021 18:53:06 GMT
- Title: Towards Risk Modeling for Collaborative AI
- Authors: Matteo Camilli, Michael Felderer, Andrea Giusti, Dominik T. Matt, Anna Perini, Barbara Russo, Angelo Susi
- Abstract summary: Collaborative AI systems aim to work together with humans in a shared space to achieve a common goal.
This setting entails potentially hazardous circumstances, as physical contact could harm human beings.
We introduce a risk modeling approach tailored to Collaborative AI systems.
- Score: 5.941104748966331
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Collaborative AI systems aim to work together with humans in a shared
space to achieve a common goal. This setting entails potentially hazardous
circumstances, as physical contact could harm human beings. Thus, building such
systems with strong assurances of compliance with requirements, domain-specific
standards, and regulations is of the greatest importance. The challenges associated
with achieving this goal become even more severe when such systems rely on
machine learning components rather than on top-down, rule-based AI. In this
paper, we introduce a risk modeling approach tailored to Collaborative AI
systems. The risk model includes goals, risk events, and domain-specific
indicators that potentially expose humans to hazards. The risk model is then
leveraged to drive assurance methods that, in turn, feed the risk model with
insights extracted from run-time evidence. Our envisioned approach is described
by means of a running example in the domain of Industry 4.0, where a robotic
arm endowed with a visual perception component, implemented with machine
learning, collaborates with a human operator for a production-relevant task.
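To make the envisioned model concrete, here is a minimal sketch of its ingredients; the names (Goal, RiskEvent, Indicator, update_from_evidence) and the distance threshold in the usage example are illustrative assumptions, not the authors' notation:

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Indicator:
    """Domain-specific indicator computed from run-time evidence (hypothetical)."""
    name: str
    threshold: float            # value at or below which a hazard is assumed
    value: float = float("inf")

    def exposed(self) -> bool:
        return self.value <= self.threshold

@dataclass
class RiskEvent:
    """An event that potentially exposes humans to a hazard."""
    description: str
    indicators: List[Indicator]

    def triggered(self) -> bool:
        return any(i.exposed() for i in self.indicators)

@dataclass
class Goal:
    """A system goal threatened by one or more risk events."""
    description: str
    risk_events: List[RiskEvent] = field(default_factory=list)

def update_from_evidence(goals: List[Goal], evidence: Dict[str, float]) -> List[RiskEvent]:
    """Feed run-time evidence into the risk model; return the triggered risk events."""
    triggered = []
    for goal in goals:
        for event in goal.risk_events:
            for ind in event.indicators:
                if ind.name in evidence:
                    ind.value = evidence[ind.name]
            if event.triggered():
                triggered.append(event)
    return triggered

# Running-example usage (values invented): a robotic arm with a learned visual
# perception component monitors its estimated distance to the human operator.
distance = Indicator(name="human_robot_distance_m", threshold=0.5)
contact = RiskEvent("harmful contact with the operator", [distance])
goal = Goal("complete the task without harming the operator", [contact])
print(update_from_evidence([goal], {"human_robot_distance_m": 0.3}))
```

In this sketch, run-time evidence closes the loop described in the abstract: monitored indicator values flow back into the model, which reports the risk events they trigger.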
Related papers
- Exploring the Adversarial Vulnerabilities of Vision-Language-Action Models in Robotics [70.93622520400385]
This paper systematically quantifies the robustness of VLA-based robotic systems.
We introduce an untargeted position-aware attack objective that leverages spatial foundations to destabilize robotic actions.
We also design an adversarial patch generation approach that places a small, colorful patch within the camera's view, effectively executing the attack in both digital and physical environments.
arXiv Detail & Related papers (2024-11-18T01:52:20Z)
- HAICOSYSTEM: An Ecosystem for Sandboxing Safety Risks in Human-AI Interactions [76.42274173122328]
We present HAICOSYSTEM, a framework examining AI agent safety within diverse and complex social interactions.
We run 1840 simulations based on 92 scenarios across seven domains (e.g., healthcare, finance, education).
Our experiments show that state-of-the-art LLMs, both proprietary and open-sourced, exhibit safety risks in over 50% of cases.
arXiv Detail & Related papers (2024-09-24T19:47:21Z)
- EAIRiskBench: Towards Evaluating Physical Risk Awareness for Task Planning of Foundation Model-based Embodied AI Agents [47.69642609574771]
Embodied artificial intelligence (EAI) integrates advanced AI models into physical entities for real-world interaction.
Foundation models, serving as the "brain" of EAI agents for high-level task planning, have shown promising results.
However, the deployment of these agents in physical environments presents significant safety challenges.
This study introduces EAIRiskBench, a novel framework for automated physical risk assessment in EAI scenarios.
arXiv Detail & Related papers (2024-08-08T13:19:37Z)
- Generative AI Models: Opportunities and Risks for Industry and Authorities [1.3914994102950027]
Generative AI models are capable of performing a wide range of tasks that traditionally require creativity and human understanding.
They learn patterns from existing data during training and can subsequently generate new content.
The use of generative AI models introduces novel IT security risks that need to be considered.
arXiv Detail & Related papers (2024-06-07T08:34:30Z)
- Risks and Opportunities of Open-Source Generative AI [64.86989162783648]
Applications of Generative AI (Gen AI) are expected to revolutionize a number of different areas, ranging from science & medicine to education.
The prospect of these seismic changes has triggered a lively debate about the potential risks of the technology and has resulted in calls for tighter regulation.
This regulation is likely to put at risk the budding field of open-source generative AI.
arXiv Detail & Related papers (2024-05-14T13:37:36Z)
- A Framework for Exploring the Consequences of AI-Mediated Enterprise Knowledge Access and Identifying Risks to Workers [3.4568218861862556]
This paper presents the Consequence-Mechanism-Risk framework to identify risks to workers from AI-mediated enterprise knowledge access systems.
We have drawn on wide-ranging literature detailing risks to workers, and categorised them as risks to worker value, power, and wellbeing.
Future work could apply this framework to other technological systems to promote the protection of workers and other groups.
arXiv Detail & Related papers (2023-12-08T17:05:40Z)
- Managing extreme AI risks amid rapid progress [171.05448842016125]
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z)
- Learning Risk-Aware Quadrupedal Locomotion using Distributional Reinforcement Learning [12.156082576280955]
Deployment in hazardous environments requires robots to understand the risks associated with their actions and movements to prevent accidents.
We propose a risk-sensitive locomotion training method employing distributional reinforcement learning to consider safety explicitly.
We show emergent risk-sensitive locomotion behavior in simulation and on the quadrupedal robot ANYmal.
arXiv Detail & Related papers (2023-09-25T16:05:32Z)
- An Overview of Catastrophic AI Risks [38.84933208563934]
This paper provides an overview of the main sources of catastrophic AI risks, which we organize into four categories:
malicious use, in which individuals or groups intentionally use AIs to cause harm;
AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs;
organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents;
and rogue AIs, describing the inherent difficulty in controlling agents far more intelligent than humans.
arXiv Detail & Related papers (2023-06-21T03:35:06Z)
- Collaborative AI Needs Stronger Assurances Driven by Risks [5.657409854809804]
Collaborative AI systems (CAISs) aim to work together with humans in a shared space to achieve a common goal.
Building such systems with strong assurances of compliance with requirements, domain-specific standards, and regulations is of the greatest importance.
arXiv Detail & Related papers (2021-12-01T15:24:21Z)
- Risk-Sensitive Sequential Action Control with Multi-Modal Human Trajectory Forecasting for Safe Crowd-Robot Interaction [55.569050872780224]
We present an online framework for safe crowd-robot interaction based on risk-sensitive optimal control, wherein the risk is modeled by the entropic risk measure (sketched after this entry).
Our modular approach decouples the crowd-robot interaction into learning-based prediction and model-based control.
A simulation study and a real-world experiment show that the proposed framework can accomplish safe and efficient navigation while avoiding collisions with more than 50 humans in the scene.
arXiv Detail & Related papers (2020-09-12T02:02:52Z)
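As an aside on the last entry above, the entropic risk measure it mentions has a standard closed form, ρ_θ(Z) = (1/θ) log E[exp(θ Z)]; the following is a minimal Monte Carlo sketch, with the cost samples and θ settings made up purely for illustration:

```python
import numpy as np

def entropic_risk(costs, theta: float) -> float:
    """Estimate rho_theta(Z) = (1/theta) * log E[exp(theta * Z)] from samples.

    theta -> 0 recovers the expected cost; larger theta > 0 is more
    risk-averse, weighting high-cost outcomes more heavily.
    """
    x = theta * np.asarray(costs, dtype=float)
    m = x.max()  # log-sum-exp shift for numerical stability
    return (m + np.log(np.mean(np.exp(x - m)))) / theta

# Hypothetical collision costs under a multi-modal forecast of human
# trajectories: mostly benign outcomes plus a rare near-collision mode.
rng = np.random.default_rng(0)
costs = np.concatenate([rng.normal(1.0, 0.3, 900),
                        rng.normal(4.0, 0.5, 100)])
print(f"mean cost:           {costs.mean():.3f}")
print(f"entropic, theta=0.1: {entropic_risk(costs, 0.1):.3f}")
print(f"entropic, theta=2.0: {entropic_risk(costs, 2.0):.3f}")
```

With larger θ the estimate sits well above the mean cost, which is the risk-averse behavior a risk-sensitive controller exploits to keep a safety margin against pessimistic outcomes.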
This list is automatically generated from the titles and abstracts of the papers on this site.