What's my role? Modelling responsibility for AI-based safety-critical systems
- URL: http://arxiv.org/abs/2401.09459v1
- Date: Sat, 30 Dec 2023 13:45:36 GMT
- Title: What's my role? Modelling responsibility for AI-based safety-critical systems
- Authors: Philippa Ryan, Zoe Porter, Joanna Al-Qaddoumi, John McDermid, Ibrahim Habli
- Abstract summary: It is difficult for developers and manufacturers to be held responsible for harmful behaviour of an AI-SCS.
A human operator can become a "liability sink", absorbing blame for the consequences of AI-SCS outputs they were not responsible for creating.
This paper considers different senses of responsibility (role, moral, legal and causal), and how they apply in the context of AI-SCS safety.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: AI-Based Safety-Critical Systems (AI-SCS) are increasingly being deployed in
the real world. These can pose a risk of harm to people and the environment.
Reducing that risk is an overarching priority during development and operation.
As more AI-SCS become autonomous, a layer of risk management via human
intervention has been removed. Following an accident it will be important to
identify causal contributions and the different responsible actors behind those
to learn from mistakes and prevent similar future events. Many authors have
commented on the "responsibility gap" where it is difficult for developers and
manufacturers to be held responsible for harmful behaviour of an AI-SCS. This
is due to the complex development cycle for AI, uncertainty in AI performance,
and the dynamic operating environment. A human operator can become a "liability
sink", absorbing blame for the consequences of AI-SCS outputs they were not
responsible for creating and may not fully understand.
This cross-disciplinary paper considers different senses of responsibility
(role, moral, legal and causal), and how they apply in the context of AI-SCS
safety. We use a core concept (Actor(A) is responsible for Occurrence(O)) to
create role-responsibility models, producing a practical method to capture
responsibility relationships and provide clarity on the previously identified
responsibility issues. Our paper demonstrates the approach with two examples: a
retrospective analysis of the fatal collision involving an autonomous vehicle
in Tempe, Arizona, and a safety-focused predictive role-responsibility
analysis for an AI-based diabetes co-morbidity predictor. In both examples our
primary focus is on safety, aiming to reduce unfair or disproportionate blame
being placed on operators or developers. We present a discussion and avenues
for future research.
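As an illustration of the core concept, the relation Actor(A) is responsible for Occurrence(O) can be captured as a small set of typed triples and queried for gaps. The following Python sketch is a hypothetical reading of that idea, not the authors' tooling; all class, field, and example names are illustrative assumptions.
```python
from dataclasses import dataclass, field

# Hypothetical sketch of the paper's core concept:
# Actor(A) is responsible for Occurrence(O), qualified by a
# sense of responsibility (role, moral, legal, or causal).
# Names and structure are illustrative, not the authors' method.

SENSES = {"role", "moral", "legal", "causal"}

@dataclass(frozen=True)
class Responsibility:
    actor: str        # e.g. "Safety driver", "Developer"
    occurrence: str   # e.g. "Monitor the road ahead"
    sense: str        # one of SENSES

    def __post_init__(self):
        if self.sense not in SENSES:
            raise ValueError(f"unknown sense of responsibility: {self.sense}")

@dataclass
class ResponsibilityModel:
    relations: list = field(default_factory=list)

    def add(self, actor: str, occurrence: str, sense: str = "role") -> None:
        self.relations.append(Responsibility(actor, occurrence, sense))

    def responsibilities_of(self, actor: str) -> list:
        """All occurrences a given actor is responsible for."""
        return [r for r in self.relations if r.actor == actor]

    def actors_for(self, occurrence: str) -> list:
        """All actors holding some responsibility for an occurrence."""
        return [r for r in self.relations if r.occurrence == occurrence]

# Usage: a fragment of a retrospective model for an AV collision.
model = ResponsibilityModel()
model.add("Safety driver", "Monitor the road ahead", "role")
model.add("Developer", "Validate pedestrian detection", "role")
model.add("Manufacturer", "Define operational design domain", "role")

assert model.actors_for("Monitor the road ahead")[0].actor == "Safety driver"
```
Queries over such a model make responsibility gaps (occurrences with no associated actor) and potential liability sinks (a single operator accumulating every relation) straightforward to spot.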
Related papers
- Causal Responsibility Attribution for Human-AI Collaboration
This paper presents a causal framework using Structural Causal Models (SCMs) to systematically attribute responsibility in human-AI systems.
Two case studies illustrate the framework's adaptability in diverse human-AI collaboration scenarios.
arXiv Detail & Related papers (2024-11-05T17:17:45Z)
- Engineering Trustworthy AI: A Developer Guide for Empirical Risk Minimization
Key requirements for trustworthy AI can be translated into design choices for the components of empirical risk minimization.
We hope to provide actionable guidance for building AI systems that meet emerging standards for trustworthiness of AI.
arXiv Detail & Related papers (2024-10-25T07:53:32Z)
- Risk Alignment in Agentic AI Systems
Agentic AIs capable of undertaking complex actions with little supervision raise new questions about how to safely create and align such systems with users, developers, and society.
Risk alignment will matter for user satisfaction and trust, but it will also have important ramifications for society more broadly.
We present three papers that bear on key normative and technical aspects of these questions.
arXiv Detail & Related papers (2024-10-02T18:21:08Z)
- Work-in-Progress: Crash Course: Can (Under Attack) Autonomous Driving Beat Human Drivers?
This paper evaluates the inherent risks in autonomous driving by examining the current landscape of AVs.
We develop specific claims highlighting the delicate balance between the advantages of AVs and potential security challenges in real-world scenarios.
arXiv Detail & Related papers (2024-05-14T09:42:21Z)
- A risk-based approach to assessing liability risk for AI-driven harms considering EU liability directive
Historical instances of harm caused by AI have led to the European Union establishing an AI Liability Directive.
The future ability of a provider to contest a product liability claim will depend on good practices adopted in designing, developing, and maintaining AI systems.
This paper provides a risk-based approach to examining liability for AI-driven injuries.
arXiv Detail & Related papers (2023-12-18T15:52:43Z)
- Control Risk for Potential Misuse of Artificial Intelligence in Science
We aim to raise awareness of the dangers of AI misuse in science.
We highlight real-world examples of misuse in chemical science.
We propose a system called SciGuard to control misuse risks for AI models in science.
arXiv Detail & Related papers (2023-12-11T18:50:57Z)
- Managing extreme AI risks amid rapid progress
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z)
- Unravelling Responsibility for AI
It is widely acknowledged that we need to establish where responsibility lies for the outputs and impacts of AI-enabled systems.
This paper draws upon central distinctions in philosophy and law to clarify the concept of responsibility for AI.
arXiv Detail & Related papers (2023-08-04T13:12:17Z)
- AI Maintenance: A Robustness Perspective
We introduce highlighted robustness challenges in the AI lifecycle and motivate AI maintenance by making analogies to car maintenance.
We propose an AI model inspection framework to detect and mitigate robustness risks.
Our proposal for AI maintenance facilitates robustness assessment, status tracking, risk scanning, model hardening, and regulation throughout the AI lifecycle.
arXiv Detail & Related papers (2023-01-08T15:02:38Z)
- Operationalising Responsible AI Using a Pattern-Oriented Approach: A Case Study on Chatbots in Financial Services
Responsible AI is the practice of developing and using AI systems in a way that benefits humans, society, and the environment.
Various responsible AI principles have been released recently, but they are abstract and difficult to put into practice.
To bridge the gap, we adopt a pattern-oriented approach and build a responsible AI pattern catalogue.
arXiv Detail & Related papers (2023-01-03T23:11:03Z)
This list is automatically generated from the titles and abstracts of the papers on this site.