Unravelling Responsibility for AI
- URL: http://arxiv.org/abs/2308.02608v2
- Date: Wed, 8 May 2024 14:37:59 GMT
- Title: Unravelling Responsibility for AI
- Authors: Zoe Porter, Philippa Ryan, Phillip Morgan, Joanna Al-Qaddoumi, Bernard Twomey, John McDermid, Ibrahim Habli
- Abstract summary: It is widely acknowledged that we need to establish where responsibility lies for the outputs and impacts of AI-enabled systems.
This paper draws upon central distinctions in philosophy and law to clarify the concept of responsibility for AI.
- Score: 0.8836921728313208
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: It is widely acknowledged that we need to establish where responsibility lies for the outputs and impacts of AI-enabled systems. But without a clear and precise understanding of what "responsibility" means, deliberations about where responsibility lies will be, at best, unfocused and incomplete and, at worst, misguided. To address this concern, this paper draws upon central distinctions in philosophy and law to clarify the concept of responsibility for AI for policymakers, practitioners, researchers and students from non-philosophical and non-legal backgrounds. Taking the three-part formulation "Actor A is responsible for Occurrence O," the paper unravels the concept of responsibility to clarify that there are different possibilities of who is responsible for AI, the senses in which they are responsible, and aspects of events they are responsible for. Criteria and conditions for fitting attributions of responsibility in the core senses (causal responsibility, role-responsibility, liability responsibility and moral responsibility) are articulated to promote an understanding of when responsibility attributions would be inappropriate or unjust. The analysis is presented with a graphical notation to facilitate informal diagrammatic reasoning and discussion about specific cases. It is illustrated by application to a scenario of a fatal collision between an autonomous AI-enabled ship and a traditional, crewed vessel at sea.
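To make the abstract's three-part formulation concrete, the sketch below models "Actor A is responsible for Occurrence O" and the four core senses of responsibility as a small Python data structure, populated with hypothetical attributions from the ship-collision scenario. This is a minimal illustration for discussion: the class names, actors and occurrences are assumptions of this sketch, not the paper's own notation or conclusions.

```python
from dataclasses import dataclass
from enum import Enum


class Sense(Enum):
    """The four core senses of responsibility distinguished in the paper."""
    CAUSAL = "causal responsibility"
    ROLE = "role-responsibility"
    LIABILITY = "liability responsibility"
    MORAL = "moral responsibility"


@dataclass(frozen=True)
class Attribution:
    """One instance of the three-part formulation
    'Actor A is responsible for Occurrence O', in a given sense."""
    actor: str        # A: a human, an organisation, or (contestably) the AI system
    occurrence: str   # O: the output, event, or aspect of events in question
    sense: Sense      # the sense in which A is responsible for O

    def __str__(self) -> str:
        return f"{self.actor} bears {self.sense.value} for {self.occurrence}"


# Hypothetical attributions for the paper's illustrative scenario:
# a fatal collision between an autonomous AI-enabled ship and a crewed vessel.
attributions = [
    Attribution("the ship's navigation system", "the collision course", Sense.CAUSAL),
    Attribution("the remote operator", "monitoring the voyage", Sense.ROLE),
    Attribution("the operating company", "compensating the victims", Sense.LIABILITY),
    Attribution("the developers", "foreseeable gaps in collision avoidance", Sense.MORAL),
]

for a in attributions:
    print(a)
```

A structure like this mirrors the paper's graphical notation only loosely; its point is that each attribution must name an actor, an occurrence, and a sense before its fittingness can be assessed.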
Related papers
- Causal Responsibility Attribution for Human-AI Collaboration [62.474732677086855]
This paper presents a causal framework using Structural Causal Models (SCMs) to systematically attribute responsibility in human-AI systems.
Two case studies illustrate the framework's adaptability in diverse human-AI collaboration scenarios.
arXiv Detail & Related papers (2024-11-05T17:17:45Z)
- The Odyssey of Commonsense Causality: From Foundational Benchmarks to Cutting-Edge Reasoning [70.16523526957162]
Understanding commonsense causality helps people better grasp the principles of the real world.
Despite its significance, a systematic exploration of this topic is notably lacking.
Our work aims to provide a systematic overview, update scholars on recent advances, and offer a pragmatic guide for beginners.
arXiv Detail & Related papers (2024-06-27T16:30:50Z)
- Attributing Responsibility in AI-Induced Incidents: A Computational Reflective Equilibrium Framework for Accountability [13.343937277604892]
The pervasive integration of Artificial Intelligence (AI) has introduced complex challenges for responsibility and accountability in the event of incidents involving AI-enabled systems.
This work proposes a coherent and ethically acceptable responsibility attribution framework for all stakeholders.
arXiv Detail & Related papers (2024-04-25T18:11:03Z)
- Towards Responsible AI in Banking: Addressing Bias for Fair Decision-Making [69.44075077934914]
"Responsible AI" emphasizes the critical nature of addressing biases within the development of a corporate culture.
This thesis is structured around three fundamental pillars: understanding bias, mitigating bias, and accounting for bias.
In line with open-source principles, we have released Bias On Demand and FairView as accessible Python packages.
arXiv Detail & Related papers (2024-01-13T14:07:09Z)
- What's my role? Modelling responsibility for AI-based safety-critical systems [1.0549609328807565]
It is difficult for developers and manufacturers to be held responsible for the harmful behaviour of an AI-SCS.
A human operator can become a "liability sink", absorbing blame for the consequences of AI-SCS outputs they were not responsible for creating.
This paper considers different senses of responsibility (role, moral, legal and causal), and how they apply in the context of AI-SCS safety.
arXiv Detail & Related papers (2023-12-30T13:45:36Z)
- Responsibility in Extensive Form Games [1.4104545468525629]
Two different forms of responsibility, counterfactual and seeing-to-it, have been extensively discussed in philosophy and AI.
This paper proposes a definition of seeing-to-it responsibility for extensive form games that amalgamates the two modalities.
It shows that although these two forms of responsibility are not enough to ascribe responsibility in every possible situation, this gap does not exist if higher-order responsibility is taken into account.
arXiv Detail & Related papers (2023-12-12T10:41:17Z)
- Ethical Considerations and Policy Implications for Large Language Models: Guiding Responsible Development and Deployment [48.72819550642584]
This paper examines the ethical considerations and implications of large language models (LLMs) in generating content.
It highlights the potential for both positive and negative uses of generative AI programs and explores the challenges in assigning responsibility for their outputs.
arXiv Detail & Related papers (2023-08-01T07:21:25Z)
- Causal Fairness Analysis [68.12191782657437]
We introduce a framework for understanding, modeling, and possibly solving issues of fairness in decision-making settings.
The main insight of our approach is to link the quantification of the disparities present in the observed data with the underlying, and often unobserved, collection of causal mechanisms.
Our effort culminates in the Fairness Map, which is the first systematic attempt to organize and explain the relationship between different criteria found in the literature.
arXiv Detail & Related papers (2022-07-23T01:06:34Z)
- Fairness in Agreement With European Values: An Interdisciplinary Perspective on AI Regulation [61.77881142275982]
This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI, and discusses how AI regulations address them.
We first look at AI and fairness through the lenses of law, (AI) industry, sociotechnology, and (moral) philosophy, and present various perspectives.
We identify and propose the roles AI regulation should play to make the AI Act a success in terms of AI fairness concerns.
arXiv Detail & Related papers (2022-06-08T12:32:08Z)
- Responsible AI and Its Stakeholders [14.129366395072026]
We discuss three notions of responsibility (i.e., blameworthiness, accountability, and liability) for all stakeholders, including AI, and suggest the roles of jurisdiction and the general public in this matter.
arXiv Detail & Related papers (2020-04-23T19:27:19Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.