Responsible AI and Its Stakeholders
- URL: http://arxiv.org/abs/2004.11434v1
- Date: Thu, 23 Apr 2020 19:27:19 GMT
- Title: Responsible AI and Its Stakeholders
- Authors: Gabriel Lima, Meeyoung Cha
- Abstract summary: We discuss three notions of responsibility (i.e., blameworthiness, accountability, and liability) for all stakeholders, including AI, and suggest the roles of jurisdiction and the general public in this matter.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Responsible Artificial Intelligence (AI) proposes a framework that holds all
stakeholders involved in the development of AI responsible for their
systems. It fails, however, to accommodate the possibility of holding AI
responsible per se, which could close some legal and moral gaps concerning the
deployment of autonomous and self-learning systems. We discuss three notions of
responsibility (i.e., blameworthiness, accountability, and liability) for all
stakeholders, including AI, and suggest the roles of jurisdiction and the
general public in this matter.
Related papers
- Causal Responsibility Attribution for Human-AI Collaboration
This paper presents a causal framework using Structural Causal Models (SCMs) to systematically attribute responsibility in human-AI systems.
Two case studies illustrate the framework's adaptability in diverse human-AI collaboration scenarios.
arXiv Detail & Related papers (2024-11-05T17:17:45Z) - AI Governance and Accountability: An Analysis of Anthropic's Claude
This paper examines the AI governance landscape, focusing on Anthropic's Claude, a foundational AI model.
We analyze Claude through the lens of the NIST AI Risk Management Framework and the EU AI Act, identifying potential threats and proposing mitigation strategies.
arXiv Detail & Related papers (2024-05-02T23:37:06Z) - Responsible AI Research Needs Impact Statements Too [51.37368267352821]
Like all research, work in responsible artificial intelligence (RAI), ethical AI, or ethics in AI can have unintended consequences and is no exception to the need for impact statements.
arXiv Detail & Related papers (2023-11-20T14:02:28Z) - Managing extreme AI risks amid rapid progress [171.05448842016125]
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z) - Unravelling Responsibility for AI [0.8836921728313208]
It is widely acknowledged that we need to establish where responsibility lies for the outputs and impacts of AI-enabled systems.
This paper draws upon central distinctions in philosophy and law to clarify the concept of responsibility for AI.
arXiv Detail & Related papers (2023-08-04T13:12:17Z) - Connecting the Dots in Trustworthy Artificial Intelligence: From AI
Principles, Ethics, and Key Requirements to Responsible AI Systems and
Regulation [22.921683578188645]
We argue that attaining truly trustworthy AI concerns the trustworthiness of all processes and actors that are part of the system's life cycle.
A more holistic vision contemplates several essential axes, including the global principles for the ethical use and development of AI-based systems, a philosophical take on AI ethics, and a risk-based approach to AI regulation.
Our multidisciplinary vision of trustworthy AI culminates in a debate on the diverging views published lately about the future of AI.
arXiv Detail & Related papers (2023-05-02T09:49:53Z) - Cybertrust: From Explainable to Actionable and Interpretable AI (AI2)
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z) - Building Bridges: Generative Artworks to Explore AI Ethics
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can help address this challenge by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z) - AI and Ethics -- Operationalising Responsible AI
Building and maintaining public trust in AI has been identified as the key to successful and sustainable innovation.
This chapter discusses the challenges related to operationalizing ethical AI principles and presents an integrated view that covers high-level ethical AI principles.
arXiv Detail & Related papers (2021-05-19T00:55:40Z) - Human Perceptions on Moral Responsibility of AI: A Case Study in AI-Assisted Bail Decision-Making
We measure people's perceptions of eight different notions of moral responsibility concerning AI and human agents.
We show that AI agents are held causally responsible and blamed similarly to human agents for an identical task.
We find that people expect both AI and human decision-makers and advisors to justify their decisions regardless of their nature.
arXiv Detail & Related papers (2021-02-01T04:07:38Z) - Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims
AI developers need to make verifiable claims to which they can be held accountable.
This report suggests various steps that different stakeholders can take to improve the verifiability of claims made about AI systems.
We analyze ten mechanisms for this purpose--spanning institutions, software, and hardware--and make recommendations aimed at implementing, exploring, or improving those mechanisms.
arXiv Detail & Related papers (2020-04-15T17:15:35Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.