From decision aiding to the massive use of algorithms: where does the responsibility stand?
- URL: http://arxiv.org/abs/2406.13140v1
- Date: Wed, 19 Jun 2024 01:10:34 GMT
- Title: From decision aiding to the massive use of algorithms: where does the responsibility stand?
- Authors: Odile Bellenguez, Nadia Brauner, Alexis Tsoukiàs
- Abstract summary: We show how the fact that they cannot embrace the full situations of use and their consequences leads to an unreachable limit.
On the other hand, using technology is never free of responsibility, even if there are also limits to be characterised.
The article is structured in such a way as to show how the limits have gradually evolved, leaving unthought-of issues and a failure to share responsibility.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Within the broad debate on the ethics of algorithms, this paper proposes an analysis of human responsibility. On the one hand, algorithms are designed by some humans, who bear a part of the responsibility for the results and unexpected impacts. Nevertheless, we show how the fact that they cannot embrace the full situations of use and their consequences leads to an unreachable limit. On the other hand, using technology is never free of responsibility, even if there are also limits to be characterised. Massive use by non-professional users introduces additional questions that modify the possibilities of being ethically responsible. The article is structured in such a way as to show how the limits have gradually evolved, leaving unthought-of issues and a failure to share responsibility.
Related papers
- Beware of "Explanations" of AI [16.314859121110945]
Understanding the decisions made and actions taken by increasingly complex AI systems remains a key challenge.
This has led to an expanding field of research in explainable artificial intelligence (XAI).
The question of what constitutes a "good" explanation is dependent on the goals, stakeholders, and context.
arXiv Detail & Related papers (2025-04-09T11:31:08Z)
- A theory of appropriateness with applications to generative artificial intelligence [56.23261221948216]
We need to understand how appropriateness guides human decision making in order to properly evaluate AI decision making and improve it.
This paper presents a theory of appropriateness: how it functions in human society, how it may be implemented in the brain, and what it means for responsible deployment of generative AI technology.
arXiv Detail & Related papers (2024-12-26T00:54:03Z)
- Causal Responsibility Attribution for Human-AI Collaboration [62.474732677086855]
This paper presents a causal framework using Structural Causal Models (SCMs) to systematically attribute responsibility in human-AI systems.
Two case studies illustrate the framework's adaptability in diverse human-AI collaboration scenarios.
arXiv Detail & Related papers (2024-11-05T17:17:45Z)
- The Illusion of Competence: Evaluating the Effect of Explanations on Users' Mental Models of Visual Question Answering Systems [6.307898834231964]
We examine how users perceive the limitations of an AI system when it encounters a task that it cannot perform perfectly.
We employ a visual question answering and explanation task in which we control the AI system's limitations by manipulating the visual inputs.
Our goal is to determine whether participants can perceive the limitations of the system.
arXiv Detail & Related papers (2024-06-27T13:44:03Z)
- What's my role? Modelling responsibility for AI-based safety-critical systems [1.0549609328807565]
It is difficult for developers and manufacturers to be held responsible for harmful behaviour of an AI-SCS.
A human operator can become a "liability sink" absorbing blame for the consequences of AI-SCS outputs they weren't responsible for creating.
This paper considers different senses of responsibility (role, moral, legal and causal), and how they apply in the context of AI-SCS safety.
arXiv Detail & Related papers (2023-12-30T13:45:36Z)
- Seamful XAI: Operationalizing Seamful Design in Explainable AI [59.89011292395202]
Mistakes in AI systems are inevitable, arising from both technical limitations and sociotechnical gaps.
We propose that seamful design can foster AI explainability by revealing sociotechnical and infrastructural mismatches.
We explore this process with 43 AI practitioners and real end-users.
arXiv Detail & Related papers (2022-11-12T21:54:05Z)
- Mitigating Covertly Unsafe Text within Natural Language Systems [55.26364166702625]
Uncontrolled systems may generate recommendations that lead to injury or life-threatening consequences.
In this paper, we distinguish types of text that can lead to physical harm and establish one particularly underexplored category: covertly unsafe text.
arXiv Detail & Related papers (2022-10-17T17:59:49Z)
- When to Make Exceptions: Exploring Language Models as Accounts of Human Moral Judgment [96.77970239683475]
AI systems need to be able to understand, interpret and predict human moral judgments and decisions.
A central challenge for AI safety is capturing the flexibility of the human moral mind.
We present a novel challenge set consisting of rule-breaking question answering.
arXiv Detail & Related papers (2022-10-04T09:04:27Z)
- Responsibility: An Example-based Explainable AI approach via Training Process Inspection [1.4610038284393165]
We present a novel XAI approach that identifies the most responsible training example for a particular decision.
This example can then be shown as an explanation: "this is what I (the AI) learned that led me to do that".
Our results demonstrate that responsibility can help improve accuracy for both human end users and secondary ML models.
arXiv Detail & Related papers (2022-09-07T19:30:01Z)
- Transcending XAI Algorithm Boundaries through End-User-Inspired Design [27.864338632191608]
Lacking explainability-focused functional support for end users may hinder the safe and responsible use of AI in high-stakes domains.
Our work shows that grounding the technical problem in end users' use of XAI can inspire new research questions.
Such end-user-inspired research questions have the potential to promote social good by democratizing AI and ensuring the responsible use of AI in critical domains.
arXiv Detail & Related papers (2022-08-18T09:44:51Z)
- The Conflict Between Explainable and Accountable Decision-Making Algorithms [10.64167691614925]
Decision-making algorithms are being used in important decisions, such as who should be enrolled in health care programs and who should be hired.
The XAI initiative aims to make algorithms explainable to comply with legal requirements, promote trust, and maintain accountability.
This paper questions whether and to what extent explainability can help solve the responsibility issues posed by autonomous AI systems.
arXiv Detail & Related papers (2022-05-11T07:19:28Z)
- The Who in XAI: How AI Background Shapes Perceptions of AI Explanations [61.49776160925216]
We conduct a mixed-methods study of how two different groups--people with and without AI background--perceive different types of AI explanations.
We find that (1) both groups showed unwarranted faith in numbers for different reasons and (2) each group found value in different explanations beyond their intended design.
arXiv Detail & Related papers (2021-07-28T17:32:04Z)
- A Case for Humans-in-the-Loop: Decisions in the Presence of Erroneous Algorithmic Scores [85.12096045419686]
We study the adoption of an algorithmic tool used to assist child maltreatment hotline screening decisions.
We first show that humans do alter their behavior when the tool is deployed.
We show that humans are less likely to adhere to the machine's recommendation when the score displayed is an incorrect estimate of risk.
arXiv Detail & Related papers (2020-02-19T07:27:32Z)