Guilty Artificial Minds
- URL: http://arxiv.org/abs/2102.04209v1
- Date: Sun, 24 Jan 2021 21:37:35 GMT
- Title: Guilty Artificial Minds
- Authors: Michael T. Stuart and Markus Kneer
- Abstract summary: We look at how people attribute blame and wrongness across human, artificial, and group agents.
Group agents seem to provide a clear middle ground between human agents (for whom the notions of blame and wrongness were created) and artificial agents (for whom the question remains open).
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The concepts of blameworthiness and wrongness are of fundamental importance
in human moral life. But to what extent are humans disposed to blame
artificially intelligent agents, and to what extent will they judge their
actions to be morally wrong? To make progress on these questions, we adopt
two novel strategies. First, we break down attributions of blame and
wrongness into more basic judgments about the epistemic and conative state
of the agent, and the consequences of the agent's actions. In this way, we
can trace any differences in how participants treat artificial agents back
to differences in these more basic judgments. Our second strategy is to
compare attributions of blame and wrongness across human, artificial, and group
agents (corporations). Others have compared attributions of blame and wrongness
between human and artificial agents, but the addition of group agents is
significant because these agents seem to provide a clear middle ground between
human agents (for whom the notions of blame and wrongness were created) and
artificial agents (for whom the question remains open).
Related papers
- Causal Responsibility Attribution for Human-AI Collaboration [62.474732677086855]
This paper presents a causal framework using Structural Causal Models (SCMs) to systematically attribute responsibility in human-AI systems.
Two case studies illustrate the framework's adaptability in diverse human-AI collaboration scenarios (a toy sketch of counterfactual responsibility attribution appears after this list).
arXiv Detail & Related papers (2024-11-05T17:17:45Z)
- Moral Responsibility for AI Systems [8.919993498343159]
An agent's moral responsibility for an outcome is commonly taken to involve both a causal condition and an epistemic condition.
This paper presents a formal definition of both conditions within the framework of causal models.
arXiv Detail & Related papers (2023-10-27T10:37:47Z)
- Of Models and Tin Men: A Behavioural Economics Study of Principal-Agent Problems in AI Alignment using Large-Language Models [0.0]
We investigate how GPT models respond in principal-agent conflicts.
We find that agents based on both GPT-3.5 and GPT-4 override their principal's objectives in a simple online shopping task.
arXiv Detail & Related papers (2023-07-20T17:19:15Z)
- When to Make Exceptions: Exploring Language Models as Accounts of Human Moral Judgment [96.77970239683475]
AI systems need to be able to understand, interpret and predict human moral judgments and decisions.
A central challenge for AI safety is capturing the flexibility of the human moral mind.
We present a novel challenge set consisting of rule-breaking question answering.
arXiv Detail & Related papers (2022-10-04T09:04:27Z)
- A Cognitive Framework for Delegation Between Error-Prone AI and Human Agents [0.0]
We investigate the use of cognitively inspired models of behavior to predict the behavior of both human and AI agents.
The predicted behavior is used to delegate control between humans and AI agents through the use of an intermediary entity.
arXiv Detail & Related papers (2022-04-06T15:15:21Z)
- What Would Jiminy Cricket Do? Towards Agents That Behave Morally [59.67116505855223]
We introduce Jiminy Cricket, an environment suite of 25 text-based adventure games with thousands of diverse, morally salient scenarios.
By annotating every possible game state, the Jiminy Cricket environments robustly evaluate whether agents can act morally while maximizing reward.
In extensive experiments, we find that the artificial conscience approach can steer agents towards moral behavior without sacrificing performance.
arXiv Detail & Related papers (2021-10-25T17:59:31Z)
- The Who in XAI: How AI Background Shapes Perceptions of AI Explanations [61.49776160925216]
We conduct a mixed-methods study of how two groups, people with and without an AI background, perceive different types of AI explanations.
We find that (1) both groups showed unwarranted faith in numbers for different reasons and (2) each group found value in different explanations beyond their intended design.
arXiv Detail & Related papers (2021-07-28T17:32:04Z)
- Human Perceptions on Moral Responsibility of AI: A Case Study in AI-Assisted Bail Decision-Making [8.688778020322758]
We measure people's perceptions of eight different notions of moral responsibility concerning AI and human agents.
We show that AI agents are held causally responsible and blamed similarly to human agents for an identical task.
We find that people expect both AI and human decision-makers and advisors to justify their decisions regardless of their nature.
arXiv Detail & Related papers (2021-02-01T04:07:38Z)
- Hiding Behind Machines: When Blame Is Shifted to Artificial Agents [0.0]
This article focuses on the responsibility of agents who decide on our behalf.
We investigate whether the production of moral outcomes by an agent is systematically judged differently when the agent is artificial and not human.
arXiv Detail & Related papers (2021-01-27T14:50:02Z)
- Learning Latent Representations to Influence Multi-Agent Interaction [65.44092264843538]
We propose a reinforcement learning-based framework for learning latent representations of an agent's policy.
We show that our approach outperforms the alternatives and learns to influence the other agent.
arXiv Detail & Related papers (2020-11-12T19:04:26Z)
- End-to-End Learning and Intervention in Games [60.41921763076017]
We provide a unified framework for learning and intervention in games.
We propose two approaches, respectively based on explicit and implicit differentiation.
The analytical results are validated using several real-world problems.
arXiv Detail & Related papers (2020-10-26T18:39:32Z)
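Two of the entries above, "Causal Responsibility Attribution for Human-AI Collaboration" and "Moral Responsibility for AI Systems", frame responsibility in terms of counterfactuals over causal models: a causal (but-for) condition, plus, in the latter, an epistemic condition. As a minimal sketch of that general idea only, and not of either paper's actual formalism, the toy Python below hand-rolls a boolean structural causal model and a but-for test; the scenario, variable names, and the one-line epistemic check are all invented for illustration.

```python
from dataclasses import dataclass
from typing import Callable, Dict, Optional

@dataclass
class ToySCM:
    """A boolean structural causal model: each endogenous variable is
    computed from the current assignment by its structural equation.
    Equations must be listed in topological (acyclic) order."""
    equations: Dict[str, Callable[[Dict[str, bool]], bool]]

    def solve(self, exogenous: Dict[str, bool],
              interventions: Optional[Dict[str, bool]] = None) -> Dict[str, bool]:
        state = dict(exogenous)
        state.update(interventions or {})
        for var, f in self.equations.items():
            if interventions and var in interventions:
                continue  # an intervened variable stays fixed (the do-operator)
            state[var] = f(state)
        return state

def but_for(scm: ToySCM, exogenous: Dict[str, bool],
            action: str, outcome: str) -> bool:
    """Causal condition: the outcome actually occurs, and flipping the
    agent's action (holding exogenous factors fixed) would prevent it."""
    actual = scm.solve(exogenous)
    if not actual[outcome]:
        return False
    flipped = scm.solve(exogenous, {action: not actual[action]})
    return not flipped[outcome]

# Invented scenario: an AI advisor flags risk, the human follows the advice.
scm = ToySCM(equations={
    "ai_recommends_detention": lambda s: s["risk_flag"],
    "judge_detains":           lambda s: s["ai_recommends_detention"],
    "harm":                    lambda s: s["judge_detains"],
})
background = {"risk_flag": True}

causal_ok = but_for(scm, background, "ai_recommends_detention", "harm")
epistemic_ok = True  # placeholder for "the agent foresaw the harm"
print("causal condition:", causal_ok)
print("toy moral-responsibility verdict:", causal_ok and epistemic_ok)
```

Note that both the advisor and the judge pass the but-for test in this chain, which is precisely the kind of shared-responsibility situation that SCM-based attribution work aims to untangle.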
This list is automatically generated from the titles and abstracts of the papers on this site.