Hiding Behind Machines: When Blame Is Shifted to Artificial Agents
- URL: http://arxiv.org/abs/2101.11465v1
- Date: Wed, 27 Jan 2021 14:50:02 GMT
- Title: Hiding Behind Machines: When Blame Is Shifted to Artificial Agents
- Authors: Till Feier, Jan Gogoll, Matthias Uhl
- Abstract summary: This article focuses on the responsibility of agents who decide on our behalf.
We investigate whether the production of moral outcomes by an agent is systematically judged differently when the agent is artificial rather than human.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The transfer of tasks with sometimes far-reaching moral implications to autonomous systems raises a number of ethical questions.
In addition to fundamental questions about the moral agency of these systems, behavioral issues arise.
This article focuses on the responsibility of agents who decide on our behalf.
We investigate the empirically accessible question of whether the production of moral outcomes by an agent is systematically judged differently when the agent is artificial rather than human.
The results of a laboratory experiment suggest that decision-makers can indeed rid themselves of guilt more easily by delegating to machines than by delegating to other people.
Our results imply that the availability of artificial agents could provide stronger incentives for decision-makers to delegate morally sensitive decisions.
Related papers
- Raising the Stakes: Performance Pressure Improves AI-Assisted Decision Making [57.53469908423318]
We show the effects of performance pressure on AI advice reliance when laypeople complete a common AI-assisted task.
We find that when the stakes are high, people use AI advice more appropriately than when stakes are lower, regardless of the presence of an AI explanation.
arXiv Detail & Related papers (2024-10-21T22:39:52Z)
- Moral Responsibility for AI Systems [8.919993498343159]
Moral responsibility for an outcome of an agent who performs some action is commonly taken to involve both a causal condition and an epistemic condition.
This paper presents a formal definition of both conditions within the framework of causal models.
arXiv Detail & Related papers (2023-10-27T10:37:47Z)
- Do the Rewards Justify the Means? Measuring Trade-Offs Between Rewards and Ethical Behavior in the MACHIAVELLI Benchmark [61.43264961005614]
We develop a benchmark of 134 Choose-Your-Own-Adventure games containing over half a million rich, diverse scenarios.
We evaluate agents' tendencies to be power-seeking, cause disutility, and commit ethical violations.
Our results show that agents can act both competently and morally, so concrete progress can be made in machine ethics.
arXiv Detail & Related papers (2023-04-06T17:59:03Z)
- Bad, mad, and cooked: Moral responsibility for civilian harms in human-AI military teams [0.0]
This chapter explores moral responsibility for civilian harms by human-artificial intelligence (AI) teams.
Increasingly, militaries may 'cook' their good apples by putting them in untenable decision-making environments.
This chapter offers new mechanisms to map out conditions for moral responsibility in human-AI teams.
arXiv Detail & Related papers (2022-10-31T10:18:20Z)
- When to Make Exceptions: Exploring Language Models as Accounts of Human Moral Judgment [96.77970239683475]
AI systems need to be able to understand, interpret and predict human moral judgments and decisions.
A central challenge for AI safety is capturing the flexibility of the human moral mind.
We present a novel challenge set consisting of rule-breaking question-answering tasks.
arXiv Detail & Related papers (2022-10-04T09:04:27Z)
- Inverse Online Learning: Understanding Non-Stationary and Reactionary Policies [79.60322329952453]
We show how to develop interpretable representations of how agents make decisions.
To understand the decision-making processes underlying a set of observed trajectories, we cast the policy inference problem as the inverse of an online learning problem.
We introduce a practical algorithm for retrospectively estimating such perceived effects, alongside the process through which agents update them.
Through application to the analysis of UNOS organ donation acceptance decisions, we demonstrate that our approach can bring valuable insights into the factors that govern decision processes and how they change over time.
arXiv Detail & Related papers (2022-03-14T17:40:42Z)
- Causal Analysis of Agent Behavior for AI Safety [16.764915383473326]
We present a methodology for investigating the causal mechanisms that drive the behaviour of artificial agents.
Six use cases are covered, each addressing a typical question an analyst might ask about an agent.
arXiv Detail & Related papers (2021-03-05T20:51:12Z)
- Human Perceptions on Moral Responsibility of AI: A Case Study in AI-Assisted Bail Decision-Making [8.688778020322758]
We measure people's perceptions of eight different notions of moral responsibility concerning AI and human agents.
We show that AI agents are held causally responsible and blamed similarly to human agents for an identical task.
We find that people expect both AI and human decision-makers and advisors to justify their decisions regardless of their nature.
arXiv Detail & Related papers (2021-02-01T04:07:38Z)
- Guilty Artificial Minds [0.0]
We look at how people attribute blame and wrongness across human, artificial, and group agents.
Group agents seem to provide a clear middle ground between human agents (for whom the notions of blame and wrongness were created) and artificial agents (for whom the question remains open).
arXiv Detail & Related papers (2021-01-24T21:37:35Z)
- A Case for Humans-in-the-Loop: Decisions in the Presence of Erroneous Algorithmic Scores [85.12096045419686]
We study the adoption of an algorithmic tool used to assist child maltreatment hotline screening decisions.
We first show that humans do alter their behavior when the tool is deployed.
We show that humans are less likely to adhere to the machine's recommendation when the score displayed is an incorrect estimate of risk.
arXiv Detail & Related papers (2020-02-19T07:27:32Z)
- Artificial Artificial Intelligence: Measuring Influence of AI 'Assessments' on Moral Decision-Making [48.66982301902923]
We examined the effect of feedback purportedly generated by an AI on moral decision-making about donor kidney allocation.
We found some evidence that judgments about whether a patient should receive a kidney can be influenced by feedback on participants' own decision-making that is perceived to come from an AI.
arXiv Detail & Related papers (2020-01-13T14:15:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information it provides and is not responsible for any consequences arising from its use.