Bad, mad, and cooked: Moral responsibility for civilian harms in human-AI military teams
- URL: http://arxiv.org/abs/2211.06326v3
- Date: Wed, 6 Sep 2023 11:13:14 GMT
- Title: Bad, mad, and cooked: Moral responsibility for civilian harms in human-AI military teams
- Authors: Susannah Kate Devitt
- Abstract summary: This chapter explores moral responsibility for civilian harms by human-artificial intelligence (AI) teams.
Increasingly, militaries may 'cook' their good apples by putting them in untenable decision-making environments.
This chapter offers new mechanisms to map out conditions for moral responsibility in human-AI teams.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This chapter explores moral responsibility for civilian harms by
human-artificial intelligence (AI) teams. Although militaries may have some bad
apples responsible for war crimes and some mad apples unable to be responsible
for their actions during a conflict, increasingly militaries may 'cook' their
good apples by putting them in untenable decision-making environments through
the processes of replacing human decision-making with AI determinations in war
making. Responsibility for civilian harm in human-AI military teams may be
contested, risking operators becoming detached, being extreme moral witnesses,
becoming moral crumple zones or suffering moral injury from being part of
larger human-AI systems authorised by the state. Acknowledging military ethics,
human factors and AI work to date as well as critical case studies, this
chapter offers new mechanisms to map out conditions for moral responsibility in
human-AI teams. These include: 1) new decision responsibility prompts for
critical decision method in a cognitive task analysis, and 2) applying an AI
workplace health and safety framework for identifying cognitive and
psychological risks relevant to attributions of moral responsibility in
targeting decisions. Mechanisms such as these enable militaries to design
human-centred AI systems for responsible deployment.
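As a rough illustration of how mechanism 1 could be operationalised, the Python sketch below is entirely hypothetical: the prompt wording, data structures, and the "responsibility gap" heuristic are assumptions for illustration, not the chapter's actual instrument. It records decision-responsibility prompts alongside a critical decision method interview and flags unanswered prompts for follow-up.

```python
# Hypothetical sketch: decision-responsibility prompts appended to a
# critical decision method (CDM) interview. Prompt wording and the
# "responsibility gap" heuristic are illustrative assumptions only.
from dataclasses import dataclass, field

@dataclass
class ResponsibilityPrompt:
    question: str      # probe put to the operator during the CDM interview
    answer: str = ""   # operator's recorded response

@dataclass
class TargetingDecisionRecord:
    decision_point: str
    prompts: list[ResponsibilityPrompt] = field(default_factory=list)

    def responsibility_gaps(self) -> list[str]:
        """Return prompts left unanswered -- candidate gaps in attributable responsibility."""
        return [p.question for p in self.prompts if not p.answer.strip()]

# Example usage with illustrative prompt wording.
record = TargetingDecisionRecord(
    decision_point="Engage vehicle flagged by the AI targeting aid",
    prompts=[
        ResponsibilityPrompt("What did the AI recommend, and with what stated confidence?"),
        ResponsibilityPrompt("Could you have delayed or overridden the recommendation?"),
        ResponsibilityPrompt("What information about civilians was available to you at the time?"),
    ],
)
record.prompts[0].answer = "Recommended engagement; confidence not displayed."
print(record.responsibility_gaps())  # unanswered probes become follow-ups in the analysis
```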
Related papers
- Balancing Power and Ethics: A Framework for Addressing Human Rights Concerns in Military AI [0.0]
We propose a three-stage framework for evaluating human rights concerns in the design, deployment, and use of military AI.
By this framework, we aim to balance the advantages of AI in military operations with the need to protect human rights.
arXiv Detail & Related papers (2024-11-10T02:27:01Z)
- Causal Responsibility Attribution for Human-AI Collaboration [62.474732677086855]
This paper presents a causal framework using Structural Causal Models (SCMs) to systematically attribute responsibility in human-AI systems.
Two case studies illustrate the framework's adaptability in diverse human-AI collaboration scenarios.
arXiv Detail & Related papers (2024-11-05T17:17:45Z)
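To make the SCM idea concrete, the following is a minimal, hypothetical Python sketch rather than the cited paper's actual framework: a toy structural causal model of a human-AI targeting decision with a but-for check of whether the AI recommendation or the human's failure to override made a difference to the harmful outcome.

```python
# Minimal, hypothetical sketch of but-for (counterfactual difference-making)
# attribution in a toy structural causal model. The variables, structural
# equation, and attribution rule are illustrative assumptions, not the
# cited paper's formalism.

def outcome(ai_recommends_strike: bool, human_overrides: bool) -> bool:
    """Structural equation: harm occurs iff the AI recommends a strike
    and the human does not override it."""
    return ai_recommends_strike and not human_overrides

def but_for_responsible(var: str, ai: bool, human: bool) -> bool:
    """Is `var` a but-for cause of the actual outcome? Flip only that
    variable and check whether the outcome changes."""
    actual = outcome(ai, human)
    if var == "ai":
        counterfactual = outcome(not ai, human)
    elif var == "human":
        counterfactual = outcome(ai, not human)
    else:
        raise ValueError(var)
    return actual != counterfactual

# Actual scenario: AI recommends the strike, human does not override, harm occurs.
ai, human = True, False
print(but_for_responsible("ai", ai, human))     # True: without the recommendation, no strike
print(but_for_responsible("human", ai, human))  # True: an override would have prevented harm
```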
- Combining AI Control Systems and Human Decision Support via Robustness and Criticality [53.10194953873209]
We extend a methodology for adversarial explanations (AE) to state-of-the-art reinforcement learning frameworks.
We show that the learned AI control system demonstrates robustness against adversarial tampering.
In a training / learning framework, this technology can improve both the AI's decisions and explanations through human interaction.
arXiv Detail & Related papers (2024-07-03T15:38:57Z)
- Commercial AI, Conflict, and Moral Responsibility: A theoretical analysis and practical approach to the moral responsibilities associated with dual-use AI technology [2.050345881732981]
We argue that stakeholders involved in the AI system lifecycle are morally responsible for uses of their systems that are reasonably foreseeable.
We present three technically feasible actions that developers of civilian AIs can take to potentially mitigate their moral responsibility.
arXiv Detail & Related papers (2024-01-30T18:09:45Z)
- Moral Responsibility for AI Systems [8.919993498343159]
An agent's moral responsibility for the outcome of an action it performs is commonly taken to involve both a causal condition and an epistemic condition.
This paper presents a formal definition of both conditions within the framework of causal models.
arXiv Detail & Related papers (2023-10-27T10:37:47Z)
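As a rough formal gloss of that entry (and only a gloss: the sketch below assumes a simple but-for reading of causation, whereas the cited paper works with more refined definitions within causal models), the two conditions can be written in standard causal-model notation as follows:

```latex
% Simplified sketch, assuming a but-for reading of the causal condition;
% the cited paper's definitions are more general.
An agent performing action $A = a$ in causal model $M$ with context $\vec{u}$
is morally responsible for outcome $O$ only if:
\begin{itemize}
  \item \textbf{Causal condition (but-for):} $(M,\vec{u}) \models O$, and for some
        alternative action $a'$, $(M,\vec{u}) \models [A \leftarrow a']\,\neg O$;
  \item \textbf{Epistemic condition:} the agent knew, or could reasonably have been
        expected to know, that performing $A = a$ might bring about $O$.
\end{itemize}
```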
- The Promise and Peril of Artificial Intelligence -- Violet Teaming Offers a Balanced Path Forward [56.16884466478886]
This paper reviews emerging issues with opaque and uncontrollable AI systems.
It proposes an integrative framework called violet teaming to develop reliable and responsible AI.
It emerged from AI safety research to manage risks proactively by design.
arXiv Detail & Related papers (2023-08-28T02:10:38Z)
- When to Make Exceptions: Exploring Language Models as Accounts of Human Moral Judgment [96.77970239683475]
AI systems need to be able to understand, interpret and predict human moral judgments and decisions.
A central challenge for AI safety is capturing the flexibility of the human moral mind.
We present a novel challenge set consisting of rule-breaking question answering.
arXiv Detail & Related papers (2022-10-04T09:04:27Z)
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
- Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines ways in which generative artworks can help address this challenge by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)
- Human Perceptions on Moral Responsibility of AI: A Case Study in AI-Assisted Bail Decision-Making [8.688778020322758]
We measure people's perceptions of eight different notions of moral responsibility concerning AI and human agents.
We show that AI agents are held causally responsible and blamed similarly to human agents for an identical task.
We find that people expect both AI and human decision-makers and advisors to justify their decisions regardless of their nature.
arXiv Detail & Related papers (2021-02-01T04:07:38Z)
- Hiding Behind Machines: When Blame Is Shifted to Artificial Agents [0.0]
This article focuses on the responsibility of agents who decide on our behalf.
We investigate whether the production of moral outcomes by an agent is systematically judged differently when the agent is artificial and not human.
arXiv Detail & Related papers (2021-01-27T14:50:02Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.