Commercial AI, Conflict, and Moral Responsibility: A theoretical
analysis and practical approach to the moral responsibilities associated with
dual-use AI technology
- URL: http://arxiv.org/abs/2402.01762v1
- Date: Tue, 30 Jan 2024 18:09:45 GMT
- Title: Commercial AI, Conflict, and Moral Responsibility: A theoretical
analysis and practical approach to the moral responsibilities associated with
dual-use AI technology
- Authors: Daniel Trusilo and David Danks
- Abstract summary: We argue that stakeholders involved in the AI system lifecycle are morally responsible for uses of their systems that are reasonably foreseeable.
We present three technically feasible actions that developers of civilian AIs can take to potentially mitigate their moral responsibility.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper presents a theoretical analysis and practical approach to the
moral responsibilities that arise when developing AI systems for non-military
applications that may nonetheless be used in conflict. We argue that AI
represents a form of crossover technology that is different from previous
historical examples of dual- or multi-use technology as it has a multiplicative
effect across other technologies. As a result, existing analyses of ethical
responsibilities around dual-use technologies do not necessarily work for AI
systems. We instead argue that stakeholders involved in the AI system lifecycle
are morally responsible for uses of their systems that are reasonably
foreseeable. The core idea is that an agent's moral responsibility for some
action is not necessarily determined by their intentions alone; we must also
consider what the agent could reasonably have foreseen to be potential outcomes
of their action, such as the potential use of a system in conflict even when it
is not designed for that. In particular, we contend that it is reasonably
foreseeable that: (1) civilian AI systems will be applied to active conflict,
including conflict support activities, (2) the use of civilian AI systems in
conflict will impact applications of the law of armed conflict, and (3)
crossover AI technology will be applied to conflicts that fall short of armed
conflict. Given these reasonably foreseeable outcomes, we present three
technically feasible actions that developers of civilian AIs can take to
potentially mitigate their moral responsibility: (a) establishing systematic
approaches to multi-perspective capability testing, (b) integrating digital
watermarking in model weight matrices, and (c) utilizing monitoring and
reporting mechanisms for conflict-related AI applications.
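To make option (b) concrete, the following is a minimal sketch of one way a watermark might be embedded in a model's weight matrices: a projection-based scheme in the spirit of white-box neural-network watermarking, in which a secret key defines a random projection and the weights receive a minimum-norm perturbation so that the signs of the projected weights encode an identifier. The scheme, function names, and parameters here are illustrative assumptions, not the method proposed in the paper.

```python
# Illustrative sketch only: a projection-based watermark for a single weight
# matrix. The scheme and all names are hypothetical, not the paper's proposal.
import numpy as np

def embed_watermark(W, bits, key, margin=1.0):
    """Apply the minimum-norm perturbation to W so that the signs of a
    secret random projection of its entries encode `bits`."""
    w = W.flatten().astype(float)
    X = np.random.default_rng(key).standard_normal((len(bits), w.size))
    target = margin * (2.0 * np.asarray(bits) - 1.0)  # {0,1} -> {-margin,+margin}
    # Least-norm correction delta solving X @ (w + delta) = target.
    delta = X.T @ np.linalg.solve(X @ X.T, target - X @ w)
    return (w + delta).reshape(W.shape)

def extract_watermark(W, n_bits, key):
    """Recover the embedded bits; only the holder of `key` can do this."""
    X = np.random.default_rng(key).standard_normal((n_bits, W.size))
    return (X @ W.flatten() > 0).astype(int)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    W = rng.standard_normal((64, 64)) * 0.1   # stand-in for one model layer
    bits = rng.integers(0, 2, size=128)       # 128-bit identifier
    W_marked = embed_watermark(W, bits, key=1234)
    print("bits recovered:", (extract_watermark(W_marked, 128, key=1234) == bits).mean())
    print("relative weight change:", np.linalg.norm(W_marked - W) / np.linalg.norm(W))
```

In practice a developer would embed the identifier across several layers, test its robustness to fine-tuning, pruning, and quantization, and pair it with the monitoring and reporting mechanisms of option (c) so that marked models found in conflict-related applications can be traced.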
Related papers
- Using AI Alignment Theory to understand the potential pitfalls of regulatory frameworks
This paper critically examines the European Union's Artificial Intelligence Act (EU AI Act)
It uses insights from Alignment Theory (AT) research, which focuses on the potential pitfalls of technical alignment in Artificial Intelligence.
As we apply these concepts to the EU AI Act, we uncover potential vulnerabilities and areas for improvement in the regulation.
arXiv Detail & Related papers (2024-10-10T17:38:38Z)
- Don't Kill the Baby: The Case for AI in Arbitration
The article argues that the Federal Arbitration Act (FAA) allows parties to contractually choose AI-driven arbitration, despite traditional reservations.
By advocating for the use of AI in arbitration, it underscores the importance of respecting contractual autonomy.
Ultimately, it calls for a balanced, open-minded approach to AI in arbitration, recognizing its potential to enhance the efficiency, fairness, and flexibility of dispute resolution.
arXiv Detail & Related papers (2024-08-21T13:34:20Z)
- The Switch, the Ladder, and the Matrix: Models for Classifying AI Systems
There still exists a gap between principles and practices in AI ethics.
One major obstacle organisations face when attempting to operationalise AI Ethics is the lack of a well-defined material scope.
arXiv Detail & Related papers (2024-07-07T12:16:01Z)
- Combining AI Control Systems and Human Decision Support via Robustness and Criticality
We extend a methodology for adversarial explanations (AE) to state-of-the-art reinforcement learning frameworks.
We show that the learned AI control system demonstrates robustness against adversarial tampering.
In a training / learning framework, this technology can improve both the AI's decisions and explanations through human interaction.
arXiv Detail & Related papers (2024-07-03T15:38:57Z)
- AI Deception: A Survey of Examples, Risks, and Potential Solutions
This paper argues that a range of current AI systems have learned how to deceive humans.
We define deception as the systematic inducement of false beliefs in the pursuit of some outcome other than the truth.
arXiv Detail & Related papers (2023-08-28T17:59:35Z)
- Beneficent Intelligence: A Capability Approach to Modeling Benefit, Assistance, and Associated Moral Failures through AI Systems
The prevailing discourse around AI ethics lacks the language and formalism necessary to capture the diverse ethical concerns that emerge when AI systems interact with individuals.
We present a framework formalizing a network of ethical concepts and entitlements necessary for AI systems to confer meaningful benefit or assistance to stakeholders.
arXiv Detail & Related papers (2023-08-01T22:38:14Z)
- Fairness in Agreement With European Values: An Interdisciplinary Perspective on AI Regulation
This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI, and discusses how AI regulations address them.
We first look at AI and fairness through the lenses of law, (AI) industry, sociotechnology, and (moral) philosophy, and present various perspectives.
We identify and propose the roles AI regulation should play to make the AI Act a success in terms of AI fairness concerns.
arXiv Detail & Related papers (2022-06-08T12:32:08Z)
- Relational Artificial Intelligence
Even though AI is traditionally associated with rational decision making, understanding and shaping the societal impact of AI in all its facets requires a relational perspective.
A rational approach to AI, where computational algorithms drive decision making independent of human intervention, has been shown to result in bias and exclusion.
A relational approach, which focuses on the relational nature of things, is needed to deal with the ethical, legal, societal, cultural, and environmental implications of AI.
arXiv Detail & Related papers (2022-02-04T15:29:57Z)
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2)
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
- Building Bridges: Generative Artworks to Explore AI Ethics
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can help address this challenge by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)
- An interdisciplinary conceptual study of Artificial Intelligence (AI) for helping benefit-risk assessment practices: Towards a comprehensive qualification matrix of AI programs and devices (pre-print 2020)
This paper proposes a comprehensive analysis of existing concepts coming from different disciplines tackling the notion of intelligence.
The aim is to identify shared notions or discrepancies to consider for qualifying AI systems.
arXiv Detail & Related papers (2021-05-07T12:01:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.