Normative Epistemology for Lethal Autonomous Weapons Systems
- URL: http://arxiv.org/abs/2110.12935v1
- Date: Mon, 25 Oct 2021 13:13:42 GMT
- Title: Normative Epistemology for Lethal Autonomous Weapons Systems
- Authors: Susannah Kate Devitt
- Abstract summary: This chapter discusses higher-order design principles to guide the design, evaluation, deployment, and iteration of Lethal Autonomous Weapons Systems.
Epistemic models consider the role of accuracy, likelihoods, beliefs, competencies, capabilities, context, and luck in the justification of actions and attribution of knowledge.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The rise of human-information systems, cybernetic systems, and increasingly
autonomous systems requires the application of epistemic frameworks to machines
and human-machine teams. This chapter discusses higher-order design principles
to guide the design, evaluation, deployment, and iteration of Lethal Autonomous
Weapons Systems (LAWS) based on epistemic models. Epistemology is the study of
knowledge. Epistemic models consider the role of accuracy, likelihoods,
beliefs, competencies, capabilities, context, and luck in the justification of
actions and the attribution of knowledge. The aim is not to provide ethical
justification for or against LAWS, but to illustrate how epistemological
frameworks can be used in conjunction with moral apparatus to guide the design
and deployment of future systems. The models discussed in this chapter aim to
make Article 36 reviews of LAWS systematic, expedient, and evaluable. A
Bayesian virtue epistemology is proposed to enable justified actions under
uncertainty that meet the requirements of the Laws of Armed Conflict and
International Humanitarian Law. Epistemic concepts can provide some of the
apparatus to meet explainability and transparency requirements in the
development, evaluation, deployment, and review of ethical AI.
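The abstract invokes Bayesian reasoning as the engine for justified action under uncertainty. As a purely illustrative sketch (not taken from the chapter itself), the snippet below shows Bayesian belief updating over a hypothesis such as "the observed object is a lawful military objective", with evidence from successive sensor reports and an action gated by a decision threshold; the prior, likelihoods, and threshold are hypothetical values chosen only for illustration.

```python
# Illustrative only: Bayesian belief updating with a decision threshold.
# The hypothesis, prior, likelihoods, and threshold are hypothetical values,
# not taken from the chapter.

def bayes_update(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Return P(H | E) from a prior P(H) and the likelihoods of evidence E."""
    numerator = p_e_given_h * prior
    evidence = numerator + p_e_given_not_h * (1.0 - prior)
    return numerator / evidence

# Hypothesis H: "the observed object is a lawful military objective".
belief = 0.30  # hypothetical prior from intelligence preparation

# Each sensor report is summarised by (P(E | H), P(E | not H)).
sensor_reports = [
    (0.85, 0.20),  # radar signature consistent with H
    (0.70, 0.40),  # electro-optical classification, weaker discrimination
]

for p_e_given_h, p_e_given_not_h in sensor_reports:
    belief = bayes_update(belief, p_e_given_h, p_e_given_not_h)
    print(f"updated belief: {belief:.3f}")

ENGAGEMENT_THRESHOLD = 0.95  # hypothetical; set by legal/policy review, not by the model
if belief >= ENGAGEMENT_THRESHOLD:
    print("belief exceeds threshold: action could be justified, pending human review")
else:
    print("belief below threshold: withhold action and seek further evidence")
```

The point of the sketch is only that priors, likelihoods, and thresholds make the justification for an action explicit and auditable, which is the kind of property an epistemically grounded Article 36 review would examine.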
Related papers
- Deontic Temporal Logic for Formal Verification of AI Ethics [4.028503203417233]
This paper proposes a formalization based on deontic logic to define and evaluate the ethical behavior of AI systems.
It introduces axioms and theorems to capture ethical requirements related to fairness and explainability.
The authors evaluate the effectiveness of this formalization by assessing the ethics of the real-world COMPAS and loan prediction AI systems.
arXiv Detail & Related papers (2025-01-10T07:48:40Z)
- Delegating Responsibilities to Intelligent Autonomous Systems: Challenges and Benefits [1.7205106391379026]
As AI systems operate with autonomy and adaptability, the traditional boundaries of moral responsibility in techno-social systems are being challenged.
This paper explores the evolving discourse on the delegation of responsibilities to intelligent autonomous agents and the ethical implications of such practices.
arXiv Detail & Related papers (2024-11-06T18:40:38Z)
- Using AI Alignment Theory to understand the potential pitfalls of regulatory frameworks [55.2480439325792]
This paper critically examines the European Union's Artificial Intelligence Act (EU AI Act).
It uses insights from Alignment Theory (AT) research, which focuses on the potential pitfalls of technical alignment in Artificial Intelligence.
As we apply these concepts to the EU AI Act, we uncover potential vulnerabilities and areas for improvement in the regulation.
arXiv Detail & Related papers (2024-10-10T17:38:38Z)
- Levels of AGI for Operationalizing Progress on the Path to AGI [64.59151650272477]
We propose a framework for classifying the capabilities and behavior of Artificial General Intelligence (AGI) models and their precursors.
This framework introduces levels of AGI performance, generality, and autonomy, providing a common language to compare models, assess risks, and measure progress along the path to AGI.
arXiv Detail & Related papers (2023-11-04T17:44:58Z)
- Predictable Artificial Intelligence [77.1127726638209]
This paper introduces the ideas and challenges of Predictable AI.
It explores the ways in which we can anticipate key validity indicators of present and future AI ecosystems.
We argue that achieving predictability is crucial for fostering trust, liability, control, alignment and safety of AI ecosystems.
arXiv Detail & Related papers (2023-10-09T21:36:21Z)
- Ethics in conversation: Building an ethics assurance case for autonomous AI-enabled voice agents in healthcare [1.8964739087256175]
The principles-based ethics assurance argument pattern is one proposal in the AI ethics landscape.
This paper presents the interim findings of a case study applying this ethics assurance framework to the use of Dora, an AI-based telemedicine system.
arXiv Detail & Related papers (2023-05-23T16:04:59Z)
- Active Inference in Robotics and Artificial Agents: Survey and Challenges [51.29077770446286]
We review the state-of-the-art theory and implementations of active inference for state-estimation, control, planning and learning.
We showcase relevant experiments that illustrate its potential in terms of adaptation, generalization and robustness.
arXiv Detail & Related papers (2021-12-03T12:10:26Z)
- An interdisciplinary conceptual study of Artificial Intelligence (AI) for helping benefit-risk assessment practices: Towards a comprehensive qualification matrix of AI programs and devices (pre-print 2020) [55.41644538483948]
This paper proposes a comprehensive analysis of existing concepts coming from different disciplines tackling the notion of intelligence.
The aim is to identify shared notions or discrepancies to consider for qualifying AI systems.
arXiv Detail & Related papers (2021-05-07T12:01:31Z)
- Explanation Ontology: A Model of Explanations for User-Centered AI [3.1783442097247345]
Explanations have often been added to AI systems in a non-principled, post-hoc manner.
With greater adoption of these systems and emphasis on user-centric explainability, there is a need for a structured representation that treats explainability as a primary consideration.
We design an explanation ontology to model both the role of explanations, accounting for the system and user attributes in the process, and the range of different literature-derived explanation types.
arXiv Detail & Related papers (2020-10-04T03:53:35Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of the structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.