Uncertain Machine Ethics Planning
- URL: http://arxiv.org/abs/2505.04352v1
- Date: Wed, 07 May 2025 12:03:15 GMT
- Title: Uncertain Machine Ethics Planning
- Authors: Simon Kolker, Louise A. Dennis, Ramon Fraga Pereira, Mengwei Xu
- Abstract summary: Machine Ethics decisions should consider the implications of uncertainty over decisions. The evaluation of outcomes may invoke one or more moral theories, which might have conflicting judgements. We formalise the problem as a Multi-Moral Stochastic Shortest Path Problem using Sven-Ove Hansson's Hypothetical Retrospection procedure.
- Score: 6.10614292605722
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Machine Ethics decisions should consider the implications of uncertainty over decisions. Decisions should be made over sequences of actions to reach preferable outcomes in the long term. The evaluation of outcomes, however, may invoke one or more moral theories, which might have conflicting judgements. Each theory will require differing representations of the ethical situation. For example, Utilitarianism measures numerical values, Deontology analyses duties, and Virtue Ethics emphasises moral character. While balancing potentially conflicting moral considerations, decisions may need to be made, for example, to achieve morally neutral goals with minimal costs. In this paper, we formalise the problem as a Multi-Moral Markov Decision Process and a Multi-Moral Stochastic Shortest Path Problem. We develop a heuristic algorithm based on Multi-Objective AO*, utilising Sven-Ove Hansson's Hypothetical Retrospection procedure for ethical reasoning under uncertainty. Our approach is validated by a case study from Machine Ethics literature: the problem of whether to steal insulin for someone who needs it.
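To make the formalisation concrete, here is a minimal sketch of how such a Multi-Moral Markov Decision Process could be written down: a standard MDP augmented with one evaluation structure per moral theory. The notation is assumed for illustration and is not taken verbatim from the paper.

```latex
% A standard MDP augmented with one evaluation structure per moral
% theory (notation assumed for illustration, not the authors' own).
\[
  \mathcal{M} = \langle S, A, T, s_0, \{E_1, \dots, E_n\} \rangle
\]
% S   : set of states
% A   : set of actions
% s_0 : initial state
% T   : transition probability function, with
\[
  T(s, a, s') = \Pr(s' \mid s, a), \qquad \sum_{s' \in S} T(s, a, s') = 1
\]
% E_i : evaluation structure of moral theory i, e.g. a numeric utility
%       function for Utilitarianism, a duty-violation relation for
%       Deontology, or character-trait assessments for Virtue Ethics
```

In the Stochastic Shortest Path variant, a set of goal states would additionally be fixed and reached at minimal expected cost, while hypothetical retrospection compares actions by asking, from each potential outcome branch, whether the choice remains acceptable under every theory's evaluation in retrospect.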
Related papers
- Are Language Models Consequentialist or Deontological Moral Reasoners? [69.85385952436044]
We focus on a large-scale analysis of the moral reasoning traces provided by large language models (LLMs). We introduce and test a taxonomy of moral rationales to systematically classify reasoning traces according to two main normative ethical theories: consequentialism and deontology.
arXiv Detail & Related papers (2025-05-27T17:51:18Z)
- Exploring and steering the moral compass of Large Language Models [55.2480439325792]
Large Language Models (LLMs) have become central to advancing automation and decision-making across various sectors.
This study proposes a comprehensive comparative analysis of the most advanced LLMs to assess their moral profiles.
arXiv Detail & Related papers (2024-05-27T16:49:22Z)
- Rethinking Machine Ethics -- Can LLMs Perform Moral Reasoning through the Lens of Moral Theories? [78.3738172874685]
Making moral judgments is an essential step toward developing ethical AI systems.
Prevalent approaches are mostly bottom-up, using large sets of annotated data to train models on crowd-sourced opinions about morality.
This work proposes a flexible top-down framework to steer (Large) Language Models (LMs) to perform moral reasoning with well-established moral theories from interdisciplinary research.
arXiv Detail & Related papers (2023-08-29T15:57:32Z)
- Uncertain Machine Ethical Decisions Using Hypothetical Retrospection [8.064201367978066]
We propose the use of the hypothetical retrospection argumentation procedure, developed by Sven Ove Hansson.
Actions are represented with a branching set of potential outcomes, each with a state, utility, and either a numeric or poetic probability estimate.
We introduce a preliminary framework that seems to meet the varied requirements of a machine ethics system.
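A minimal sketch of the branching-outcome representation just described, under assumed names (the `Outcome`/`Action` classes and the insulin numbers are invented for illustration, not the paper's code); the expected-utility helper is merely a crude stand-in for the retrospection argument itself:

```python
from dataclasses import dataclass
from typing import List, Union

@dataclass
class Outcome:
    state: str                      # resulting world state
    utility: float                  # moral (dis)value of the outcome
    probability: Union[float, str]  # numeric estimate, or a verbal one
                                    # such as "likely"

@dataclass
class Action:
    name: str
    outcomes: List[Outcome]

def numeric_expected_utility(action: Action) -> float:
    """Expected utility over branches with numeric estimates only.
    Hypothetical retrospection proper argues from each branch looking
    back; this aggregate is merely an illustrative stand-in."""
    return sum(o.probability * o.utility
               for o in action.outcomes
               if isinstance(o.probability, (int, float)))

# Invented numbers loosely based on the insulin case study.
steal = Action("steal_insulin", [
    Outcome("patient_saved", utility=10.0, probability=0.9),
    Outcome("caught_and_punished", utility=-5.0, probability=0.1),
])
print(numeric_expected_utility(steal))  # 8.5
```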
arXiv Detail & Related papers (2023-05-02T13:54:04Z)
- Ethical Frameworks and Computer Security Trolley Problems: Foundations for Conversations [14.120888473204907]
We make and explore connections between moral questions in computer security research and ethics / moral philosophy.
We do not seek to define what is morally right or wrong, nor do we argue for one framework over another.
arXiv Detail & Related papers (2023-02-28T05:39:17Z)
- On the Computational Complexity of Ethics: Moral Tractability for Minds and Machines [0.0]
Debates on whether artificial intelligence (AI) can or should be used to solve problems in ethical domains have mainly been driven by what AI can or cannot do in terms of human capacities.
This paper explores what kind of moral machines are possible based on what computational systems can or cannot do.
arXiv Detail & Related papers (2023-02-08T17:39:58Z)
- When to Make Exceptions: Exploring Language Models as Accounts of Human Moral Judgment [96.77970239683475]
AI systems need to be able to understand, interpret and predict human moral judgments and decisions.
A central challenge for AI safety is capturing the flexibility of the human moral mind.
We present a novel challenge set consisting of rule-breaking question answering.
arXiv Detail & Related papers (2022-10-04T09:04:27Z)
- AiSocrates: Towards Answering Ethical Quandary Questions [51.53350252548668]
AiSocrates is a system for the deliberative exchange of different perspectives on an ethical quandary.
We show that AiSocrates generates promising answers to ethical quandary questions with multiple perspectives.
We argue that AiSocrates is a promising step toward developing an NLP system that incorporates human values explicitly through prompt instructions.
arXiv Detail & Related papers (2022-05-12T09:52:59Z)
- Metaethical Perspectives on 'Benchmarking' AI Ethics [81.65697003067841]
Benchmarks are seen as the cornerstone for measuring technical progress in Artificial Intelligence (AI) research.
An increasingly prominent research area in AI is ethics, which currently has no set of benchmarks nor a commonly accepted way of measuring the 'ethicality' of an AI system.
We argue that it makes more sense to talk about 'values' rather than 'ethics' when considering the possible actions of present and future AI systems.
arXiv Detail & Related papers (2022-04-11T14:36:39Z)
- Moral Dilemmas for Moral Machines [0.0]
I argue that using moral dilemmas as test cases for moral machines is a misapplication of philosophical thought experiments because it fails to appreciate the purpose of moral dilemmas. There are, however, appropriate uses of moral dilemmas in machine ethics, and the novel situations that arise in a machine-learning context can shed some light on philosophical work in ethics.
arXiv Detail & Related papers (2022-03-11T18:24:39Z)
- Have a break from making decisions, have a MARS: The Multi-valued Action Reasoning System [0.0]
The Multi-valued Action Reasoning System (MARS) is an automated, value-based ethical decision-making model for artificial agents.
Given a set of available actions and an underlying moral paradigm, MARS identifies the ethically preferred action, as sketched below.
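As a rough illustration of this kind of model, here is a generic weighted-sum selection routine; it is not the actual MARS procedure, and every action name, moral value, weight, and score below is hypothetical.

```python
# Generic value-based action selection in the spirit of, but not
# reproducing, MARS. All names and numbers are hypothetical.

def preferred_action(actions, value_scores, paradigm_weights):
    """Pick the action whose value profile best fits the paradigm.

    actions          -- list of action names
    value_scores     -- {action: {moral_value: score}}
    paradigm_weights -- {moral_value: weight}, encoding the paradigm
    """
    def total(action):
        return sum(paradigm_weights.get(value, 0.0) * score
                   for value, score in value_scores[action].items())
    return max(actions, key=total)

# Toy example reusing the insulin dilemma from the main paper.
actions = ["steal_insulin", "do_nothing"]
scores = {
    "steal_insulin": {"life": 1.0, "honesty": -1.0},
    "do_nothing":    {"life": -1.0, "honesty": 0.5},
}
weights = {"life": 0.8, "honesty": 0.2}   # a life-prioritising paradigm
print(preferred_action(actions, scores, weights))  # -> steal_insulin
```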
arXiv Detail & Related papers (2021-09-07T18:44:24Z)