Moral Dilemmas for Moral Machines
- URL: http://arxiv.org/abs/2203.06152v1
- Date: Fri, 11 Mar 2022 18:24:39 GMT
- Title: Moral Dilemmas for Moral Machines
- Authors: Travis LaCroix
- Abstract summary: I argue that using moral dilemmas as validation mechanisms for decision-making algorithms is a misapplication of philosophical thought experiments because it fails to appreciate the purpose of moral dilemmas.
There are, however, appropriate uses of moral dilemmas in machine ethics, and the novel situations that arise in a machine-learning context can shed some light on philosophical work in ethics.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Autonomous systems are being developed and deployed in situations that may require some degree of ethical decision-making ability.
As a result, research in machine ethics has proliferated in recent years.
This work has included using moral dilemmas as validation mechanisms for implementing decision-making algorithms in ethically loaded situations.
Using trolley-style problems in the context of autonomous vehicles as a case study, I argue (1) that this is a misapplication of philosophical thought experiments because (2) it fails to appreciate the purpose of moral dilemmas, and (3) this has potentially catastrophic consequences; however, (4) there are uses of moral dilemmas in machine ethics that are appropriate, and the novel situations that arise in a machine-learning context can shed some light on philosophical work in ethics.
Related papers
- Exploring and steering the moral compass of Large Language Models [55.2480439325792]
Large Language Models (LLMs) have become central to advancing automation and decision-making across various sectors.
This study proposes a comprehensive comparative analysis of the most advanced LLMs to assess their moral profiles.
arXiv Detail & Related papers (2024-05-27T16:49:22Z)
- Rethinking Machine Ethics -- Can LLMs Perform Moral Reasoning through the Lens of Moral Theories? [78.3738172874685]
Making moral judgments is an essential step toward developing ethical AI systems.
Prevalent approaches are mostly implemented in a bottom-up manner, which uses a large set of annotated data to train models based on crowd-sourced opinions about morality.
This work proposes a flexible top-down framework to steer (Large) Language Models (LMs) to perform moral reasoning with well-established moral theories from interdisciplinary research.
arXiv Detail & Related papers (2023-08-29T15:57:32Z)
- Ethical Frameworks and Computer Security Trolley Problems: Foundations for Conversations [14.120888473204907]
We make and explore connections between moral questions in computer security research and ethics/moral philosophy.
We do not seek to define what is morally right or wrong, nor do we argue for one framework over another.
arXiv Detail & Related papers (2023-02-28T05:39:17Z)
- On the Computational Complexity of Ethics: Moral Tractability for Minds and Machines [0.0]
Debates on whether artificial intelligence (AI) can or should be used to solve problems in ethical domains have mainly been driven by what AI can or cannot do in terms of human capacities.
This paper explores what kind of moral machines are possible based on what computational systems can or cannot do.
arXiv Detail & Related papers (2023-02-08T17:39:58Z)
- ClarifyDelphi: Reinforced Clarification Questions with Defeasibility Rewards for Social and Moral Situations [81.70195684646681]
We present ClarifyDelphi, an interactive system that learns to ask clarification questions.
We posit that questions whose potential answers lead to diverging moral judgments are the most informative.
Our work is ultimately inspired by studies in cognitive science that have investigated the flexibility in moral cognition.
arXiv Detail & Related papers (2022-12-20T16:33:09Z)
- When to Make Exceptions: Exploring Language Models as Accounts of Human Moral Judgment [96.77970239683475]
AI systems need to be able to understand, interpret and predict human moral judgments and decisions.
A central challenge for AI safety is capturing the flexibility of the human moral mind.
We present a novel challenge set consisting of rule-breaking question answering.
arXiv Detail & Related papers (2022-10-04T09:04:27Z)
- Automated Kantian Ethics: A Faithful Implementation [0.0]
I present an implementation of automated Kantian ethics that is faithful to the Kantian philosophical tradition.
I formalize Kant's categorical imperative in Dyadic Deontic Logic, implement this formalization in the Isabelle theorem prover, and develop a testing framework to evaluate how well my implementation coheres with expected properties of Kantian ethics.
My system is an early step towards philosophically mature ethical AI agents, and because it is grounded in the philosophical literature it can make nuanced judgements in complex ethical dilemmas.
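As background for the logic named above, a hedged sketch (the standard preference semantics for the dyadic obligation operator from the deontic-logic literature, not necessarily this paper's exact formalization): dyadic deontic logic replaces the unary obligation operator with a conditional one, O{A|B}, read "A is obligatory given circumstances B".

```latex
% Hedged background sketch, not the paper's own formalization:
% the dyadic obligation operator under standard preference semantics.
% O{A|B} reads "A is obligatory given circumstances B" and holds exactly
% when A is true at all of the best (most ideal) worlds satisfying B:
\[
  O\{A \mid B\} \iff \mathrm{best}(\|B\|) \subseteq \|A\|
\]
% Here \|.\| maps a formula to the set of worlds where it is true, and
% best(.) selects the most ideal of those worlds under a preference order.
```

The conditional form is what makes it natural to state obligations relative to a maxim's circumstances rather than unconditionally, which is presumably why a dyadic rather than monadic logic is used.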
arXiv Detail & Related papers (2022-07-20T19:09:15Z)
- Metaethical Perspectives on 'Benchmarking' AI Ethics [81.65697003067841]
Benchmarks are seen as the cornerstone for measuring technical progress in Artificial Intelligence (AI) research.
An increasingly prominent research area in AI is ethics, which currently has neither a set of benchmarks nor a commonly accepted way of measuring the 'ethicality' of an AI system.
We argue that it makes more sense to talk about 'values' rather than 'ethics' when considering the possible actions of present and future AI systems.
arXiv Detail & Related papers (2022-04-11T14:36:39Z)
- A Word on Machine Ethics: A Response to Jiang et al. (2021) [36.955224006838584]
We focus on a single case study of the recently proposed Delphi model and offer a critique of the project's proposed method of automating morality judgments.
We conclude with a discussion of how machine ethics could usefully proceed, by focusing on current and near-future uses of technology.
arXiv Detail & Related papers (2021-11-07T19:31:51Z)
- Reinforcement Learning Under Moral Uncertainty [13.761051314923634]
An ambitious goal for machine learning is to create agents that behave ethically.
While ethical agents could be trained by rewarding correct behavior under a specific moral theory, there remains widespread disagreement about the nature of morality.
This paper proposes two training methods that realize different points among competing desiderata, and trains agents in simple environments to act under moral uncertainty.
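The paper's two methods are not reproduced here; as a minimal, hedged sketch of the underlying idea (an agent that scores actions by the rewards several moral theories assign, weighted by its credence in each, the "expected choiceworthiness" approach from the moral-uncertainty literature), with all names and numbers hypothetical:

```python
import numpy as np

# Hypothetical sketch, not the paper's algorithm: aggregate the rewards
# that competing moral theories assign to one action, weighting each
# theory by the agent's credence that it is correct.

def aggregate_reward(theory_rewards, credences):
    """Credence-weighted reward ("expected choiceworthiness") of an action.

    theory_rewards: per-theory rewards for the same action.
    credences: probabilities the agent assigns to each theory (sum to 1).
    """
    theory_rewards = np.asarray(theory_rewards, dtype=float)
    credences = np.asarray(credences, dtype=float)
    assert np.isclose(credences.sum(), 1.0), "credences must sum to 1"
    return float(np.dot(credences, theory_rewards))

# Example: a consequentialist theory rewards an action (+1.0) that a
# deontological theory forbids (-1.0). With credences of 0.6 and 0.4,
# the aggregate reward is mildly positive: 0.6*1.0 + 0.4*(-1.0) = 0.2.
print(aggregate_reward([1.0, -1.0], [0.6, 0.4]))  # 0.2
```

A plain weighted sum like this is only one point in the design space; the paper's point is precisely that different aggregation rules realize different trade-offs among competing desiderata.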
arXiv Detail & Related papers (2020-06-08T16:40:12Z)
- Learning from Learning Machines: Optimisation, Rules, and Social Norms [91.3755431537592]
It appears that the area of AI that is most analogous to the behaviour of economic entities is that of morally good decision-making.
Recent successes of deep learning for AI suggest that more implicit specifications work better than explicit ones for solving such problems.
arXiv Detail & Related papers (2019-12-29T17:42:06Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.