Toward equipping Artificial Moral Agents with multiple ethical theories
- URL: http://arxiv.org/abs/2003.00935v1
- Date: Mon, 2 Mar 2020 14:33:22 GMT
- Title: Toward equipping Artificial Moral Agents with multiple ethical theories
- Authors: George Rautenbach and C. Maria Keet
- Abstract summary: Artificial Moral Agents (AMAs) are studied in a field of computer science whose purpose is to create autonomous machines that can make moral decisions akin to how humans do.
Research and design to date have been based on either no specified normative ethical theory or at most one.
This is problematic because it narrows the AMA's functional ability and versatility, which in turn produces moral outcomes that only a limited number of people agree with.
We design a three-layer model for general normative ethical theories that can be used to serialise the ethical views of people and businesses for an AMA to use during reasoning.
- Score: 1.5736899098702972
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Artificial Moral Agents (AMAs) are studied in a field of computer science whose purpose is to create autonomous machines that can make moral decisions akin to how humans do. Researchers have proposed theoretical means of creating such machines, while philosophers have made arguments as to how these machines ought to behave, or whether they should even exist. Of the currently theorised AMAs, all research and design has been based on either no specified normative ethical theory or at most one. This is problematic because it narrows the AMA's functional ability and versatility, which in turn produces moral outcomes that only a limited number of people agree with (thereby undermining an AMA's ability to be moral in a human sense). As a solution, we design a three-layer model for general normative ethical theories that can be used to serialise the ethical views of people and businesses for an AMA to use during reasoning. Four concrete ethical theories (Kantianism, divine command theory, utilitarianism, and egoism) were modelled and evaluated as a proof of concept for normative modelling. Furthermore, all models were serialised to XML/XSD as proof of support for computerisation.
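As a rough illustration of what such a serialisation might look like, the Python sketch below builds an XML document for a single agent's ethical view. This is a minimal sketch under stated assumptions: the element and attribute names (EthicalView, NormativeTheory, Duty, weight) and the duty-weighting scheme are illustrative inventions, since the paper's actual three-layer model and XSD are not reproduced in this listing.

```python
# Minimal sketch (NOT the paper's actual schema) of serialising one
# agent's ethical view to XML, in the spirit of the XML/XSD proof of
# concept described in the abstract. All element and attribute names
# here are illustrative assumptions.
import xml.etree.ElementTree as ET


def serialise_view(holder: str, theory: str, duties: dict[str, int]) -> str:
    """Build an XML string for one ethical view: a named normative
    theory plus a set of weighted duties."""
    view = ET.Element("EthicalView", attrib={"holder": holder})
    ET.SubElement(view, "NormativeTheory").text = theory
    duty_list = ET.SubElement(view, "Duties")
    for name, weight in duties.items():
        duty = ET.SubElement(duty_list, "Duty", attrib={"weight": str(weight)})
        duty.text = name
    return ET.tostring(view, encoding="unicode")


# Example: a utilitarian view that weights aggregate welfare above honesty.
print(serialise_view("Alice", "utilitarianism",
                     {"maximise welfare": 10, "honesty": 4}))
```

An AMA could parse such a document at reasoning time and dispatch to the serialised theory, which is how a single agent design could accommodate multiple ethical theories rather than hard-coding one.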
Related papers
- Why Machines Can't Be Moral: Turing's Halting Problem and the Moral Limits of Artificial Intelligence [0.0]
I argue that explicit ethical machines, whose moral principles are inferred through a bottom-up approach, are unable to replicate human-like moral reasoning.
By utilizing Alan Turing's theory of computation, I demonstrate that moral reasoning is computationally intractable for these machines due to the halting problem.
arXiv Detail & Related papers (2024-07-24T17:50:24Z)
- Exploring and steering the moral compass of Large Language Models [55.2480439325792]
Large Language Models (LLMs) have become central to advancing automation and decision-making across various sectors.
This study proposes a comprehensive comparative analysis of the most advanced LLMs to assess their moral profiles.
arXiv Detail & Related papers (2024-05-27T16:49:22Z)
- Rethinking Machine Ethics -- Can LLMs Perform Moral Reasoning through the Lens of Moral Theories? [78.3738172874685]
Making moral judgments is an essential step toward developing ethical AI systems.
Prevalent approaches are mostly implemented in a bottom-up manner, which uses a large set of annotated data to train models based on crowd-sourced opinions about morality.
This work proposes a flexible top-down framework to steer (Large) Language Models (LMs) to perform moral reasoning with well-established moral theories from interdisciplinary research.
arXiv Detail & Related papers (2023-08-29T15:57:32Z)
- Towards Theory-based Moral AI: Moral AI with Aggregating Models Based on Normative Ethical Theory [7.412445894287708]
Moral AI has been studied in the fields of philosophy and artificial intelligence.
Recent developments in AI have made it increasingly necessary to implement AI with morality.
arXiv Detail & Related papers (2023-06-20T10:22:24Z)
- When to Make Exceptions: Exploring Language Models as Accounts of Human Moral Judgment [96.77970239683475]
AI systems need to be able to understand, interpret and predict human moral judgments and decisions.
A central challenge for AI safety is capturing the flexibility of the human moral mind.
We present a novel challenge set consisting of rule-breaking question answering.
arXiv Detail & Related papers (2022-10-04T09:04:27Z)
- Automated Kantian Ethics: A Faithful Implementation [0.0]
I present an implementation of automated Kantian ethics that is faithful to the Kantian philosophical tradition.
I formalize Kant's categorical imperative in Dyadic Deontic Logic, implement this formalization in the Isabelle theorem prover, and develop a testing framework to evaluate how well my implementation coheres with expected properties of Kantian ethics (see the notation sketch after this list).
My system is an early step towards philosophically mature ethical AI agents, and because it is grounded in the philosophical literature it can make nuanced judgements in complex ethical dilemmas.
arXiv Detail & Related papers (2022-07-20T19:09:15Z)
- Does Moral Code Have a Moral Code? Probing Delphi's Moral Philosophy [5.760388205237227]
We probe the Allen AI Delphi model with a set of standardized morality questionnaires.
Despite some inconsistencies, Delphi tends to mirror the moral principles associated with the demographic groups involved in the annotation process.
arXiv Detail & Related papers (2022-05-25T13:37:56Z)
- AiSocrates: Towards Answering Ethical Quandary Questions [51.53350252548668]
AiSocrates is a system for the deliberative exchange of different perspectives on an ethical quandary.
We show that AiSocrates generates promising answers to ethical quandary questions with multiple perspectives.
We argue that AiSocrates is a promising step toward developing an NLP system that incorporates human values explicitly by prompt instructions.
arXiv Detail & Related papers (2022-05-12T09:52:59Z)
- Metaethical Perspectives on 'Benchmarking' AI Ethics [81.65697003067841]
Benchmarks are seen as the cornerstone for measuring technical progress in Artificial Intelligence (AI) research.
An increasingly prominent research area in AI is ethics, which currently has no set of benchmarks nor commonly accepted way for measuring the 'ethicality' of an AI system.
We argue that it makes more sense to talk about 'values' rather than 'ethics' when considering the possible actions of present and future AI systems.
arXiv Detail & Related papers (2022-04-11T14:36:39Z)
- Machine Common Sense [77.34726150561087]
Machine common sense remains a broad, potentially unbounded problem in artificial intelligence (AI).
This article deals with aspects of modeling commonsense reasoning, focusing on such domains as interpersonal interactions.
arXiv Detail & Related papers (2020-06-15T13:59:47Z)
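For readers unfamiliar with the logic named in the "Automated Kantian Ethics" entry above, the following is a minimal notation sketch of dyadic deontic logic, assuming only its standard conditional-obligation reading; the universalizability-flavoured formula is an illustrative assumption, not that paper's actual axiomatization.

```latex
% Dyadic obligation: \varphi is obligatory given context \psi.
O(\varphi \mid \psi)
% Simplified universalizability-style sketch of the categorical
% imperative (an illustrative assumption, not the paper's exact
% formalization): if a maxim M cannot consistently be acted on by
% everyone, then refraining from M is obligatory unconditionally (\top).
\neg \Diamond\, \forall x\, \mathrm{acts}(x, M)
  \;\rightarrow\; O(\neg \mathrm{acts}(a, M) \mid \top)
```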
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.