Artificial virtuous agents in a multiagent tragedy of the commons
- URL: http://arxiv.org/abs/2210.02769v1
- Date: Thu, 6 Oct 2022 09:12:41 GMT
- Title: Artificial virtuous agents in a multiagent tragedy of the commons
- Authors: Jakob Stenseke
- Abstract summary: We present the first technical implementation of artificial virtuous agents (AVAs) in moral simulations.
Results show how the AVAs learn to tackle cooperation problems while exhibiting core features of their theoretical counterpart.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Although virtue ethics has repeatedly been proposed as a suitable framework
for the development of artificial moral agents (AMAs), it has proven
difficult to approach from a computational perspective. In this work, we
present the first technical implementation of artificial virtuous agents (AVAs)
in moral simulations. First, we review previous conceptual and technical work
in artificial virtue ethics and describe a functionalistic path to AVAs based
on dispositional virtues, bottom-up learning, and top-down eudaimonic reward.
We then provide the details of a technical implementation in a moral simulation
based on a tragedy of the commons scenario. The experimental results show how
the AVAs learn to tackle cooperation problems while exhibiting core features of
their theoretical counterpart, including moral character, dispositional
virtues, learning from experience, and the pursuit of eudaimonia. Ultimately,
we argue that virtue ethics provides a compelling path toward morally excellent
machines and that our work provides an important starting point for such
endeavors.
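The abstract's recipe, bottom-up learning shaped by a top-down eudaimonic reward inside a tragedy-of-the-commons simulation, can be illustrated with a toy model. The sketch below is an assumption-laden illustration, not the paper's actual architecture: it uses a simple epsilon-greedy bandit learner, a hand-rolled regenerating pool, and a single `temperance` weight standing in for a dispositional virtue; the particular reward blend (material payoff plus a commons-health bonus) is likewise hypothetical.

```python
import random

class CommonsSim:
    """Toy tragedy of the commons: a shared pool regenerates each step,
    and agents choose to harvest a small (temperate) or large (greedy) share."""
    def __init__(self, n_agents=4, pool=100.0, regen=1.15, capacity=100.0):
        self.n = n_agents
        self.pool = pool
        self.regen = regen
        self.capacity = capacity

    def step(self, takes):
        harvested = []
        for t in takes:
            amount = min(t, self.pool)  # cannot take more than remains
            self.pool -= amount
            harvested.append(amount)
        self.pool = min(self.pool * self.regen, self.capacity)
        return harvested

class VirtuousAgent:
    """Epsilon-greedy bandit over {temperate, greedy}; its reward blends
    material payoff with a 'eudaimonic' bonus for a healthy commons,
    scaled by a temperance disposition (illustrative assumption)."""
    ACTIONS = (2.0, 10.0)  # harvest sizes: take little vs. take much

    def __init__(self, temperance=0.8, lr=0.1, eps=0.1):
        self.temperance = temperance
        self.q = [0.0, 0.0]
        self.lr, self.eps = lr, eps

    def act(self, rng):
        if rng.random() < self.eps:
            return rng.randrange(2)
        return max(range(2), key=lambda a: self.q[a])

    def learn(self, action, payoff, pool_health):
        # Eudaimonic reward: own payoff plus a bonus proportional to the
        # health of the commons, weighted by the agent's temperance.
        reward = payoff + self.temperance * pool_health
        self.q[action] += self.lr * (reward - self.q[action])

rng = random.Random(0)
env = CommonsSim()
agents = [VirtuousAgent() for _ in range(env.n)]
for _ in range(2000):
    actions = [a.act(rng) for a in agents]
    takes = [VirtuousAgent.ACTIONS[x] for x in actions]
    harvested = env.step(takes)
    health = env.pool / env.capacity
    for a, x, h in zip(agents, actions, harvested):
        a.learn(x, h, health * 10)

print(round(env.pool, 1))  # pool level after training
```

Under these parameters, four agents all taking the large share drain the pool within a few steps, while the commons-health bonus makes the temperate action competitive, so the learned dispositions determine whether the pool survives.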
Related papers
- On the Emergence of Symmetrical Reality [51.21203247240322]
We introduce the symmetrical reality framework, which offers a unified representation encompassing various forms of physical-virtual amalgamations.
We propose an instance of an AI-driven active assistance service that illustrates the potential applications of symmetrical reality.
arXiv Detail & Related papers (2024-01-26T16:09:39Z)
- Learning Machine Morality through Experience and Interaction [3.7414804164475983]
Increasing interest in ensuring safety of next-generation Artificial Intelligence (AI) systems calls for novel approaches to embedding morality into autonomous agents.
We argue that more hybrid solutions are needed to create adaptable and robust, yet more controllable and interpretable agents.
arXiv Detail & Related papers (2023-12-04T11:46:34Z)
- Rethinking Machine Ethics -- Can LLMs Perform Moral Reasoning through the Lens of Moral Theories? [78.3738172874685]
Making moral judgments is an essential step toward developing ethical AI systems.
Prevalent approaches are mostly implemented in a bottom-up manner, which uses a large set of annotated data to train models based on crowd-sourced opinions about morality.
This work proposes a flexible top-down framework to steer (Large) Language Models (LMs) to perform moral reasoning with well-established moral theories from interdisciplinary research.
arXiv Detail & Related papers (2023-08-29T15:57:32Z)
- Modeling Moral Choices in Social Dilemmas with Multi-Agent Reinforcement Learning [4.2050490361120465]
A bottom-up learning approach may be more appropriate for studying and developing ethical behavior in AI agents.
We present a systematic analysis of the choices made by intrinsically-motivated RL agents whose rewards are based on moral theories.
We analyze the impact of different types of morality on the emergence of cooperation, defection or exploitation.
arXiv Detail & Related papers (2023-01-20T09:36:42Z)
- Towards Artificial Virtuous Agents: Games, Dilemmas and Machine Learning [4.864819846886143]
We show how a role-playing game can be designed to develop virtues within an artificial agent.
We motivate the implementation of virtuous agents that play such role-playing games, and the examination of their decisions through a virtue ethical lens.
arXiv Detail & Related papers (2022-08-30T07:37:03Z)
- Contextualizing Artificially Intelligent Morality: A Meta-Ethnography of Top-Down, Bottom-Up, and Hybrid Models for Theoretical and Applied Ethics in Artificial Intelligence [0.0]
In this meta-ethnography, we explore three different angles of ethical artificial intelligence (AI) design implementation.
The novel contribution of this framework is the political angle, in which ethics in AI is determined by corporations and governments and imposed through policies or law (coming from the top).
There is a focus on reinforcement learning as an example of a bottom-up applied technical approach and AI ethics principles as a practical top-down approach.
arXiv Detail & Related papers (2022-04-15T18:47:49Z)
- Metaethical Perspectives on 'Benchmarking' AI Ethics [81.65697003067841]
Benchmarks are seen as the cornerstone for measuring technical progress in Artificial Intelligence (AI) research.
An increasingly prominent research area in AI is ethics, which currently has no set of benchmarks nor commonly accepted way for measuring the 'ethicality' of an AI system.
We argue that it makes more sense to talk about 'values' rather than 'ethics' when considering the possible actions of present and future AI systems.
arXiv Detail & Related papers (2022-04-11T14:36:39Z)
- Designing a Future Worth Wanting: Applying Virtue Ethics to HCI [11.117357750374035]
Out of the three major approaches to ethics, virtue ethics is uniquely well suited as a moral guide in the digital age.
It focuses on the traits, situations, and actions of moral agents, rather than on rules (as in deontology) or outcomes (as in consequentialism).
arXiv Detail & Related papers (2022-04-05T14:18:35Z)
- Scruples: A Corpus of Community Ethical Judgments on 32,000 Real-Life Anecdotes [72.64975113835018]
Motivated by descriptive ethics, we investigate a novel, data-driven approach to machine ethics.
We introduce Scruples, the first large-scale dataset with 625,000 ethical judgments over 32,000 real-life anecdotes.
Our dataset presents a major challenge to state-of-the-art neural language models, leaving significant room for improvement.
arXiv Detail & Related papers (2020-08-20T17:34:15Z)
- Aligning AI With Shared Human Values [85.2824609130584]
We introduce the ETHICS dataset, a new benchmark that spans concepts in justice, well-being, duties, virtues, and commonsense morality.
We find that current language models have a promising but incomplete ability to predict basic human ethical judgements.
Our work shows that progress can be made on machine ethics today, and it provides a steppingstone toward AI that is aligned with human values.
arXiv Detail & Related papers (2020-08-05T17:59:16Z)
- On the Morality of Artificial Intelligence [154.69452301122175]
We propose conceptual and practical principles and guidelines for Machine Learning research and deployment.
We insist on concrete actions that can be taken by practitioners to pursue a more ethical and moral practice of ML aimed at using AI for social good.
arXiv Detail & Related papers (2019-12-26T23:06:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.