If our aim is to build morality into an artificial agent, how might we begin to go about doing so?
- URL: http://arxiv.org/abs/2310.08295v1
- Date: Thu, 12 Oct 2023 12:56:12 GMT
- Title: If our aim is to build morality into an artificial agent, how might we begin to go about doing so?
- Authors: Reneira Seeamber and Cosmin Badea
- Abstract summary: We discuss the different aspects that should be considered when building moral agents, including the most relevant moral paradigms and challenges.
We propose solutions including a hybrid approach to design and a hierarchical approach to combining moral paradigms.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As Artificial Intelligence (AI) becomes pervasive in most fields, from
healthcare to autonomous driving, it is essential that we find successful ways
of building morality into our machines, especially for decision-making.
However, the question of what it means to be moral is still debated,
particularly in the context of AI. In this paper, we highlight the different
aspects that should be considered when building moral agents, including the
most relevant moral paradigms and challenges. We also discuss the top-down and
bottom-up approaches to design and the role of emotion and sentience in
morality. We then propose solutions including a hybrid approach to design and a
hierarchical approach to combining moral paradigms. We emphasize how governance
and policy are becoming ever more critical in AI Ethics and in ensuring that
the tasks we set for moral agents are attainable, that ethical behavior is
achieved, and that we obtain good AI.
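To make the proposed hierarchical approach to combining moral paradigms more concrete, the sketch below shows one way such a hierarchy might be wired up in Python. It is a minimal illustration under assumed conventions, not the paper's implementation: the Action fields, the deontological_filter and consequentialist_rank functions, and the two-tier ordering are all assumptions made for the example.

```python
from dataclasses import dataclass
from typing import Iterable, List, Optional

# Hypothetical sketch of a hierarchical combination of moral paradigms:
# hard deontological constraints are applied first, and only the actions
# that survive are ranked by a consequentialist utility estimate.

@dataclass
class Action:
    name: str
    harms_person: bool       # crude stand-in for a deontological check
    expected_utility: float  # crude stand-in for consequentialist scoring

def deontological_filter(actions: Iterable[Action]) -> List[Action]:
    """Tier 1: remove actions that violate a hard constraint."""
    return [a for a in actions if not a.harms_person]

def consequentialist_rank(actions: List[Action]) -> Optional[Action]:
    """Tier 2: among permissible actions, pick the highest expected utility."""
    return max(actions, key=lambda a: a.expected_utility, default=None)

def choose_action(actions: Iterable[Action]) -> Optional[Action]:
    """Hierarchical decision: constraints first, then optimisation."""
    permissible = deontological_filter(actions)
    return consequentialist_rank(permissible)

if __name__ == "__main__":
    candidates = [
        Action("swerve_into_pedestrian", harms_person=True, expected_utility=0.9),
        Action("brake_hard", harms_person=False, expected_utility=0.6),
        Action("do_nothing", harms_person=False, expected_utility=0.2),
    ]
    print(choose_action(candidates))  # prints the Action named "brake_hard"
```

The ordering encodes the key design choice in such a hierarchy: because the constraint-based paradigm sits above the optimising one, no amount of expected utility can override a deontological veto.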
Related papers
- Can Artificial Intelligence Embody Moral Values? [0.0]
The neutrality thesis holds that technology cannot be laden with values.
In this paper, we argue that artificial intelligence, particularly artificial agents that autonomously make decisions to pursue their goals, challenges the neutrality thesis.
Our central claim is that the computational models underlying artificial agents can integrate representations of moral values such as fairness, honesty and avoiding harm.
arXiv Detail & Related papers (2024-08-22T09:39:16Z)
- Why should we ever automate moral decision making? [30.428729272730727]
Concerns arise when AI is involved in decisions with significant moral implications.
Moral reasoning lacks a broadly accepted framework.
An alternative approach involves AI learning from human moral decisions.
arXiv Detail & Related papers (2024-07-10T13:59:22Z)
- Rethinking Machine Ethics -- Can LLMs Perform Moral Reasoning through the Lens of Moral Theories? [78.3738172874685]
Making moral judgments is an essential step toward developing ethical AI systems.
Prevalent approaches are mostly implemented in a bottom-up manner, which uses a large set of annotated data to train models based on crowd-sourced opinions about morality.
This work proposes a flexible top-down framework to steer (Large) Language Models (LMs) to perform moral reasoning with well-established moral theories from interdisciplinary research.
arXiv Detail & Related papers (2023-08-29T15:57:32Z)
- Towards Theory-based Moral AI: Moral AI with Aggregating Models Based on Normative Ethical Theory [7.412445894287708]
Moral AI has been studied in the fields of philosophy and artificial intelligence.
Recent developments in AI have made it increasingly necessary to equip AI systems with morality.
arXiv Detail & Related papers (2023-06-20T10:22:24Z)
- Beyond Bias and Compliance: Towards Individual Agency and Plurality of Ethics in AI [0.0]
We argue that the way data is labeled plays an essential role in the way AI behaves.
We propose an alternative path that allows for the plurality of values and the freedom of individual expression.
arXiv Detail & Related papers (2023-02-23T16:33:40Z)
- Modeling Moral Choices in Social Dilemmas with Multi-Agent Reinforcement Learning [4.2050490361120465]
A bottom-up learning approach may be more appropriate for studying and developing ethical behavior in AI agents.
We present a systematic analysis of the choices made by intrinsically-motivated RL agents whose rewards are based on moral theories.
We analyze the impact of different types of morality on the emergence of cooperation, defection or exploitation. (A minimal, hypothetical reward-shaping sketch illustrating this setup appears after this list.)
arXiv Detail & Related papers (2023-01-20T09:36:42Z)
- When to Make Exceptions: Exploring Language Models as Accounts of Human Moral Judgment [96.77970239683475]
AI systems need to be able to understand, interpret and predict human moral judgments and decisions.
A central challenge for AI safety is capturing the flexibility of the human moral mind.
We present a novel challenge set consisting of rule-breaking question answering.
arXiv Detail & Related papers (2022-10-04T09:04:27Z)
- Metaethical Perspectives on 'Benchmarking' AI Ethics [81.65697003067841]
Benchmarks are seen as the cornerstone for measuring technical progress in Artificial Intelligence (AI) research.
An increasingly prominent research area in AI is ethics, which currently has no set of benchmarks nor commonly accepted way for measuring the 'ethicality' of an AI system.
We argue that it makes more sense to talk about 'values' rather than 'ethics' when considering the possible actions of present and future AI systems.
arXiv Detail & Related papers (2022-04-11T14:36:39Z)
- Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can help bridge these differing perspectives by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)
- Aligning AI With Shared Human Values [85.2824609130584]
We introduce the ETHICS dataset, a new benchmark that spans concepts in justice, well-being, duties, virtues, and commonsense morality.
We find that current language models have a promising but incomplete ability to predict basic human ethical judgements.
Our work shows that progress can be made on machine ethics today, and it provides a steppingstone toward AI that is aligned with human values.
arXiv Detail & Related papers (2020-08-05T17:59:16Z)
- On the Morality of Artificial Intelligence [154.69452301122175]
We propose conceptual and practical principles and guidelines for Machine Learning research and deployment.
We insist on concrete actions that can be taken by practitioners to pursue a more ethical and moral practice of ML aimed at using AI for social good.
arXiv Detail & Related papers (2019-12-26T23:06:54Z)
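As a concrete illustration of the intrinsically-motivated reinforcement-learning idea summarized in the "Modeling Moral Choices in Social Dilemmas with Multi-Agent Reinforcement Learning" entry above, the following Python sketch mixes an agent's extrinsic payoff with an intrinsic reward derived from a moral theory. The action labels, the two example theories, and the weighting factor beta are illustrative assumptions, not that paper's actual reward definitions.

```python
from typing import Callable, Dict

# Hypothetical sketch of intrinsic moral reward shaping in a social dilemma.
# "DEFECT"/"COOPERATE" and the weighting beta are assumptions for the example.

def utilitarian_reward(own_payoff: float, other_payoff: float, action: str) -> float:
    """Intrinsic reward = collective welfare (sum of both agents' payoffs)."""
    return own_payoff + other_payoff

def deontological_reward(own_payoff: float, other_payoff: float, action: str) -> float:
    """Intrinsic reward = penalty for breaking a 'do not defect' norm."""
    return -1.0 if action == "DEFECT" else 0.0

MORAL_THEORIES: Dict[str, Callable[[float, float, str], float]] = {
    "utilitarian": utilitarian_reward,
    "deontological": deontological_reward,
}

def shaped_reward(theory: str, own_payoff: float, other_payoff: float,
                  action: str, beta: float = 0.5) -> float:
    """Total reward the learner sees: extrinsic payoff + weighted moral term."""
    intrinsic = MORAL_THEORIES[theory](own_payoff, other_payoff, action)
    return own_payoff + beta * intrinsic

# Example: in a Prisoner's Dilemma round where defecting earns 3 while the
# partner earns 0, a deontological agent's shaped reward is 3 + 0.5*(-1) = 2.5.
print(shaped_reward("deontological", own_payoff=3.0, other_payoff=0.0, action="DEFECT"))
```

Sweeping beta and swapping the moral theory are the kinds of knobs such an analysis can turn to study when cooperation, defection or exploitation emerges.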