Culturally-Attuned Moral Machines: Implicit Learning of Human Value
Systems by AI through Inverse Reinforcement Learning
- URL: http://arxiv.org/abs/2312.17479v1
- Date: Fri, 29 Dec 2023 05:39:10 GMT
- Authors: Nigini Oliveira, Jasmine Li, Koosha Khalvati, Rodolfo Cortes Barragan,
Katharina Reinecke, Andrew N. Meltzoff, and Rajesh P. N. Rao
- Abstract summary: We argue that the value system of an AI should be culturally attuned.
How AI systems might acquire such codes from human observation and interaction has remained an open question.
We show that an AI agent learning from the average behavior of a particular cultural group can acquire altruistic characteristics reflective of that group's behavior.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Constructing a universal moral code for artificial intelligence (AI) is
difficult or even impossible, given that different human cultures have
different definitions of morality and different societal norms. We therefore
argue that the value system of an AI should be culturally attuned: just as a
child raised in a particular culture learns the specific values and norms of
that culture, we propose that an AI agent operating in a particular human
community should acquire that community's moral, ethical, and cultural codes.
How AI systems might acquire such codes from human observation and interaction
has remained an open question. Here, we propose using inverse reinforcement
learning (IRL) as a method for AI agents to acquire a culturally-attuned value
system implicitly. We test our approach using an experimental paradigm in which
AI agents use IRL to learn different reward functions, which govern the agents'
moral values, by observing the behavior of different cultural groups in an
online virtual world requiring real-time decision making. We show that an AI
agent learning from the average behavior of a particular cultural group can
acquire altruistic characteristics reflective of that group's behavior, and
this learned value system can generalize to new scenarios requiring altruistic
judgments. Our results provide, to our knowledge, the first demonstration that
AI agents could potentially be endowed with the ability to continually learn
their values and norms from observing and interacting with humans, thereby
becoming attuned to the culture they are operating in.
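To make the mechanism in the abstract concrete, the sketch below implements a standard form of inverse reinforcement learning (maximum-entropy IRL, in the style of Ziebart et al., 2008) on a toy chain world: an agent recovers a reward function, the stand-in for a group's "values", purely from the behavior of a demonstrator policy. The environment, features, and synthetic demonstrations are illustrative assumptions for exposition, not the paper's virtual-world task or implementation.

```python
# Minimal maximum-entropy IRL sketch (numpy only). All specifics below
# (5-state chain world, one-hot features, the synthetic "group" policy)
# are illustrative assumptions, not the paper's actual experimental setup.
import numpy as np

N, A = 5, 2            # states in a chain world; actions: 0 = left, 1 = right
GAMMA = 0.9            # discount for soft value iteration
HORIZON = 10           # episode length used for visitation counts

# Deterministic transitions: P[s, a] is the next state.
P = np.array([[max(s - 1, 0), min(s + 1, N - 1)] for s in range(N)])

F = np.eye(N)                                # one-hot state features
true_r = np.array([0., 0., 0., 0., 1.])      # hidden "values": group favors state 4

def soft_policy(r):
    """Soft (MaxEnt) value iteration -> stochastic policy pi[s, a]."""
    V = np.zeros(N)
    for _ in range(100):
        Q = r[:, None] + GAMMA * V[P]        # Q[s, a]
        V = np.log(np.exp(Q).sum(axis=1))    # soft maximum over actions
    return np.exp(Q - V[:, None])            # rows already sum to 1

def visitation(pi):
    """Expected state visitation counts over HORIZON steps from state 0."""
    d = np.zeros(N)
    d[0] = 1.0
    total = np.zeros(N)
    for _ in range(HORIZON):
        total += d
        d_next = np.zeros(N)
        for s in range(N):
            for a in range(A):
                d_next[P[s, a]] += d[s] * pi[s, a]
        d = d_next
    return total

# "Demonstrations": expected feature counts under the group's own policy.
demo_mu = F.T @ visitation(soft_policy(true_r))

# MaxEnt IRL: adjust reward weights until the learner's expected feature
# counts match the demonstrators' (gradient of the MaxEnt log-likelihood).
theta = np.zeros(N)
for _ in range(200):
    mu = F.T @ visitation(soft_policy(F @ theta))
    theta += 0.05 * (demo_mu - mu)

print("recovered reward weights:", np.round(theta, 2))
```

Gradient ascent on the maximum-entropy objective pushes the learned weights until the learner's expected state visitations match the demonstrators', so the recovered reward reproduces the observed behavior (up to the usual reward ambiguity of IRL). This is the sense in which an agent trained on a group's average behavior can internalize that group's preferences.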
Related papers
- Modelling Human Values for AI Reasoning (arXiv, 2024-02-09)
  We detail a formal model of human values for their explicit computational representation.
  We show how this model can provide the foundational apparatus for AI-based reasoning over values.
  We propose a roadmap for future integrated, interdisciplinary research into human values in AI.
- Learning Human-like Representations to Enable Learning Human Values (arXiv, 2023-12-21)
  We argue that representational alignment between humans and AI agents facilitates value alignment.
  We focus on ethics as one aspect of value alignment and train ML agents using a variety of methods.
- A computational framework of human values for ethical AI (arXiv, 2023-05-04)
  Values provide a means to engineer ethical AI.
  No formal, computational definition of values has yet been proposed.
  We address this through a formal conceptual framework rooted in the social sciences.
- Cultural Incongruencies in Artificial Intelligence (arXiv, 2022-11-19)
  We describe a set of cultural dependencies and incongruencies in the context of AI-based language and vision technologies.
  Problems arise when these technologies interact with globally diverse societies and cultures, with different values and interpretive practices.
- When to Make Exceptions: Exploring Language Models as Accounts of Human Moral Judgment (arXiv, 2022-10-04)
  AI systems need to be able to understand, interpret, and predict human moral judgments and decisions.
  A central challenge for AI safety is capturing the flexibility of the human moral mind.
  We present a novel challenge set consisting of rule-breaking question answering.
- Metaethical Perspectives on 'Benchmarking' AI Ethics (arXiv, 2022-04-11)
  Benchmarks are seen as the cornerstone for measuring technical progress in Artificial Intelligence (AI) research.
  An increasingly prominent research area in AI is ethics, which currently has no set of benchmarks nor a commonly accepted way of measuring the 'ethicality' of an AI system.
  We argue that it makes more sense to talk about 'values' rather than 'ethics' when considering the possible actions of present and future AI systems.
- Learning Robust Real-Time Cultural Transmission without Human Data (arXiv, 2022-03-01)
  We provide a method for generating zero-shot, high-recall cultural transmission in artificially intelligent agents.
  Our agents succeed at real-time cultural transmission from humans in novel contexts without using any pre-collected human data.
  This paves the way for cultural evolution as an algorithm for developing artificial general intelligence.
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) (arXiv, 2022-01-26)
  Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
  It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
- Trustworthy AI: A Computational Perspective (arXiv, 2021-07-12)
  We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
  For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
- Aligning AI With Shared Human Values (arXiv, 2020-08-05)
  We introduce the ETHICS dataset, a new benchmark that spans concepts in justice, well-being, duties, virtues, and commonsense morality.
  We find that current language models have a promising but incomplete ability to predict basic human ethical judgements.
  Our work shows that progress can be made on machine ethics today, and it provides a stepping stone toward AI that is aligned with human values.