Cross Fertilizing Empathy from Brain to Machine as a Value Alignment Strategy
- URL: http://arxiv.org/abs/2312.07579v1
- Date: Sun, 10 Dec 2023 19:12:03 GMT
- Title: Cross Fertilizing Empathy from Brain to Machine as a Value Alignment Strategy
- Authors: Devin Gonier, Adrian Adduci, Cassidy LoCascio
- Abstract summary: This paper argues empathy is necessary for this task, despite being often neglected in favor of more deductive approaches.
We offer an inside-out approach that grounds morality within the context of the brain as a basis for algorithmically understanding ethics and empathy.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: AI Alignment research seeks to align human and AI goals to ensure independent
actions by a machine are always ethical. This paper argues empathy is necessary
for this task, despite being often neglected in favor of more deductive
approaches. We offer an inside-out approach that grounds morality within the
context of the brain as a basis for algorithmically understanding ethics and
empathy. These arguments are justified via a survey of relevant literature. The
paper concludes with a suggested experimental approach to future research and
some initial experimental observations.
Related papers
- Building Altruistic and Moral AI Agent with Brain-inspired Affective Empathy Mechanisms [7.3650155128839225]
This paper is dedicated to enabling intelligent agents to autonomously acquire moral behaviors through human-like affective empathy mechanisms.
Based on the principle of moral utilitarianism, we design the moral reward function that integrates intrinsic empathy and extrinsic self-task goals.
arXiv Detail & Related papers (2024-10-29T09:19:27Z)
- Towards Bidirectional Human-AI Alignment: A Systematic Review for Clarifications, Framework, and Future Directions [101.67121669727354]
Recent advancements in AI have highlighted the importance of guiding AI systems towards the intended goals, ethical principles, and values of individuals and groups, a concept broadly recognized as alignment.
The lack of clarified definitions and scopes of human-AI alignment poses a significant obstacle, hampering collaborative efforts across research domains to achieve this alignment.
We introduce a systematic review of over 400 papers published between 2019 and January 2024, spanning multiple domains such as Human-Computer Interaction (HCI), Natural Language Processing (NLP), and Machine Learning (ML).
arXiv Detail & Related papers (2024-06-13T16:03:25Z)
- Quelle éthique pour quelle IA ? (What Ethics for What AI?) [0.0]
This study proposes an analysis of the different types of ethical approaches involved in the ethics of AI.
The author introduces the contemporary need for and meaning of ethics, distinguishes it from other registers of normativity, and underlines its resistance to formalization.
The study concludes with a reflection on the reasons why a human ethics of AI based on a pragmatic practice of contextual ethics remains necessary and irreducible to any formalization or automated treatment of the ethical questions that arise for humans.
arXiv Detail & Related papers (2024-05-21T08:13:02Z)
- Towards a Feminist Metaethics of AI [0.0]
The author argues that insufficiencies in mainstream AI ethics could be mitigated by developing a research agenda for a feminist metaethics of AI.
Applying this perspective to the context of AI, I suggest that a feminist metaethics of AI would examine: (i) the continuity between theory and action in AI ethics; (ii) the real-life effects of AI ethics; (iii) the role and profile of those involved in AI ethics; and (iv) the effects of AI on power relations through methods that pay attention to context, emotions and narrative.
arXiv Detail & Related papers (2023-11-10T13:26:45Z)
- A method for the ethical analysis of brain-inspired AI [0.8431877864777444]
This article examines some conceptual, technical, and ethical issues raised by the development and use of brain-inspired AI.
The aim of the paper is to introduce a method that can be applied to identify and address the ethical issues arising from brain-inspired AI.
arXiv Detail & Related papers (2023-05-18T12:56:27Z)
- Factoring the Matrix of Domination: A Critical Review and Reimagination of Intersectionality in AI Fairness [55.037030060643126]
Intersectionality is a critical framework that allows us to examine how social inequalities persist.
We argue that adopting intersectionality as an analytical framework is pivotal to effectively operationalizing fairness.
arXiv Detail & Related papers (2023-03-16T21:02:09Z)
- The Role of AI in Drug Discovery: Challenges, Opportunities, and Strategies [97.5153823429076]
The benefits, challenges, and drawbacks of AI in this field are reviewed.
The use of data augmentation, explainable AI, and the integration of AI with traditional experimental methods are also discussed.
arXiv Detail & Related papers (2022-12-08T23:23:39Z)
- When to Make Exceptions: Exploring Language Models as Accounts of Human Moral Judgment [96.77970239683475]
AI systems need to be able to understand, interpret, and predict human moral judgments and decisions.
A central challenge for AI safety is capturing the flexibility of the human moral mind.
We present a novel challenge set consisting of rule-breaking question answering.
arXiv Detail & Related papers (2022-10-04T09:04:27Z)
- Metaethical Perspectives on 'Benchmarking' AI Ethics [81.65697003067841]
Benchmarks are seen as the cornerstone for measuring technical progress in Artificial Intelligence (AI) research.
An increasingly prominent research area in AI is ethics, which currently has no set of benchmarks nor a commonly accepted way of measuring the 'ethicality' of an AI system.
We argue that it makes more sense to talk about 'values' rather than 'ethics' when considering the possible actions of present and future AI systems.
arXiv Detail & Related papers (2022-04-11T14:36:39Z)
- Case Study: Deontological Ethics in NLP [119.53038547411062]
We study one ethical theory, namely deontological ethics, from the perspective of NLP.
In particular, we focus on the generalization principle and the respect for autonomy through informed consent.
We provide four case studies to demonstrate how these principles can be used with NLP systems.
arXiv Detail & Related papers (2020-10-09T16:04:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.