Values, Ethics, Morals? On the Use of Moral Concepts in NLP Research
- URL: http://arxiv.org/abs/2310.13915v1
- Date: Sat, 21 Oct 2023 06:04:10 GMT
- Title: Values, Ethics, Morals? On the Use of Moral Concepts in NLP Research
- Authors: Karina Vida, Judith Simon, Anne Lauscher
- Abstract summary: We provide an overview of some important ethical concepts stemming from philosophy.
We systematically survey the existing literature on moral NLP.
Our findings show that, for instance, most papers neither provide a clear definition of the terms they use nor adhere to definitions from philosophy.
- Score: 13.28561989789828
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: With language technology increasingly affecting individuals' lives, many
recent works have investigated the ethical aspects of NLP. Among other topics,
researchers focused on the notion of morality, investigating, for example,
which moral judgements language models make. However, there has been little to
no discussion of the terminology and the theories underpinning those efforts
and their implications. This lack is highly problematic, as it hides the works'
underlying assumptions and hinders a thorough and targeted scientific debate of
morality in NLP. In this work, we address this research gap by (a) providing an
overview of some important ethical concepts stemming from philosophy and (b)
systematically surveying the existing literature on moral NLP w.r.t. their
philosophical foundation, terminology, and data basis. For instance, we analyse
what ethical theory an approach is based on, how this decision is justified,
and what implications it entails. Our findings surveying 92 papers show that,
for instance, most papers neither provide a clear definition of the terms they
use nor adhere to definitions from philosophy. Finally, (c) we give three
recommendations for future research in the field. We hope our work will lead to
a more informed, careful, and sound discussion of morality in language
technology.
Related papers
- Rethinking Machine Ethics -- Can LLMs Perform Moral Reasoning through the Lens of Moral Theories? [78.3738172874685]
Making moral judgments is an essential step toward developing ethical AI systems.
Prevalent approaches are mostly implemented in a bottom-up manner, which uses a large set of annotated data to train models based on crowd-sourced opinions about morality.
This work proposes a flexible top-down framework to steer (Large) Language Models (LMs) to perform moral reasoning with well-established moral theories from interdisciplinary research.
arXiv Detail & Related papers (2023-08-29T15:57:32Z)
- Ethical Frameworks and Computer Security Trolley Problems: Foundations for Conversations [14.120888473204907]
We make and explore connections between moral questions in computer security research and ethics / moral philosophy.
We do not seek to define what is morally right or wrong, nor do we argue for one framework over another.
arXiv Detail & Related papers (2023-02-28T05:39:17Z)
- ClarifyDelphi: Reinforced Clarification Questions with Defeasibility Rewards for Social and Moral Situations [81.70195684646681]
We present ClarifyDelphi, an interactive system that learns to ask clarification questions.
We posit that questions whose potential answers lead to diverging moral judgments are the most informative.
Our work is ultimately inspired by studies in cognitive science that have investigated the flexibility in moral cognition.
arXiv Detail & Related papers (2022-12-20T16:33:09Z)
- AiSocrates: Towards Answering Ethical Quandary Questions [51.53350252548668]
AiSocrates is a system for the deliberative exchange of different perspectives on an ethical quandary.
We show that AiSocrates generates promising answers to ethical quandary questions with multiple perspectives.
We argue that AiSocrates is a promising step toward developing an NLP system that incorporates human values explicitly through prompt instructions.
arXiv Detail & Related papers (2022-05-12T09:52:59Z)
- Contextualizing Artificially Intelligent Morality: A Meta-Ethnography of Top-Down, Bottom-Up, and Hybrid Models for Theoretical and Applied Ethics in Artificial Intelligence [0.0]
In this meta-ethnography, we explore three different angles of ethical artificial intelligence (AI) design implementation.
The framework's novel contribution is the political angle, which considers ethics in AI as either determined by corporations and governments and imposed through policies or law (coming from the top) or arising from the bottom up.
There is a focus on reinforcement learning as an example of a bottom-up applied technical approach and AI ethics principles as a practical top-down approach.
arXiv Detail & Related papers (2022-04-15T18:47:49Z)
- Metaethical Perspectives on 'Benchmarking' AI Ethics [81.65697003067841]
Benchmarks are seen as the cornerstone for measuring technical progress in Artificial Intelligence (AI) research.
An increasingly prominent research area in AI is ethics, which currently has no set of benchmarks nor commonly accepted way for measuring the 'ethicality' of an AI system.
We argue that it makes more sense to talk about 'values' rather than 'ethics' when considering the possible actions of present and future AI systems.
arXiv Detail & Related papers (2022-04-11T14:36:39Z)
- A Word on Machine Ethics: A Response to Jiang et al. (2021) [36.955224006838584]
We focus on a single case study of the recently proposed Delphi model and offer a critique of the project's proposed method of automating morality judgments.
We conclude with a discussion of how machine ethics could usefully proceed, by focusing on current and near-future uses of technology.
arXiv Detail & Related papers (2021-11-07T19:31:51Z)
- Use of Formal Ethical Reviews in NLP Literature: Historical Trends and Current Practices [6.195761193461355]
Ethical aspects of research in language technologies have received much attention recently.
It is standard practice to have a study involving human subjects reviewed and approved by the institution's professional ethics committee or board.
With the rising concerns and discourse around the ethics of NLP, do we also observe a rise in formal ethical reviews of NLP studies?
arXiv Detail & Related papers (2021-06-02T12:12:59Z)
- Case Study: Deontological Ethics in NLP [119.53038547411062]
We study one ethical theory, namely deontological ethics, from the perspective of NLP.
In particular, we focus on the generalization principle and the respect for autonomy through informed consent.
We provide four case studies to demonstrate how these principles can be used with NLP systems.
arXiv Detail & Related papers (2020-10-09T16:04:51Z)
- Scruples: A Corpus of Community Ethical Judgments on 32,000 Real-Life Anecdotes [72.64975113835018]
Motivated by descriptive ethics, we investigate a novel, data-driven approach to machine ethics.
We introduce Scruples, the first large-scale dataset with 625,000 ethical judgments over 32,000 real-life anecdotes.
Our dataset presents a major challenge to state-of-the-art neural language models, leaving significant room for improvement.
arXiv Detail & Related papers (2020-08-20T17:34:15Z)