Text-based inference of moral sentiment change
- URL: http://arxiv.org/abs/2001.07209v1
- Date: Mon, 20 Jan 2020 18:52:45 GMT
- Title: Text-based inference of moral sentiment change
- Authors: Jing Yi Xie, Renato Ferreira Pinto Jr., Graeme Hirst, Yang Xu
- Abstract summary: We present a text-based framework for investigating moral sentiment change of the public via longitudinal corpora.
We build our methodology by exploring moral biases learned from diachronic word embeddings.
Our work offers opportunities for applying natural language processing toward characterizing moral sentiment change in society.
- Score: 11.188112005462536
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a text-based framework for investigating moral sentiment change of
the public via longitudinal corpora. Our framework is based on the premise that
language use can inform people's moral perception toward right or wrong, and we
build our methodology by exploring moral biases learned from diachronic word
embeddings. We demonstrate how a parameter-free model supports inference of
historical shifts in moral sentiment toward concepts such as slavery and
democracy over centuries at three incremental levels: moral relevance, moral
polarity, and fine-grained moral dimensions. We apply this methodology to
visualizing moral time courses of individual concepts and analyzing the
relations between psycholinguistic variables and rates of moral sentiment
change at scale. Our work offers opportunities for applying natural language
processing toward characterizing moral sentiment change in society.
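
The paper's parameter-free inference boils down to comparing a concept's embedding against centroids of moral seed words within each historical time slice. Below is a minimal sketch of the moral-polarity level in Python, assuming pre-trained diachronic embeddings (one vector space per decade) and positive/negative moral seed lexicons; the function names and data layout are illustrative assumptions, not the authors' released code.

```python
# Minimal sketch of centroid-based moral polarity inference over
# diachronic embeddings. Assumes embeddings_by_decade maps each decade
# to a dict of {word: vector} (hypothetical layout, not the paper's code).
import numpy as np

def centroid(vectors):
    """Mean vector of a set of word embeddings."""
    return np.mean(vectors, axis=0)

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def moral_polarity(concept_vec, pos_seed_vecs, neg_seed_vecs):
    """Label a concept morally positive or negative by comparing its
    similarity to the centroids of positive vs. negative moral seeds."""
    pos_sim = cosine(concept_vec, centroid(pos_seed_vecs))
    neg_sim = cosine(concept_vec, centroid(neg_seed_vecs))
    label = "positive" if pos_sim > neg_sim else "negative"
    return label, pos_sim - neg_sim  # margin: signed strength of polarity

def time_course(word, embeddings_by_decade, pos_seeds, neg_seeds):
    """Trace a concept's moral polarity across decades by repeating the
    same centroid test in each decade's embedding space."""
    course = {}
    for decade, emb in embeddings_by_decade.items():
        if word not in emb:
            continue
        pos_vecs = [emb[w] for w in pos_seeds if w in emb]
        neg_vecs = [emb[w] for w in neg_seeds if w in emb]
        course[decade] = moral_polarity(emb[word], pos_vecs, neg_vecs)
    return course
```

The other two levels follow the same pattern with different seed sets: moral vs. non-moral seeds for moral relevance, and one centroid per moral foundation category for the fine-grained dimensions.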
Related papers
- Evaluating Moral Beliefs across LLMs through a Pluralistic Framework [22.0799438612003]
This study introduces a novel three-module framework to evaluate the moral beliefs of four prominent large language models.
We constructed a dataset containing 472 moral choice scenarios in Chinese, derived from moral words.
By ranking these moral choices, we discern the varying moral beliefs held by different language models.
arXiv Detail & Related papers (2024-11-06T04:52:38Z)
- Morality is Non-Binary: Building a Pluralist Moral Sentence Embedding Space using Contrastive Learning [4.925187725973777]
Pluralist moral philosophers argue that human morality can be deconstructed into a finite number of elements.
We build a pluralist moral sentence embedding space via a state-of-the-art contrastive learning approach.
Our results show that a pluralist approach to morality can be captured in an embedding space.
arXiv Detail & Related papers (2024-01-30T18:15:25Z)
- Rethinking Machine Ethics -- Can LLMs Perform Moral Reasoning through the Lens of Moral Theories? [78.3738172874685]
Making moral judgments is an essential step toward developing ethical AI systems.
Prevalent approaches are mostly implemented in a bottom-up manner, which uses a large set of annotated data to train models based on crowd-sourced opinions about morality.
This work proposes a flexible top-down framework to steer (Large) Language Models (LMs) to perform moral reasoning with well-established moral theories from interdisciplinary research.
arXiv Detail & Related papers (2023-08-29T15:57:32Z)
- MoralDial: A Framework to Train and Evaluate Moral Dialogue Systems via Moral Discussions [71.25236662907056]
A moral dialogue system aligned with users' values could enhance conversation engagement and user connections.
We propose a framework, MoralDial, to train and evaluate moral dialogue systems.
arXiv Detail & Related papers (2022-12-21T02:21:37Z)
- ClarifyDelphi: Reinforced Clarification Questions with Defeasibility Rewards for Social and Moral Situations [81.70195684646681]
We present ClarifyDelphi, an interactive system that learns to ask clarification questions.
We posit that questions whose potential answers lead to diverging moral judgments are the most informative.
Our work is ultimately inspired by studies in cognitive science that have investigated the flexibility in moral cognition.
arXiv Detail & Related papers (2022-12-20T16:33:09Z)
- When to Make Exceptions: Exploring Language Models as Accounts of Human Moral Judgment [96.77970239683475]
AI systems need to be able to understand, interpret and predict human moral judgments and decisions.
A central challenge for AI safety is capturing the flexibility of the human moral mind.
We present a novel challenge set consisting of rule-breaking question answering.
arXiv Detail & Related papers (2022-10-04T09:04:27Z)
- Learning to Adapt Domain Shifts of Moral Values via Instance Weighting [74.94940334628632]
Classifying moral values in user-generated text from social media is critical to understanding community cultures.
Moral values and language usage can change across social movements.
We propose a neural adaptation framework via instance weighting to improve cross-domain classification tasks.
arXiv Detail & Related papers (2022-04-15T18:15:41Z)
- Identifying Morality Frames in Political Tweets using Relational Learning [27.047907641503762]
Moral sentiment is motivated by its targets, which can correspond to individuals or collective entities.
We introduce morality frames, a representation framework for organizing moral attitudes directed at different entities.
We propose a relational learning model to predict moral attitudes towards entities and moral foundations jointly.
arXiv Detail & Related papers (2021-09-09T19:48:57Z)
- An unsupervised framework for tracing textual sources of moral change [17.010859995410556]
We present a novel framework for tracing textual sources of moral change toward entities through time.
We evaluate our framework on a diverse set of data ranging from social media to news articles.
We show that our framework not only captures fine-grained human moral judgments, but also identifies coherent source topics of moral change triggered by historical events.
arXiv Detail & Related papers (2021-09-01T20:35:33Z)
- Contextualized moral inference [12.574316678945195]
We present a text-based approach that predicts people's intuitive judgment of moral vignettes.
We show that a contextualized representation offers a substantial advantage over alternative representations.
arXiv Detail & Related papers (2020-08-25T00:34:28Z)
- Aligning AI With Shared Human Values [85.2824609130584]
We introduce the ETHICS dataset, a new benchmark that spans concepts in justice, well-being, duties, virtues, and commonsense morality.
We find that current language models have a promising but incomplete ability to predict basic human ethical judgements.
Our work shows that progress can be made on machine ethics today, and it provides a steppingstone toward AI that is aligned with human values.
arXiv Detail & Related papers (2020-08-05T17:59:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information listed and is not responsible for any consequences of its use.