Enhancing Stance Classification with Quantified Moral Foundations
- URL: http://arxiv.org/abs/2310.09848v1
- Date: Sun, 15 Oct 2023 14:40:57 GMT
- Title: Enhancing Stance Classification with Quantified Moral Foundations
- Authors: Hong Zhang, Prasanta Bhattacharya, Wei Gao, Liang Ze Wong, Brandon
Siyuan Loh, Joseph J. P. Simons, Jisun An
- Abstract summary: We investigate how moral foundation dimensions can contribute to predicting an individual's stance on a given target.
We incorporate moral foundation features extracted from text, along with message semantic features, to classify stances at both message- and user-levels.
Preliminary results suggest that encoding moral foundations can enhance the performance of stance detection tasks.
- Score: 7.642826564345958
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This study enhances stance detection on social media by incorporating deeper
psychological attributes, specifically individuals' moral foundations. These
theoretically derived dimensions aim to provide a comprehensive profile of an
individual's moral concerns, which recent work has linked to behaviour in a
range of domains, including society, politics, health, and the
environment. In this paper, we investigate how moral foundation dimensions can
contribute to predicting an individual's stance on a given target. Specifically,
we incorporate moral foundation features extracted from text, along with
message semantic features, to classify stances at both message- and user-levels
across a range of targets and models. Our preliminary results suggest that
encoding moral foundations can enhance the performance of stance detection
tasks and help illuminate the associations between specific moral foundations
and online stances on target topics. The results highlight the importance of
considering deeper psychological attributes in stance analysis and underscore
the role of moral foundations in guiding online social behavior.
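
A minimal sketch of the pipeline the abstract describes: quantified moral-foundation features are concatenated with message-level semantic features before a stance classifier. This is not the authors' code; the five-foundation toy lexicon stands in for a validated resource such as the Moral Foundations Dictionary, and TF-IDF plus logistic regression stand in for the paper's actual semantic encoders and models.

```python
import numpy as np
from scipy.sparse import csr_matrix, hstack
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy stand-in lexicon; real work would use a validated moral-foundations
# dictionary or a learned scorer for the five foundations.
MF_LEXICON = {
    "care":      {"harm", "protect", "suffer", "safe"},
    "fairness":  {"fair", "unfair", "justice", "equal"},
    "loyalty":   {"loyal", "betray", "nation", "family"},
    "authority": {"law", "order", "obey", "tradition"},
    "purity":    {"pure", "sacred", "disgust", "sin"},
}

def moral_features(texts):
    """One column per foundation: normalized lexicon-hit counts."""
    feats = np.zeros((len(texts), len(MF_LEXICON)))
    for i, text in enumerate(texts):
        tokens = text.lower().split()
        for j, words in enumerate(MF_LEXICON.values()):
            feats[i, j] = sum(t in words for t in tokens) / max(len(tokens), 1)
    return csr_matrix(feats)

messages = ["Protect the vulnerable, vaccines keep us safe",
            "Mandates betray our freedom and tradition"]
stances = [1, 0]  # toy labels: 1 = favor, 0 = against

# Message-level semantics (TF-IDF here) concatenated with moral features.
tfidf = TfidfVectorizer()
X = hstack([tfidf.fit_transform(messages), moral_features(messages)])
clf = LogisticRegression().fit(X, stances)
print(clf.predict(X))
```

For user-level stance classification, one plausible aggregation (an assumption, not the paper's stated method) is to average these foundation scores over all of a user's messages before classifying.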
Related papers
- The Odyssey of Commonsense Causality: From Foundational Benchmarks to Cutting-Edge Reasoning [70.16523526957162]
Understanding commonsense causality helps people better understand the principles of the real world.
Despite its significance, a systematic exploration of this topic is notably lacking.
Our work aims to provide a systematic overview, update scholars on recent advancements, and offer a pragmatic guide for beginners.
arXiv Detail & Related papers (2024-06-27T16:30:50Z) - Rethinking Machine Ethics -- Can LLMs Perform Moral Reasoning through the Lens of Moral Theories? [78.3738172874685]
Making moral judgments is an essential step toward developing ethical AI systems.
Prevalent approaches are mostly implemented in a bottom-up manner, using a large set of annotated data to train models based on crowd-sourced opinions about morality.
This work proposes a flexible top-down framework to steer (Large) Language Models (LMs) to perform moral reasoning with well-established moral theories from interdisciplinary research.
arXiv Detail & Related papers (2023-08-29T15:57:32Z) - From Instructions to Intrinsic Human Values -- A Survey of Alignment
Goals for Big Models [48.326660953180145]
We conduct a survey of different alignment goals in existing work and trace their evolution paths to help identify the most essential goal.
Our analysis reveals a goal transformation from fundamental abilities to value orientation, indicating the potential of intrinsic human values as the alignment goal for enhanced LLMs.
arXiv Detail & Related papers (2023-08-23T09:11:13Z) - From computational ethics to morality: how decision-making algorithms
can help us understand the emergence of moral principles, the existence of an
optimal behaviour and our ability to discover it [0.0]
This paper adds to the efforts of evolutionary ethics to naturalize morality by providing insights derived from a computational ethics view.
We propose a stylized model of human decision-making, which is based on Reinforcement Learning.
arXiv Detail & Related papers (2023-07-20T14:39:08Z) - Modeling Moral Choices in Social Dilemmas with Multi-Agent Reinforcement
Learning [4.2050490361120465]
A bottom-up learning approach may be more appropriate for studying and developing ethical behavior in AI agents.
We present a systematic analysis of the choices made by intrinsically-motivated RL agents whose rewards are based on moral theories.
We analyze the impact of different types of morality on the emergence of cooperation, defection or exploitation.
arXiv Detail & Related papers (2023-01-20T09:36:42Z) - ClarifyDelphi: Reinforced Clarification Questions with Defeasibility
Rewards for Social and Moral Situations [81.70195684646681]
We present ClarifyDelphi, an interactive system that learns to ask clarification questions.
We posit that questions whose potential answers lead to diverging moral judgments are the most informative.
Our work is ultimately inspired by studies in cognitive science that have investigated the flexibility in moral cognition.
arXiv Detail & Related papers (2022-12-20T16:33:09Z) - The Moral Foundations Reddit Corpus [3.0320832388397827]
Moral framing and sentiment can affect a variety of online and offline behaviors.
We present the Moral Foundations Reddit Corpus, a collection of 16,123 Reddit comments curated from 12 distinct subreddits.
arXiv Detail & Related papers (2022-08-10T20:08:10Z) - Learning to Adapt Domain Shifts of Moral Values via Instance Weighting [74.94940334628632]
Classifying moral values in user-generated text from social media is critical to understanding community cultures.
Moral values and language usage can change across social movements.
We propose a neural adaptation framework via instance weighting to improve cross-domain classification tasks (see the sketch after this list).
arXiv Detail & Related papers (2022-04-15T18:15:41Z) - Identifying Morality Frames in Political Tweets using Relational
Learning [27.047907641503762]
Moral sentiment is motivated by its targets, which can correspond to individuals or collective entities.
We introduce morality frames, a representation framework for organizing moral attitudes directed at different entities.
We propose a relational learning model to predict moral attitudes towards entities and moral foundations jointly.
arXiv Detail & Related papers (2021-09-09T19:48:57Z) - Aligning AI With Shared Human Values [85.2824609130584]
We introduce the ETHICS dataset, a new benchmark that spans concepts in justice, well-being, duties, virtues, and commonsense morality.
We find that current language models have a promising but incomplete ability to predict basic human ethical judgements.
Our work shows that progress can be made on machine ethics today, and it provides a steppingstone toward AI that is aligned with human values.
arXiv Detail & Related papers (2020-08-05T17:59:16Z) - Text-based inference of moral sentiment change [11.188112005462536]
We present a text-based framework for investigating moral sentiment change of the public via longitudinal corpora.
We build our methodology by exploring moral biases learned from diachronic word embeddings.
Our work offers opportunities for applying natural language processing toward characterizing moral sentiment change in society.
arXiv Detail & Related papers (2020-01-20T18:52:45Z)
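
The instance-weighting mechanism referenced in the domain-shift entry above can be sketched briefly. This is only an illustration of the general technique, not that paper's implementation: the per-example weights below are hand-picked, whereas the cited framework learns them.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy source-domain data: two features, binary moral-value label.
X = np.array([[1.0, 0.2], [0.9, 0.1], [0.1, 0.8], [0.2, 0.9]])
y = np.array([0, 0, 1, 1])

# Hypothetical similarity of each source example to the target domain
# (learned in the cited paper; fixed here for illustration).
similarity = np.array([0.9, 0.2, 0.8, 0.3])

# Instance weighting: sample_weight scales each example's contribution
# to the loss, so target-like source examples dominate training.
clf = LogisticRegression().fit(X, y, sample_weight=similarity)
print(clf.predict(X))
```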
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.