Enhancing Stance Classification on Social Media Using Quantified Moral Foundations
- URL: http://arxiv.org/abs/2310.09848v3
- Date: Sun, 29 Sep 2024 06:34:07 GMT
- Title: Enhancing Stance Classification on Social Media Using Quantified Moral Foundations
- Authors: Hong Zhang, Quoc-Nam Nguyen, Prasanta Bhattacharya, Wei Gao, Liang Ze Wong, Brandon Siyuan Loh, Joseph J. P. Simons, Jisun An
- Abstract summary: We investigate how moral foundation dimensions can contribute to predicting an individual's stance on a given target.
We incorporate moral foundation features extracted from text, along with message semantic features, to classify stances at both message- and user-levels.
Preliminary results suggest that encoding moral foundations can enhance the performance of stance detection tasks.
- Score: 7.061680079778037
- License:
- Abstract: This study enhances stance detection on social media by incorporating deeper psychological attributes, specifically individuals' moral foundations. These theoretically derived dimensions aim to provide a comprehensive profile of an individual's moral concerns, which recent work has linked to behaviour in a range of domains, including society, politics, health, and the environment. In this paper, we investigate how moral foundation dimensions can contribute to predicting an individual's stance on a given target. Specifically, we incorporate moral foundation features extracted from text, along with message semantic features, to classify stances at both the message and user levels using both traditional machine learning models and large language models. Our preliminary results suggest that encoding moral foundations can enhance the performance of stance detection tasks and help illuminate the associations between specific moral foundations and online stances on target topics. The results highlight the importance of considering deeper psychological attributes in stance analysis and underscore the role of moral foundations in guiding online social behaviour.
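To make the feature-combination idea concrete, below is a minimal sketch (not the authors' released code) of message-level stance classification that concatenates quantified moral-foundation scores with TF-IDF semantic features. The lexicon, cue words, example messages, labels, and the choice of logistic regression are all illustrative assumptions.
```python
# Minimal sketch: combine moral-foundation features with semantic features
# for message-level stance classification. Everything below is illustrative.
import numpy as np
from scipy.sparse import hstack
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical moral-foundation lexicon: foundation -> cue words.
MF_LEXICON = {
    "care":      {"harm", "protect", "suffer"},
    "fairness":  {"fair", "justice", "equal"},
    "loyalty":   {"loyal", "betray", "solidarity"},
    "authority": {"obey", "tradition", "law"},
    "purity":    {"pure", "sacred", "disgust"},
}

def moral_foundation_scores(text: str) -> np.ndarray:
    """One crude way to 'quantify' moral foundations: normalized cue-word counts."""
    tokens = text.lower().split()
    n = max(len(tokens), 1)
    return np.array([sum(tok in cues for tok in tokens) / n for cues in MF_LEXICON.values()])

def build_features(messages, vectorizer, fit=False):
    """Concatenate semantic (TF-IDF) features with moral-foundation features."""
    tfidf = vectorizer.fit_transform(messages) if fit else vectorizer.transform(messages)
    mf = np.vstack([moral_foundation_scores(m) for m in messages])
    return hstack([tfidf, mf])

# Toy usage with made-up messages and stance labels (1 = favor, 0 = against).
train_msgs = ["we must protect the vulnerable", "this law betrays our traditions"]
train_labels = [1, 0]
vec = TfidfVectorizer()
clf = LogisticRegression().fit(build_features(train_msgs, vec, fit=True), train_labels)
print(clf.predict(build_features(["justice demands equal treatment"], vec)))
```
A user-level variant could, for example, aggregate these message-level features per author before fitting the classifier; nothing here is specific to the models or moral-foundation lexicon the paper actually uses.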
Related papers
- A Survey on Moral Foundation Theory and Pre-Trained Language Models: Current Advances and Challenges [2.435021773579434]
Moral values have deep roots in early civilizations, codified within norms and laws that regulated societal order and the common good.
The Moral Foundation Theory (MFT) is a well-established framework that identifies the core moral foundations underlying the manner in which different cultures shape individual and social lives.
Recent advancements in natural language processing, particularly Pre-trained Language Models (PLMs), have enabled the extraction and analysis of moral dimensions from textual data.
arXiv Detail & Related papers (2024-09-20T14:03:06Z) - MoralBERT: A Fine-Tuned Language Model for Capturing Moral Values in Social Discussions [4.747987317906765]
Moral values play a fundamental role in how we evaluate information, make decisions, and form judgements around important social issues.
Recent advances in Natural Language Processing (NLP) show that moral values can be gauged in human-generated textual content.
This paper introduces MoralBERT, a range of language representation models fine-tuned to capture moral sentiment in social discourse.
arXiv Detail & Related papers (2024-03-12T14:12:59Z) - Rethinking Machine Ethics -- Can LLMs Perform Moral Reasoning through the Lens of Moral Theories? [78.3738172874685]
Making moral judgments is an essential step toward developing ethical AI systems.
Prevalent approaches are mostly implemented in a bottom-up manner, using large sets of annotated data to train models based on crowd-sourced opinions about morality.
This work proposes a flexible top-down framework to steer (Large) Language Models (LMs) to perform moral reasoning with well-established moral theories from interdisciplinary research.
arXiv Detail & Related papers (2023-08-29T15:57:32Z) - Modeling Moral Choices in Social Dilemmas with Multi-Agent Reinforcement Learning [4.2050490361120465]
A bottom-up learning approach may be more appropriate for studying and developing ethical behavior in AI agents.
We present a systematic analysis of the choices made by intrinsically-motivated RL agents whose rewards are based on moral theories.
We analyze the impact of different types of morality on the emergence of cooperation, defection or exploitation.
arXiv Detail & Related papers (2023-01-20T09:36:42Z) - Self-supervised Hypergraph Representation Learning for Sociological Analysis [52.514283292498405]
We propose a fundamental methodology to support the further fusion of data mining techniques and sociological behavioral criteria.
First, we propose an effective hypergraph awareness and a fast line graph construction framework.
Second, we propose a novel hypergraph-based neural network to learn social influence flowing from users to users.
arXiv Detail & Related papers (2022-12-22T01:20:29Z) - ClarifyDelphi: Reinforced Clarification Questions with Defeasibility Rewards for Social and Moral Situations [81.70195684646681]
We present ClarifyDelphi, an interactive system that learns to ask clarification questions.
We posit that questions whose potential answers lead to diverging moral judgments are the most informative.
Our work is ultimately inspired by studies in cognitive science that have investigated the flexibility in moral cognition.
arXiv Detail & Related papers (2022-12-20T16:33:09Z) - Learning to Adapt Domain Shifts of Moral Values via Instance Weighting [74.94940334628632]
Classifying moral values in user-generated text from social media is critical to understanding community cultures.
Moral values and language usage can change across social movements.
We propose a neural adaptation framework via instance weighting to improve cross-domain classification tasks.
arXiv Detail & Related papers (2022-04-15T18:15:41Z) - Identifying Morality Frames in Political Tweets using Relational Learning [27.047907641503762]
Moral sentiment is motivated by its targets, which can correspond to individuals or collective entities.
We introduce morality frames, a representation framework for organizing moral attitudes directed at different entities.
We propose a relational learning model to predict moral attitudes towards entities and moral foundations jointly.
arXiv Detail & Related papers (2021-09-09T19:48:57Z) - Can You be More Social? Injecting Politeness and Positivity into Task-Oriented Conversational Agents [60.27066549589362]
Social language used by human agents is associated with greater user responsiveness and task completion.
The model uses a sequence-to-sequence deep learning architecture, extended with a social language understanding element.
Evaluation in terms of content preservation and social language level using both human judgment and automatic linguistic measures shows that the model can generate responses that enable agents to address users' issues in a more socially appropriate way.
arXiv Detail & Related papers (2020-12-29T08:22:48Z) - Aligning AI With Shared Human Values [85.2824609130584]
We introduce the ETHICS dataset, a new benchmark that spans concepts in justice, well-being, duties, virtues, and commonsense morality.
We find that current language models have a promising but incomplete ability to predict basic human ethical judgements.
Our work shows that progress can be made on machine ethics today, and it provides a steppingstone toward AI that is aligned with human values.
arXiv Detail & Related papers (2020-08-05T17:59:16Z) - Text-based inference of moral sentiment change [11.188112005462536]
We present a text-based framework for investigating moral sentiment change of the public via longitudinal corpora.
We build our methodology by exploring moral biases learned from diachronic word embeddings.
Our work offers opportunities for applying natural language processing toward characterizing moral sentiment change in society.
arXiv Detail & Related papers (2020-01-20T18:52:45Z)
This list is automatically generated from the titles and abstracts of the papers on this site.