The Moral Foundations Reddit Corpus
- URL: http://arxiv.org/abs/2208.05545v2
- Date: Thu, 18 Aug 2022 03:21:14 GMT
- Title: The Moral Foundations Reddit Corpus
- Authors: Jackson Trager, Alireza S. Ziabari, Aida Mostafazadeh Davani, Preni
Golazizian, Farzan Karimi-Malekabadi, Ali Omrani, Zhihe Li, Brendan Kennedy,
Nils Karl Reimer, Melissa Reyes, Kelsey Cheng, Mellow Wei, Christina
Merrifield, Arta Khosravi, Evans Alvarez, Morteza Dehghani
- Abstract summary: Moral framing and sentiment can affect a variety of online and offline behaviors.
We present the Moral Foundations Reddit Corpus, a collection of 16,123 Reddit comments curated from 12 distinct subreddits.
- Score: 3.0320832388397827
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Moral framing and sentiment can affect a variety of online and offline
behaviors, including donation, pro-environmental action, political engagement,
and even participation in violent protests. Various computational methods in
Natural Language Processing (NLP) have been used to detect moral sentiment from
textual data, but in order to achieve better performance on such subjective
tasks, large sets of hand-annotated training data are needed. Previous corpora
annotated for moral sentiment have proven valuable, and have generated new
insights both within NLP and across the social sciences, but have been limited
to Twitter. To facilitate improving our understanding of the role of moral
rhetoric, we present the Moral Foundations Reddit Corpus, a collection of
16,123 Reddit comments that have been curated from 12 distinct subreddits,
hand-annotated by at least three trained annotators for 8 categories of moral
sentiment (i.e., Care, Proportionality, Equality, Purity, Authority, Loyalty,
Thin Morality, Implicit/Explicit Morality) based on the updated Moral
Foundations Theory (MFT) framework. We use a range of methodologies to provide
baseline moral-sentiment classification results for this new corpus, e.g.,
cross-domain classification and knowledge transfer.
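The abstract does not spell out the baseline setups, so the following is only a minimal sketch of one plausible baseline: fine-tuning a generic pretrained transformer for multi-label classification over the eight annotated categories. The file name, column names, label keys, and hyperparameters are illustrative assumptions, not details taken from the paper.

```python
# Minimal multi-label baseline sketch for the 8 MFRC moral-sentiment categories.
# Assumes a CSV with a "text" column and one 0/1 column per category; the file
# name, column names, and hyperparameters are illustrative, not from the paper.
import pandas as pd
import torch
from torch.utils.data import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

LABELS = ["care", "proportionality", "equality", "purity",
          "authority", "loyalty", "thin_morality", "implicit_explicit"]

class MFRCDataset(Dataset):
    """Wraps tokenized comments and their binary label vectors."""
    def __init__(self, frame, tokenizer):
        self.enc = tokenizer(list(frame["text"]), truncation=True,
                             padding=True, max_length=128)
        self.labels = frame[LABELS].values.astype("float32")

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, i):
        item = {k: torch.tensor(v[i]) for k, v in self.enc.items()}
        item["labels"] = torch.tensor(self.labels[i])   # floats for BCE loss
        return item

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(LABELS),
    problem_type="multi_label_classification")          # sigmoid + BCE head

df = pd.read_csv("mfrc_annotations.csv")                # hypothetical file name
train_set = MFRCDataset(df, tokenizer)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="mfrc_baseline",
                           num_train_epochs=3,
                           per_device_train_batch_size=16),
    train_dataset=train_set,
)
trainer.train()
```

A cross-domain evaluation in the paper's sense would train on comments from some subreddits and test on held-out ones; that split is omitted from this sketch.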
Related papers
- The Moral Foundations Weibo Corpus [0.0]
Moral sentiments influence both online and offline environments, shaping behavioral styles and interaction patterns.
Existing corpora, while valuable, often face linguistic limitations.
This corpus consists of 25,671 Chinese comments on Weibo, encompassing six diverse topic areas.
arXiv Detail & Related papers (2024-11-14T17:32:03Z)
- MoralBERT: A Fine-Tuned Language Model for Capturing Moral Values in Social Discussions [4.747987317906765]
Moral values play a fundamental role in how we evaluate information, make decisions, and form judgements around important social issues.
Recent advances in Natural Language Processing (NLP) show that moral values can be gauged in human-generated textual content.
This paper introduces MoralBERT, a range of language representation models fine-tuned to capture moral sentiment in social discourse.
arXiv Detail & Related papers (2024-03-12T14:12:59Z)
- Morality is Non-Binary: Building a Pluralist Moral Sentence Embedding Space using Contrastive Learning [4.925187725973777]
Pluralist moral philosophers argue that human morality can be deconstructed into a finite number of elements.
We build a pluralist moral sentence embedding space via a state-of-the-art contrastive learning approach.
Our results show that a pluralist approach to morality can be captured in an embedding space.
arXiv Detail & Related papers (2024-01-30T18:15:25Z)
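The entry above only names the technique, so here is a minimal, generic sketch of supervised contrastive learning over sentence embeddings: comments sharing a moral-foundation label are pulled together and all other in-batch comments are pushed apart. The encoder outputs, labels, and temperature below are random placeholders, not the paper's setup.

```python
# Supervised contrastive loss sketch: embeddings with the same moral-foundation
# label are positives; everything else in the batch serves as a negative.
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(embeddings, labels, temperature=0.07):
    """embeddings: (N, d) sentence vectors; labels: (N,) foundation ids."""
    z = F.normalize(embeddings, dim=1)                  # cosine-similarity space
    sim = z @ z.T / temperature                         # (N, N) similarity logits
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float("-inf"))     # never contrast with self
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_log_prob = log_prob.masked_fill(~pos_mask, 0.0) # keep only positives
    pos_counts = pos_mask.sum(dim=1).clamp(min=1)
    loss = -pos_log_prob.sum(dim=1) / pos_counts        # mean log-prob of positives
    return loss[pos_mask.any(dim=1)].mean()             # skip anchors w/o positives

# Toy usage with random vectors standing in for encoder outputs:
emb = torch.randn(8, 768)
lab = torch.tensor([0, 0, 1, 1, 2, 2, 3, 3])
print(supervised_contrastive_loss(emb, lab))
```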
- What Makes it Ok to Set a Fire? Iterative Self-distillation of Contexts and Rationales for Disambiguating Defeasible Social and Moral Situations [48.686872351114964]
Moral or ethical judgments rely heavily on the specific contexts in which they occur.
We introduce defeasible moral reasoning: a task to provide grounded contexts that make an action more or less morally acceptable.
We distill a high-quality dataset of 1.2M entries of contextualizations and rationales for 115K defeasible moral actions.
arXiv Detail & Related papers (2023-10-24T00:51:29Z)
- Rethinking Machine Ethics -- Can LLMs Perform Moral Reasoning through the Lens of Moral Theories? [78.3738172874685]
Making moral judgments is an essential step toward developing ethical AI systems.
Prevalent approaches are mostly implemented in a bottom-up manner, which uses a large set of annotated data to train models based on crowd-sourced opinions about morality.
This work proposes a flexible top-down framework to steer (Large) Language Models (LMs) to perform moral reasoning with well-established moral theories from interdisciplinary research.
arXiv Detail & Related papers (2023-08-29T15:57:32Z)
- MoralDial: A Framework to Train and Evaluate Moral Dialogue Systems via Moral Discussions [71.25236662907056]
A moral dialogue system aligned with users' values could enhance conversation engagement and user connections.
We propose a framework, MoralDial, to train and evaluate moral dialogue systems.
arXiv Detail & Related papers (2022-12-21T02:21:37Z)
- ClarifyDelphi: Reinforced Clarification Questions with Defeasibility Rewards for Social and Moral Situations [81.70195684646681]
We present ClarifyDelphi, an interactive system that learns to ask clarification questions.
We posit that questions whose potential answers lead to diverging moral judgments are the most informative.
Our work is ultimately inspired by studies in cognitive science that have investigated the flexibility in moral cognition.
arXiv Detail & Related papers (2022-12-20T16:33:09Z)
- A Corpus for Understanding and Generating Moral Stories [84.62366141696901]
We propose two understanding tasks and two generation tasks to assess these abilities of machines.
We present STORAL, a new dataset of Chinese and English human-written moral stories.
arXiv Detail & Related papers (2022-04-20T13:12:36Z)
- Learning to Adapt Domain Shifts of Moral Values via Instance Weighting [74.94940334628632]
Classifying moral values in user-generated text from social media is critical to understanding community cultures.
Moral values and language usage can change across the social movements.
We propose a neural adaptation framework via instance weighting to improve cross-domain classification tasks.
arXiv Detail & Related papers (2022-04-15T18:15:41Z)
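Instance weighting can be realized in several ways; the sketch below shows one common recipe, not necessarily the one in the entry above: a domain discriminator scores how target-like each source comment is, and its density-ratio estimate becomes a sample weight when training the task classifier. Texts, labels, and features are toy placeholders.

```python
# Instance-weighting sketch for cross-domain moral-value classification.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

source_texts = ["donate to the shelter", "the law must be obeyed"]          # labeled source domain
source_labels = [1, 0]                                                       # toy binary moral label
target_texts = ["we should protect the vulnerable", "stand by your team"]   # unlabeled target domain

vec = TfidfVectorizer().fit(source_texts + target_texts)
Xs, Xt = vec.transform(source_texts), vec.transform(target_texts)

# Domain discriminator: 0 = source, 1 = target.
domain_clf = LogisticRegression().fit(
    np.vstack([Xs.toarray(), Xt.toarray()]),
    [0] * len(source_texts) + [1] * len(target_texts))

# P(target | x) / P(source | x) approximates the density ratio used as a weight.
p_target = domain_clf.predict_proba(Xs.toarray())[:, 1]
weights = p_target / (1 - p_target + 1e-8)

# Train the task classifier on source data, weighted toward target-like examples.
task_clf = LogisticRegression().fit(Xs, source_labels, sample_weight=weights)
print(weights)
```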
- Identifying Morality Frames in Political Tweets using Relational Learning [27.047907641503762]
Moral sentiment is motivated by its targets, which can correspond to individuals or collective entities.
We introduce morality frames, a representation framework for organizing moral attitudes directed at different entities.
We propose a relational learning model to predict moral attitudes towards entities and moral foundations jointly.
arXiv Detail & Related papers (2021-09-09T19:48:57Z)
- Text-based inference of moral sentiment change [11.188112005462536]
We present a text-based framework for investigating moral sentiment change of the public via longitudinal corpora.
We build our methodology by exploring moral biases learned from diachronic word embeddings.
Our work offers opportunities for applying natural language processing toward characterizing moral sentiment change in society.
arXiv Detail & Related papers (2020-01-20T18:52:45Z)
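One standard way to probe moral bias in diachronic embeddings, sketched below with random placeholder vectors, is to define a moral axis from seed words and track how strongly a concept projects onto that axis across decades. The seed lists and axis construction are illustrative assumptions, not the exact procedure of the paper above.

```python
# Moral-axis projection sketch over diachronic word embeddings.
import numpy as np

rng = np.random.default_rng(0)
# Placeholder embeddings: in practice these would be trained per decade.
decades = {d: {w: rng.normal(size=300) for w in
               ["kind", "cruel", "slavery", "charity"]}
           for d in ["1850s", "1950s", "1990s"]}

moral_seeds, immoral_seeds = ["kind", "charity"], ["cruel"]

def moral_score(emb, word):
    """Cosine of `word` with the (moral - immoral) seed-centroid direction."""
    axis = (np.mean([emb[w] for w in moral_seeds], axis=0)
            - np.mean([emb[w] for w in immoral_seeds], axis=0))
    v = emb[word]
    return float(v @ axis / (np.linalg.norm(v) * np.linalg.norm(axis)))

for decade, emb in decades.items():
    print(decade, round(moral_score(emb, "slavery"), 3))
```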
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences arising from its use.