MOKA: Moral Knowledge Augmentation for Moral Event Extraction
- URL: http://arxiv.org/abs/2311.09733v2
- Date: Thu, 23 May 2024 01:53:15 GMT
- Title: MOKA: Moral Knowledge Augmentation for Moral Event Extraction
- Authors: Xinliang Frederick Zhang, Winston Wu, Nick Beauchamp, Lu Wang
- Abstract summary: News media often strive to minimize explicit moral language in news articles, yet most articles are dense with moral values as expressed through the reported events themselves.
To study this phenomenon, we annotate a new dataset, MORAL EVENTS, consisting of 5,494 structured event annotations on 474 news articles by diverse US media across the political spectrum.
We propose MOKA, a moral event extraction framework with MOral Knowledge Augmentation, which leverages knowledge derived from moral words and moral scenarios to produce structural representations of morality-bearing events.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: News media often strive to minimize explicit moral language in news articles, yet most articles are dense with moral values as expressed through the reported events themselves. However, values that are reflected in the intricate dynamics among participating entities and moral events are far more challenging for most NLP systems to detect, including LLMs. To study this phenomenon, we annotate a new dataset, MORAL EVENTS, consisting of 5,494 structured event annotations on 474 news articles by diverse US media across the political spectrum. We further propose MOKA, a moral event extraction framework with MOral Knowledge Augmentation, which leverages knowledge derived from moral words and moral scenarios to produce structural representations of morality-bearing events. Experiments show that MOKA outperforms competitive baselines across three moral event understanding tasks. Further analysis shows even ostensibly nonpartisan media engage in the selective reporting of moral events. Our data and codebase are available at https://github.com/launchnlp/MOKA.
Related papers
- EMONA: Event-level Moral Opinions in News Articles [14.898581862558112]
This paper initiates a new task to understand moral opinions towards events in news articles.
We have created a new dataset, EMONA, and annotated event-level moral opinions in news articles.
arXiv Detail & Related papers (2024-04-02T07:57:19Z)
- MoralBERT: A Fine-Tuned Language Model for Capturing Moral Values in Social Discussions [4.747987317906765]
Moral values play a fundamental role in how we evaluate information, make decisions, and form judgements around important social issues.
Recent advances in Natural Language Processing (NLP) show that moral values can be gauged in human-generated textual content.
This paper introduces MoralBERT, a range of language representation models fine-tuned to capture moral sentiment in social discourse.
arXiv Detail & Related papers (2024-03-12T14:12:59Z)
- Eagle: Ethical Dataset Given from Real Interactions [74.7319697510621]
We create datasets extracted from real interactions between ChatGPT and users that exhibit social biases, toxicity, and immoral content.
Our experiments show that Eagle captures complementary aspects, not covered by existing datasets proposed for evaluation and mitigation of such ethical challenges.
arXiv Detail & Related papers (2024-02-22T03:46:02Z)
- What Makes it Ok to Set a Fire? Iterative Self-distillation of Contexts and Rationales for Disambiguating Defeasible Social and Moral Situations [48.686872351114964]
Moral or ethical judgments rely heavily on the specific contexts in which they occur.
We introduce defeasible moral reasoning: a task to provide grounded contexts that make an action more or less morally acceptable.
We distill a high-quality dataset of 1.2M entries of contextualizations and rationales for 115K defeasible moral actions.
arXiv Detail & Related papers (2023-10-24T00:51:29Z)
- Moral Foundations of Large Language Models [6.6445242437134455]
Moral foundations theory (MFT) is a psychological assessment tool that decomposes human moral reasoning into five factors.
As large language models (LLMs) are trained on datasets collected from the internet, they may reflect the biases that are present in such corpora.
This paper uses MFT as a lens to analyze whether popular LLMs have acquired a bias towards a particular set of moral values.
arXiv Detail & Related papers (2023-10-23T20:05:37Z)
- Rethinking Machine Ethics -- Can LLMs Perform Moral Reasoning through the Lens of Moral Theories? [78.3738172874685]
Making moral judgments is an essential step toward developing ethical AI systems.
Prevalent approaches are mostly implemented in a bottom-up manner, which uses a large set of annotated data to train models based on crowd-sourced opinions about morality.
This work proposes a flexible top-down framework to steer (Large) Language Models (LMs) to perform moral reasoning with well-established moral theories from interdisciplinary research.
arXiv Detail & Related papers (2023-08-29T15:57:32Z)
- ClarifyDelphi: Reinforced Clarification Questions with Defeasibility Rewards for Social and Moral Situations [81.70195684646681]
We present ClarifyDelphi, an interactive system that learns to ask clarification questions.
We posit that questions whose potential answers lead to diverging moral judgments are the most informative.
Our work is ultimately inspired by studies in cognitive science that have investigated the flexibility in moral cognition.
arXiv Detail & Related papers (2022-12-20T16:33:09Z)
- The Moral Foundations Reddit Corpus [3.0320832388397827]
Moral framing and sentiment can affect a variety of online and offline behaviors.
We present the Moral Foundations Reddit Corpus, a collection of 16,123 Reddit comments curated from 12 distinct subreddits.
arXiv Detail & Related papers (2022-08-10T20:08:10Z)
- A Corpus for Understanding and Generating Moral Stories [84.62366141696901]
We present STORAL, a new dataset of Chinese and English human-written moral stories.
We propose two understanding tasks and two generation tasks to assess these abilities of machines.
arXiv Detail & Related papers (2022-04-20T13:12:36Z)
- An unsupervised framework for tracing textual sources of moral change [17.010859995410556]
We present a novel framework for tracing textual sources of moral change toward entities through time.
We evaluate our framework on a diverse set of data ranging from social media to news articles.
We show that our framework not only captures fine-grained human moral judgments, but also identifies coherent source topics of moral change triggered by historical events.
arXiv Detail & Related papers (2021-09-01T20:35:33Z)
- Scruples: A Corpus of Community Ethical Judgments on 32,000 Real-Life Anecdotes [72.64975113835018]
Motivated by descriptive ethics, we investigate a novel, data-driven approach to machine ethics.
We introduce Scruples, the first large-scale dataset with 625,000 ethical judgments over 32,000 real-life anecdotes.
Our dataset presents a major challenge to state-of-the-art neural language models, leaving significant room for improvement.
arXiv Detail & Related papers (2020-08-20T17:34:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the accuracy of the listed information and is not responsible for any consequences of its use.