AITA Generating Moral Judgements of the Crowd with Reasoning
- URL: http://arxiv.org/abs/2310.18336v1
- Date: Sat, 21 Oct 2023 10:27:22 GMT
- Title: AITA Generating Moral Judgements of the Crowd with Reasoning
- Authors: Osama Bsher and Ameer Sabri
- Abstract summary: The project aims to generate comments with moral reasoning for stories with moral dilemmas using the AITA subreddit as a dataset.
We will leverage the vast amount of data on the forum with the goal of generating coherent comments that align with the norms and values of the AITA community.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Morality is a fundamental aspect of human behavior and ethics, influencing
how we interact with each other and the world around us. When faced with a
moral dilemma, a person's ability to make clear moral judgments can be clouded.
Owing to factors such as personal biases, emotions, and situational pressures,
people can find it difficult to decide on their best course of action. The
AmITheAsshole (AITA) subreddit is a forum on the social media platform Reddit
that helps people get clarity and objectivity on their predicaments. In the
forum people post anecdotes about moral dilemmas they are facing in their
lives, seeking validation for their actions or advice on how to navigate the
situation from the community. The morality of the actions in each post is
classified, based on the collective opinion of the community, mainly into two
labels: "Not The Asshole" (NTA) and "You Are The Asshole" (YTA). This project
aims to generate comments with moral reasoning for stories with moral dilemmas
using the AITA subreddit as a dataset. While past literature has explored the
classification of posts into labels (Alhassan et al., 2022), the generation of
comments remains a novel and challenging task. It involves understanding the
complex social and ethical considerations in each situation. To address this
challenge, we will leverage the vast amount of data on the forum with the goal
of generating coherent comments that align with the norms and values of the
AITA community. In this endeavor, we aim to evaluate state-of-the-art seq2seq
text generation models for their ability to make moral judgments similarly to
humans, ultimately producing concise comments that provide clear moral stances
and advice to the poster.
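The seq2seq framing described above can be sketched as follows. This is an illustrative example, not the authors' implementation: the function name `make_pair`, the prompt layout, and the convention of prepending the verdict to the target comment are all assumptions.

```python
# Hypothetical sketch: framing AITA comment generation as a seq2seq task.
# Each training pair maps a post (source) to a verdict-prefixed comment (target).

VERDICTS = {"NTA": "Not The Asshole", "YTA": "You Are The Asshole"}

def make_pair(title: str, body: str, verdict: str, comment: str) -> tuple[str, str]:
    """Build one (source, target) pair for a seq2seq text generation model.

    The source concatenates the post title and body; the target prepends the
    community verdict so the model learns to state a clear stance before its
    moral reasoning.
    """
    if verdict not in VERDICTS:
        raise ValueError(f"unknown verdict: {verdict}")
    source = f"AITA post: {title}\n{body}"
    target = f"{verdict}. {comment}"
    return source, target

src, tgt = make_pair(
    "AITA for leaving the party early?",
    "I left my friend's party after an hour because I felt unwell.",
    "NTA",
    "Your health comes first; a good friend will understand.",
)
print(tgt)  # → NTA. Your health comes first; a good friend will understand.
```

Pairs in this form can be fed directly to a standard encoder-decoder model (e.g. a BART- or T5-style architecture) for fine-tuning; the verdict prefix also makes the generated stance trivially extractable for comparison against the community label.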
Related papers
- Morality is Non-Binary: Building a Pluralist Moral Sentence Embedding Space using Contrastive Learning [4.925187725973777]
Pluralist moral philosophers argue that human morality can be deconstructed into a finite number of elements.
We build a pluralist moral sentence embedding space via a state-of-the-art contrastive learning approach.
Our results show that a pluralist approach to morality can be captured in an embedding space.
arXiv Detail & Related papers (2024-01-30T18:15:25Z)
- Moral Sparks in Social Media Narratives [14.025768295979184]
We examine interactions on social media to understand human moral judgments in real-life ethical scenarios.
Specifically, we examine posts from a popular Reddit subreddit (i.e., a subcommunity) called r/AmITheAsshole.
arXiv Detail & Related papers (2023-10-30T05:03:26Z)
- What Makes it Ok to Set a Fire? Iterative Self-distillation of Contexts and Rationales for Disambiguating Defeasible Social and Moral Situations [48.686872351114964]
Moral or ethical judgments rely heavily on the specific contexts in which they occur.
We introduce defeasible moral reasoning: a task to provide grounded contexts that make an action more or less morally acceptable.
We distill a high-quality dataset of 1.2M entries of contextualizations and rationales for 115K defeasible moral actions.
arXiv Detail & Related papers (2023-10-24T00:51:29Z)
- Rethinking Machine Ethics -- Can LLMs Perform Moral Reasoning through the Lens of Moral Theories? [78.3738172874685]
Making moral judgments is an essential step toward developing ethical AI systems.
Prevalent approaches are mostly implemented in a bottom-up manner, which uses a large set of annotated data to train models based on crowd-sourced opinions about morality.
This work proposes a flexible top-down framework to steer (Large) Language Models (LMs) to perform moral reasoning with well-established moral theories from interdisciplinary research.
arXiv Detail & Related papers (2023-08-29T15:57:32Z)
- ClarifyDelphi: Reinforced Clarification Questions with Defeasibility Rewards for Social and Moral Situations [81.70195684646681]
We present ClarifyDelphi, an interactive system that learns to ask clarification questions.
We posit that questions whose potential answers lead to diverging moral judgments are the most informative.
Our work is ultimately inspired by studies in cognitive science that have investigated the flexibility in moral cognition.
arXiv Detail & Related papers (2022-12-20T16:33:09Z)
- When to Make Exceptions: Exploring Language Models as Accounts of Human Moral Judgment [96.77970239683475]
AI systems need to be able to understand, interpret and predict human moral judgments and decisions.
A central challenge for AI safety is capturing the flexibility of the human moral mind.
We present a novel challenge set consisting of rule-breaking question answering.
arXiv Detail & Related papers (2022-10-04T09:04:27Z)
- A Corpus for Understanding and Generating Moral Stories [84.62366141696901]
We propose two understanding tasks and two generation tasks to assess these abilities of machines.
We present STORAL, a new dataset of Chinese and English human-written moral stories.
arXiv Detail & Related papers (2022-04-20T13:12:36Z)
- Explainable Patterns for Distinction and Prediction of Moral Judgement on Reddit [8.98624781242271]
The r/AmITheAsshole forum on Reddit hosts discussions of moral issues based on concrete narratives presented by users.
We build a new dataset of comments and also investigate the classification of the posts in the forum.
arXiv Detail & Related papers (2022-01-26T19:39:52Z)
- Delphi: Towards Machine Ethics and Norms [38.8316885346292]
We identify four underlying challenges towards machine ethics and norms.
Our prototype model, Delphi, demonstrates strong promise of language-based commonsense moral reasoning.
We present Commonsense Norm Bank, a moral textbook customized for machines.
arXiv Detail & Related papers (2021-10-14T17:38:12Z)
- Aligning AI With Shared Human Values [85.2824609130584]
We introduce the ETHICS dataset, a new benchmark that spans concepts in justice, well-being, duties, virtues, and commonsense morality.
We find that current language models have a promising but incomplete ability to predict basic human ethical judgements.
Our work shows that progress can be made on machine ethics today, and it provides a steppingstone toward AI that is aligned with human values.
arXiv Detail & Related papers (2020-08-05T17:59:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.