Moral Stories: Situated Reasoning about Norms, Intents, Actions, and their Consequences
- URL: http://arxiv.org/abs/2012.15738v1
- Date: Thu, 31 Dec 2020 17:28:01 GMT
- Title: Moral Stories: Situated Reasoning about Norms, Intents, Actions, and their Consequences
- Authors: Denis Emelin, Ronan Le Bras, Jena D. Hwang, Maxwell Forbes, Yejin Choi
- Abstract summary: We investigate whether contemporary NLG models can function as behavioral priors for systems deployed in social settings.
We introduce 'Moral Stories', a crowd-sourced dataset of structured, branching narratives for the study of grounded, goal-oriented social reasoning.
- Score: 36.884156839960184
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In social settings, much of human behavior is governed by unspoken rules of
conduct. For artificial systems to be fully integrated into social
environments, adherence to such norms is a central prerequisite. We investigate
whether contemporary NLG models can function as behavioral priors for systems
deployed in social settings by generating action hypotheses that achieve
predefined goals under moral constraints. Moreover, we examine if models can
anticipate likely consequences of (im)moral actions, or explain why certain
actions are preferable by generating relevant norms. For this purpose, we
introduce 'Moral Stories', a crowd-sourced dataset of structured, branching
narratives for the study of grounded, goal-oriented social reasoning. Finally,
we propose decoding strategies that effectively combine multiple expert models
to significantly improve the quality of generated actions, consequences, and
norms compared to strong baselines, e.g., through abductive reasoning.
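The abstract mentions decoding strategies that combine multiple expert models to improve generated actions, consequences, and norms. A minimal sketch of one common way to do this, assuming a weighted product-of-experts over per-token log-probabilities (the model names, toy vocabulary, and weights below are illustrative, not the paper's actual implementation):

```python
import math

def combine_experts(token_logprobs_per_expert, weights):
    """Combine experts by a weighted sum of log-probabilities per token
    (equivalently, a weighted product of probabilities)."""
    vocab = token_logprobs_per_expert[0].keys()
    return {
        tok: sum(w * lp[tok] for w, lp in zip(weights, token_logprobs_per_expert))
        for tok in vocab
    }

# Toy next-token scores over a 3-word vocabulary from two "experts":
# a base generator and a hypothetical norm-aware model.
generator = {"help": math.log(0.5), "steal": math.log(0.4), "wait": math.log(0.1)}
norm_model = {"help": math.log(0.7), "steal": math.log(0.05), "wait": math.log(0.25)}

combined = combine_experts([generator, norm_model], weights=[1.0, 0.5])
best = max(combined, key=combined.get)
# "steal" is likely under the generator alone but demoted by the norm expert,
# so the combined distribution prefers "help".
```

In practice the same combination is applied at every decoding step over the full vocabulary, with the expert weight controlling how strongly normative preferences steer generation.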
Related papers
- The Goofus & Gallant Story Corpus for Practical Value Alignment [2.0938191327156037]
Values or principles are key elements of human society that influence people to behave and function according to an accepted standard set of social rules.
As AI systems are becoming ubiquitous in human society, it is a major concern that they could violate these norms or values and potentially cause harm.
This work presents a multi-modal dataset illustrating normative and non-normative behavior in real-life situations.
arXiv Detail & Related papers (2025-01-16T17:58:58Z)
- A Grounded Observer Framework for Establishing Guardrails for Foundation Models in Socially Sensitive Domains [1.9116784879310025]
Given the complexities of foundation models, traditional techniques for constraining agent behavior cannot be directly applied.
We propose a grounded observer framework for constraining foundation model behavior that offers both behavioral guarantees and real-time variability.
arXiv Detail & Related papers (2024-12-23T22:57:05Z)
- ClarityEthic: Explainable Moral Judgment Utilizing Contrastive Ethical Insights from Large Language Models [30.301864398780648]
We introduce a novel moral judgment approach called ClarityEthic that leverages LLMs' reasoning ability and contrastive learning to uncover relevant social norms.
Our method outperforms state-of-the-art approaches in moral judgment tasks.
arXiv Detail & Related papers (2024-12-17T12:22:44Z)
- "One-Size-Fits-All"? Examining Expectations around What Constitute "Fair" or "Good" NLG System Behaviors [57.63649797577999]
We conduct case studies in which we perturb different types of identity-related language features (names, roles, locations, dialect, and style) in NLG system inputs.
We find that motivations for adaptation include social norms, cultural differences, feature-specific information, and accommodation.
In contrast, motivations for invariance include perspectives that favor prescriptivism, view adaptation as unnecessary or too difficult for NLG systems to do appropriately, and are wary of false assumptions.
arXiv Detail & Related papers (2023-10-23T23:00:34Z)
- Rethinking Machine Ethics -- Can LLMs Perform Moral Reasoning through the Lens of Moral Theories? [78.3738172874685]
Making moral judgments is an essential step toward developing ethical AI systems.
Prevalent approaches are mostly implemented in a bottom-up manner, which uses a large set of annotated data to train models based on crowd-sourced opinions about morality.
This work proposes a flexible top-down framework to steer (Large) Language Models (LMs) to perform moral reasoning with well-established moral theories from interdisciplinary research.
arXiv Detail & Related papers (2023-08-29T15:57:32Z)
- Modeling Moral Choices in Social Dilemmas with Multi-Agent Reinforcement Learning [4.2050490361120465]
A bottom-up learning approach may be more appropriate for studying and developing ethical behavior in AI agents.
We present a systematic analysis of the choices made by intrinsically-motivated RL agents whose rewards are based on moral theories.
We analyze the impact of different types of morality on the emergence of cooperation, defection or exploitation.
arXiv Detail & Related papers (2023-01-20T09:36:42Z)
- Aligning to Social Norms and Values in Interactive Narratives [89.82264844526333]
We focus on creating agents that act in alignment with socially beneficial norms and values in interactive narratives or text-based games.
We introduce the GALAD agent that uses the social commonsense knowledge present in specially trained language models to contextually restrict its action space to only those actions that are aligned with socially beneficial values.
arXiv Detail & Related papers (2022-05-04T09:54:33Z)
- Social Chemistry 101: Learning to Reason about Social and Moral Norms [73.23298385380636]
We present Social Chemistry, a new conceptual formalism to study people's everyday social norms and moral judgments.
Social-Chem-101 is a large-scale corpus that catalogs 292k rules-of-thumb.
Our model framework, Neural Norm Transformer, learns and generalizes Social-Chem-101 to successfully reason about previously unseen situations.
arXiv Detail & Related papers (2020-11-01T20:16:45Z)
- Scruples: A Corpus of Community Ethical Judgments on 32,000 Real-Life Anecdotes [72.64975113835018]
Motivated by descriptive ethics, we investigate a novel, data-driven approach to machine ethics.
We introduce Scruples, the first large-scale dataset with 625,000 ethical judgments over 32,000 real-life anecdotes.
Our dataset presents a major challenge to state-of-the-art neural language models, leaving significant room for improvement.
arXiv Detail & Related papers (2020-08-20T17:34:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.