Towards Few-Shot Identification of Morality Frames using In-Context
Learning
- URL: http://arxiv.org/abs/2302.02029v1
- Date: Fri, 3 Feb 2023 23:26:59 GMT
- Title: Towards Few-Shot Identification of Morality Frames using In-Context
Learning
- Authors: Shamik Roy, Nishanth Sridhar Nakshatri and Dan Goldwasser
- Abstract summary: We study few-shot identification of a psycho-linguistic concept, Morality Frames, using Large Language Models (LLMs).
Morality frames are a representation framework that provides a holistic view of the moral sentiment expressed in text.
We propose prompting-based approaches using pretrained Large Language Models for identification of morality frames, relying on few-shot exemplars.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Data scarcity is a common problem in NLP, especially when the annotation
pertains to nuanced socio-linguistic concepts that require specialized
knowledge. As a result, few-shot identification of these concepts is desirable.
Few-shot in-context learning using pre-trained Large Language Models (LLMs) has
recently been applied successfully to many NLP tasks. In this paper, we study
few-shot identification of a psycho-linguistic concept, Morality Frames (Roy et
al., 2021), using LLMs. Morality frames are a representation framework that
provides a holistic view of the moral sentiment expressed in text, identifying
the relevant moral foundation (Haidt and Graham, 2007) and at a finer level of
granularity, the moral sentiment expressed towards the entities mentioned in
the text. Previous studies relied on human annotation to identify morality
frames in text, which is expensive. In this paper, we propose prompting-based
approaches using pretrained Large Language Models for identification of
morality frames, relying only on few-shot exemplars. We compare our models'
performance with few-shot RoBERTa and find promising results.
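To make the prompting recipe concrete, here is a minimal sketch of few-shot in-context learning for morality frame identification: labeled exemplars are serialized into a prompt, an unlabeled query is appended, and a pretrained LLM completes the label. The moral foundation inventory follows Moral Foundations Theory (Haidt and Graham, 2007); the exemplar sentence, entities, and prompt wording are illustrative assumptions, not the authors' actual prompts or annotated data.

```python
# Sketch of few-shot prompt construction for morality frame identification.
# Assumptions: exemplar text, entities, and prompt format are hypothetical;
# only the foundation labels (Haidt and Graham, 2007) come from the source.
from dataclasses import dataclass

MORAL_FOUNDATIONS = [
    "care/harm",
    "fairness/cheating",
    "loyalty/betrayal",
    "authority/subversion",
    "purity/degradation",
]

@dataclass
class MoralityFrame:
    """A morality frame: the text's moral foundation plus the moral
    sentiment (positive/negative) expressed towards each entity."""
    text: str
    foundation: str         # one of MORAL_FOUNDATIONS
    entity_sentiments: dict  # entity -> "positive" | "negative"

# Hypothetical exemplar; real ones would come from annotated data
# such as Roy et al. (2021).
EXEMPLARS = [
    MoralityFrame(
        text="The senator fought to protect families from losing coverage.",
        foundation="care/harm",
        entity_sentiments={"the senator": "positive", "families": "positive"},
    ),
]

def build_prompt(query_text: str) -> str:
    """Concatenate labeled exemplars and the unlabeled query,
    in-context-learning style."""
    parts = [
        "Identify the moral foundation of the text and the moral "
        "sentiment expressed towards each entity.\n"
    ]
    for ex in EXEMPLARS:
        entities = "; ".join(f"{e}: {s}" for e, s in ex.entity_sentiments.items())
        parts.append(
            f"Text: {ex.text}\n"
            f"Moral foundation: {ex.foundation}\n"
            f"Entity sentiments: {entities}\n"
        )
    parts.append(f"Text: {query_text}\nMoral foundation:")
    return "\n".join(parts)

if __name__ == "__main__":
    # The resulting string can be sent to any pretrained LLM completion
    # endpoint; the model's continuation is parsed back into a frame.
    print(build_prompt("The company cheated its workers out of overtime pay."))
```

In this setup only the prompt changes between tasks; no gradient updates are needed, which is what makes the approach attractive when annotated data for nuanced concepts like morality frames is scarce.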
Related papers
- The Moral Foundations Weibo Corpus
Moral sentiments influence both online and offline environments, shaping behavioral styles and interaction patterns.
Existing corpora, while valuable, often face linguistic limitations.
This corpus consists of 25,671 Chinese comments on Weibo, encompassing six diverse topic areas.
arXiv Detail & Related papers (2024-11-14T17:32:03Z)
- MoralBench: Moral Evaluation of LLMs
This paper introduces a novel benchmark designed to measure and compare the moral reasoning capabilities of large language models (LLMs).
We present the first comprehensive dataset specifically curated to probe the moral dimensions of LLM outputs.
Our methodology involves a multi-faceted approach, combining quantitative analysis with qualitative insights from ethics scholars to ensure a thorough evaluation of model performance.
arXiv Detail & Related papers (2024-06-06T18:15:01Z)
- Exploring and steering the moral compass of Large Language Models
Large Language Models (LLMs) have become central to advancing automation and decision-making across various sectors.
This study proposes a comprehensive comparative analysis of the most advanced LLMs to assess their moral profiles.
arXiv Detail & Related papers (2024-05-27T16:49:22Z)
- MoralBERT: A Fine-Tuned Language Model for Capturing Moral Values in Social Discussions
Moral values play a fundamental role in how we evaluate information, make decisions, and form judgements around important social issues.
Recent advances in Natural Language Processing (NLP) show that moral values can be gauged in human-generated textual content.
This paper introduces MoralBERT, a range of language representation models fine-tuned to capture moral sentiment in social discourse.
arXiv Detail & Related papers (2024-03-12T14:12:59Z)
- Learning Machine Morality through Experience and Interaction
Increasing interest in ensuring safety of next-generation Artificial Intelligence (AI) systems calls for novel approaches to embedding morality into autonomous agents.
We argue that more hybrid solutions are needed to create adaptable and robust, yet more controllable and interpretable agents.
arXiv Detail & Related papers (2023-12-04T11:46:34Z)
- What Makes it Ok to Set a Fire? Iterative Self-distillation of Contexts and Rationales for Disambiguating Defeasible Social and Moral Situations
Moral or ethical judgments rely heavily on the specific contexts in which they occur.
We introduce defeasible moral reasoning: a task to provide grounded contexts that make an action more or less morally acceptable.
We distill a high-quality dataset of 1.2M entries of contextualizations and rationales for 115K defeasible moral actions.
arXiv Detail & Related papers (2023-10-24T00:51:29Z)
- Rethinking Machine Ethics -- Can LLMs Perform Moral Reasoning through the Lens of Moral Theories?
Making moral judgments is an essential step toward developing ethical AI systems.
Prevalent approaches are mostly implemented in a bottom-up manner, using large sets of annotated data to train models on crowd-sourced opinions about morality.
This work proposes a flexible top-down framework to steer (Large) Language Models (LMs) to perform moral reasoning with well-established moral theories from interdisciplinary research.
arXiv Detail & Related papers (2023-08-29T15:57:32Z)
- MoralDial: A Framework to Train and Evaluate Moral Dialogue Systems via Moral Discussions
A moral dialogue system aligned with users' values could enhance conversation engagement and user connections.
We propose a framework, MoralDial, to train and evaluate moral dialogue systems.
arXiv Detail & Related papers (2022-12-21T02:21:37Z)
- Bongard-HOI: Benchmarking Few-Shot Visual Reasoning for Human-Object Interactions
Bongard-HOI is a new visual reasoning benchmark that focuses on compositional learning of human-object interactions from natural images.
It is inspired by two desirable characteristics from the classical Bongard problems (BPs): 1) few-shot concept learning, and 2) context-dependent reasoning.
Bongard-HOI presents a substantial challenge to today's visual recognition models.
arXiv Detail & Related papers (2022-05-27T07:36:29Z)
- The Moral Integrity Corpus: A Benchmark for Ethical Dialogue Systems
Moral deviations are difficult to mitigate because moral judgments are not universal.
The Moral Integrity Corpus captures the moral assumptions of 38k prompt-reply pairs.
We show that current neural language models can automatically generate new rules of thumb (RoTs) that reasonably describe previously unseen interactions.
arXiv Detail & Related papers (2022-04-06T18:10:53Z)
- elBERto: Self-supervised Commonsense Learning for Question Answering
We propose a Self-supervised Bidirectional Representation Learning of Commonsense framework, which is compatible with off-the-shelf QA model architectures.
The framework comprises five self-supervised tasks to force the model to fully exploit the additional training signals from contexts containing rich commonsense.
elBERto achieves substantial improvements on out-of-paragraph and no-effect questions where simple lexical similarity comparison does not help.
arXiv Detail & Related papers (2022-03-17T16:23:45Z)