Designing for Engaging with News using Moral Framing towards Bridging
Ideological Divides
- URL: http://arxiv.org/abs/2101.11231v4
- Date: Fri, 21 Jan 2022 05:29:18 GMT
- Title: Designing for Engaging with News using Moral Framing towards Bridging
Ideological Divides
- Authors: Jessica Wang, Amy Zhang, David Karger
- Abstract summary: We present our work designing systems for addressing ideological division through educating U.S. news consumers to engage using a framework of fundamental human values known as Moral Foundations.
We design and implement a series of new features that encourage users to challenge their understanding of opposing views.
We conduct a field evaluation of each design with 71 participants in total over a period of 6-8 days, finding evidence suggesting users learned to re-frame their discourse in the moral values of the opposing side.
- Score: 6.177805579183265
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Society is showing signs of strong ideological polarization. When pushed to
seek perspectives different from their own, people often reject diverse ideas
or find them unfathomable. Work has shown that framing controversial issues
using the values of the audience can improve understanding of opposing views.
In this paper, we present our work designing systems for addressing ideological
division through educating U.S. news consumers to engage using a framework of
fundamental human values known as Moral Foundations. We design and implement a
series of new features that encourage users to challenge their understanding of
opposing views, including annotation of moral frames in news articles,
discussion of those frames via inline comments, and recommendations based on
relevant moral frames. We describe two versions of features -- the first
covering a suite of ways to interact with moral framing in news, and the second
tailored towards collaborative annotation and discussion. We conduct a field
evaluation of each design iteration with 71 participants in total over a period
of 6-8 days, finding evidence suggesting users learned to re-frame their
discourse in moral values of the opposing side. Our work provides several
design considerations for building systems to engage with moral framing.
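One feature the abstract mentions is recommending articles based on relevant moral frames. A minimal sketch of how such a recommender might work, assuming Moral Foundations labels on articles and the common finding that each ideological side emphasizes a different subset of foundations (the article data, scoring rule, and foundation-to-ideology mapping below are illustrative assumptions, not the paper's implementation):

```python
# Hypothetical sketch: recommend articles whose annotated moral frames fall in
# the foundations the reader's own side tends to discount, nudging engagement
# with the opposing side's framing.

# Foundations a reader likely under-engages with, keyed by self-reported leaning
# (an assumed mapping loosely based on Moral Foundations Theory).
OPPOSING_FOUNDATIONS = {
    "liberal": {"loyalty", "authority", "sanctity"},
    "conservative": {"care", "fairness"},
}

def recommend(articles, leaning, k=2):
    """Rank articles by how many of their annotated moral frames
    belong to the foundations the reader's side tends to discount."""
    target = OPPOSING_FOUNDATIONS[leaning]
    scored = [(len(set(a["frames"]) & target), a["title"]) for a in articles]
    scored.sort(reverse=True)
    return [title for score, title in scored[:k] if score > 0]

articles = [
    {"title": "A", "frames": ["care"]},
    {"title": "B", "frames": ["loyalty", "sanctity"]},
]
print(recommend(articles, "liberal"))  # -> ['B']
```

A real system would score frames per passage rather than per article, but the matching step is the core idea.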
Related papers
- MOTIV: Visual Exploration of Moral Framing in Social Media [9.314312944316962]
We present a visual computing framework for analyzing moral rhetoric on social media around controversial topics.
We propose a methodology for deconstructing and visualizing the "when", "where", and "who" behind each of these moral dimensions as expressed in microblog data.
Our results indicate that this visual approach supports rapid, collaborative hypothesis testing, and can help give insights into the underlying moral values behind controversial political issues.
arXiv Detail & Related papers (2024-03-15T16:11:58Z)
- MoralBERT: A Fine-Tuned Language Model for Capturing Moral Values in Social Discussions [4.747987317906765]
Moral values play a fundamental role in how we evaluate information, make decisions, and form judgements around important social issues.
Recent advances in Natural Language Processing (NLP) show that moral values can be gauged in human-generated textual content.
This paper introduces MoralBERT, a range of language representation models fine-tuned to capture moral sentiment in social discourse.
arXiv Detail & Related papers (2024-03-12T14:12:59Z)
- Lego: Learning to Disentangle and Invert Personalized Concepts Beyond Object Appearance in Text-to-Image Diffusion Models [60.80960965051388]
Adjectives and verbs are entangled with their subject nouns.
Lego disentangles concepts from their associated subjects using a simple yet effective Subject Separation step.
Lego-generated concepts were preferred over 70% of the time when compared to the baseline.
arXiv Detail & Related papers (2023-11-23T07:33:38Z)
- Rethinking Machine Ethics -- Can LLMs Perform Moral Reasoning through the Lens of Moral Theories? [78.3738172874685]
Making moral judgments is an essential step toward developing ethical AI systems.
Prevalent approaches are mostly implemented in a bottom-up manner, which uses a large set of annotated data to train models based on crowd-sourced opinions about morality.
This work proposes a flexible top-down framework to steer (Large) Language Models (LMs) to perform moral reasoning with well-established moral theories from interdisciplinary research.
arXiv Detail & Related papers (2023-08-29T15:57:32Z)
- MindDial: Belief Dynamics Tracking with Theory-of-Mind Modeling for Situated Neural Dialogue Generation [62.44907105496227]
MindDial is a novel conversational framework that can generate situated free-form responses with theory-of-mind modeling.
We introduce an explicit mind module that can track the speaker's belief and the speaker's prediction of the listener's belief.
Our framework is applied to both prompting and fine-tuning-based models, and is evaluated across scenarios involving both common ground alignment and negotiation.
arXiv Detail & Related papers (2023-06-27T07:24:32Z)
- Towards Few-Shot Identification of Morality Frames using In-Context Learning [24.29993132301275]
We study few-shot identification of a psycho-linguistic concept, Morality Frames, using Large Language Models (LLMs).
Morality frames are a representation framework that provides a holistic view of the moral sentiment expressed in text.
We propose prompting-based approaches using pretrained Large Language Models for identification of morality frames, relying on few-shot exemplars.
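A minimal sketch of the kind of few-shot prompt such an approach might construct, assuming a prompt of labeled exemplars followed by the query sentence (the instruction wording, exemplars, and label set below are illustrative assumptions, not taken from the paper):

```python
# Hypothetical sketch: build an in-context-learning prompt that pairs exemplar
# sentences with morality-frame labels, then appends the unlabeled query.

FOUNDATIONS = ["care/harm", "fairness/cheating", "loyalty/betrayal",
               "authority/subversion", "sanctity/degradation"]

def build_prompt(exemplars, query):
    """exemplars: list of (sentence, label) pairs used as few-shot context."""
    header = ("Label the moral foundation expressed in each sentence. "
              f"Choose one of: {', '.join(FOUNDATIONS)}.\n\n")
    shots = "".join(f"Sentence: {s}\nLabel: {l}\n\n" for s, l in exemplars)
    return header + shots + f"Sentence: {query}\nLabel:"

prompt = build_prompt(
    [("We must protect the vulnerable.", "care/harm")],
    "Cheaters should face consequences.",
)
```

The resulting string would then be sent to an LLM, whose completion after the final "Label:" is read off as the predicted frame.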
arXiv Detail & Related papers (2023-02-03T23:26:59Z)
- MoralDial: A Framework to Train and Evaluate Moral Dialogue Systems via Moral Discussions [71.25236662907056]
A moral dialogue system aligned with users' values could enhance conversation engagement and user connections.
We propose a framework, MoralDial, to train and evaluate moral dialogue systems.
arXiv Detail & Related papers (2022-12-21T02:21:37Z)
- Persua: A Visual Interactive System to Enhance the Persuasiveness of Arguments in Online Discussion [52.49981085431061]
Enhancing people's ability to write persuasive arguments could contribute to the effectiveness and civility in online communication.
We derived four design goals for a tool that helps users improve the persuasiveness of arguments in online discussions.
Persua is an interactive visual system that provides example-based guidance on persuasive strategies to enhance the persuasiveness of arguments.
arXiv Detail & Related papers (2022-04-16T08:07:53Z)
- Separating Skills and Concepts for Novel Visual Question Answering [66.46070380927372]
Generalization to out-of-distribution data has been a problem for Visual Question Answering (VQA) models.
"Skills" are visual tasks, such as counting or attribute recognition, and are applied to "concepts" mentioned in the question.
We present a novel method for learning to compose skills and concepts that separates these two factors implicitly within a model.
arXiv Detail & Related papers (2021-07-19T18:55:10Z)
- Text-based inference of moral sentiment change [11.188112005462536]
We present a text-based framework for investigating moral sentiment change of the public via longitudinal corpora.
We build our methodology by exploring moral biases learned from diachronic word embeddings.
Our work offers opportunities for applying natural language processing toward characterizing moral sentiment change in society.
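The embedding-based idea can be sketched in a few lines: score a word's moral polarity by comparing its vector to the centroids of "moral" and "immoral" seed words (the tiny 3-d vectors below are toy values for illustration; the paper works with diachronic embeddings trained on corpora from different time periods):

```python
# Hypothetical sketch: moral polarity as the difference in cosine similarity
# between a word vector and the centroids of two seed-word sets.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def centroid(vectors):
    # Component-wise mean of a list of equal-length vectors.
    return [sum(component) / len(vectors) for component in zip(*vectors)]

def moral_polarity(word_vec, moral_seeds, immoral_seeds):
    """Positive -> closer to the moral pole; negative -> the immoral pole."""
    return (cosine(word_vec, centroid(moral_seeds))
            - cosine(word_vec, centroid(immoral_seeds)))
```

Tracking this score for the same word across embeddings trained on different decades is what turns a static polarity measure into an account of moral sentiment change.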
arXiv Detail & Related papers (2020-01-20T18:52:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.