MOTIV: Visual Exploration of Moral Framing in Social Media
- URL: http://arxiv.org/abs/2403.14696v1
- Date: Fri, 15 Mar 2024 16:11:58 GMT
- Title: MOTIV: Visual Exploration of Moral Framing in Social Media
- Authors: Andrew Wentzel, Lauren Levine, Vipul Dhariwal, Zarah Fatemi, Abarai Bhattacharya, Barbara Di Eugenio, Andrew Rojecki, Elena Zheleva, G. Elisabeta Marai
- Abstract summary: We present a visual computing framework for analyzing moral rhetoric on social media around controversial topics.
Using Moral Foundation Theory, we propose a methodology for deconstructing and visualizing the "when", "where", and "who" behind each of these moral dimensions as expressed in microblog data.
Our results indicate that this visual approach supports rapid, collaborative hypothesis testing, and can help give insights into the underlying moral values behind controversial political issues.
- Score: 9.314312944316962
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present a visual computing framework for analyzing moral rhetoric on social media around controversial topics. Using Moral Foundation Theory, we propose a methodology for deconstructing and visualizing the \textit{when}, \textit{where}, and \textit{who} behind each of these moral dimensions as expressed in microblog data. We characterize the design of this framework, developed in collaboration with experts from language processing, communications, and causal inference. Our approach integrates microblog data with multiple sources of geospatial and temporal data, and leverages unsupervised machine learning (generalized additive models) to support collaborative hypothesis discovery and testing. We implement this approach in a system named MOTIV. We illustrate this approach on two problems, one related to Stay-at-home policies during the COVID-19 pandemic, and the other related to the Black Lives Matter movement. Through detailed case studies and discussions with collaborators, we identify several insights discovered regarding the different drivers of moral sentiment in social media. Our results indicate that this visual approach supports rapid, collaborative hypothesis testing, and can help give insights into the underlying moral values behind controversial political issues. Supplemental Material: https://osf.io/ygkzn/?view_only=6310c0886938415391d977b8aae8b749
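The pipeline described in the abstract starts by assigning moral-foundation labels to microblog text before pairing them with geospatial and temporal covariates. A minimal sketch of that first step is lexicon-based scoring; the word lists below are illustrative stand-ins, not the lexicon or classifier actually used by MOTIV:

```python
# Sketch: scoring tweet-length text against a moral-foundation lexicon.
# The word lists are hypothetical examples, not MOTIV's actual resources.
from collections import Counter

LEXICON = {
    "care": {"protect", "safe", "harm", "suffer"},
    "fairness": {"equal", "justice", "rights", "unfair"},
    "loyalty": {"community", "together", "betray"},
    "authority": {"law", "order", "obey", "defy"},
    "purity": {"clean", "pure", "disgust"},
}

def moral_scores(text: str) -> dict[str, int]:
    """Count lexicon hits per moral foundation in a short text."""
    tokens = Counter(text.lower().split())
    return {
        foundation: sum(tokens[w] for w in words)
        for foundation, words in LEXICON.items()
    }

scores = moral_scores("Stay home to protect the community and keep everyone safe")
# 'care' matches "protect" and "safe"; 'loyalty' matches "community"
```

Per-foundation counts like these, aggregated by time and location, are the kind of signal the abstract's downstream geospatial and temporal analysis would consume.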
Related papers
- Integrating Large Language Models with Graph-based Reasoning for Conversational Question Answering [58.17090503446995]
We focus on a conversational question answering task which combines the challenges of understanding questions in context and reasoning over evidence gathered from heterogeneous sources like text, knowledge graphs, tables, and infoboxes.
Our method utilizes a graph structured representation to aggregate information about a question and its context.
arXiv Detail & Related papers (2024-06-14T13:28:03Z) - Implicit Personalization in Language Models: A Systematic Study [94.29756463158853]
Implicit Personalization (IP) is a phenomenon of language models inferring a user's background from the implicit cues in the input prompts.
This work systematically studies IP through a rigorous mathematical formulation, a multi-perspective moral reasoning framework, and a set of case studies.
arXiv Detail & Related papers (2024-05-23T17:18:46Z) - MoralBERT: A Fine-Tuned Language Model for Capturing Moral Values in Social Discussions [4.747987317906765]
Moral values play a fundamental role in how we evaluate information, make decisions, and form judgements around important social issues.
Recent advances in Natural Language Processing (NLP) show that moral values can be gauged in human-generated textual content.
This paper introduces MoralBERT, a range of language representation models fine-tuned to capture moral sentiment in social discourse.
arXiv Detail & Related papers (2024-03-12T14:12:59Z) - SADAS: A Dialogue Assistant System Towards Remediating Norm Violations
in Bilingual Socio-Cultural Conversations [56.31816995795216]
Socially-Aware Dialogue Assistant System (SADAS) is designed to ensure that conversations unfold with respect and understanding.
Our system's novel architecture includes: (1) identifying the categories of norms present in the dialogue, (2) detecting potential norm violations, (3) evaluating the severity of these violations, and (4) implementing targeted remedies to rectify the breaches.
arXiv Detail & Related papers (2024-01-29T08:54:21Z) - Getting aligned on representational alignment [89.81370730647467]
We examine the study of representational alignment across cognitive science, neuroscience, and machine learning.
There is limited knowledge transfer between research communities interested in representational alignment.
We propose a unifying framework that can serve as a common language between researchers studying representational alignment.
arXiv Detail & Related papers (2023-10-18T17:47:58Z) - "A Tale of Two Movements": Identifying and Comparing Perspectives in
#BlackLivesMatter and #BlueLivesMatter Movements-related Tweets using Weakly
Supervised Graph-based Structured Prediction [24.02026820625265]
Social media has become a major driver of social change, by facilitating the formation of online social movements.
We propose a weakly supervised graph-based approach that explicitly models perspectives in #BlackLivesMatter-related tweets.
arXiv Detail & Related papers (2023-10-11T03:01:42Z) - Exploring Embeddings for Measuring Text Relatedness: Unveiling
Sentiments and Relationships in Online Comments [1.7230140898679147]
This paper investigates sentiment and semantic relationships among comments across various social media platforms.
It uses word embeddings to analyze components in sentences and documents.
Our analysis will enable a deeper understanding of the interconnectedness of online comments and will investigate the notion of the internet functioning as a large interconnected brain.
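The relatedness measure underlying embedding-based analyses like the one above is typically cosine similarity between comment vectors. A minimal sketch, using toy vectors as stand-ins for learned word or sentence embeddings:

```python
# Sketch: cosine similarity between comment embeddings.
# The vectors are toy stand-ins for learned embeddings.
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two vectors, in [-1, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

v1 = [0.9, 0.1, 0.3]    # e.g., embedding of one comment
v2 = [0.8, 0.2, 0.25]   # a semantically similar comment
v3 = [-0.7, 0.9, -0.1]  # an unrelated comment

assert cosine_similarity(v1, v2) > cosine_similarity(v1, v3)
```

In practice the vectors would come from a pretrained embedding model, and pairwise similarities over many comments would form the graph of relationships the paper investigates.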
arXiv Detail & Related papers (2023-09-15T04:57:23Z) - Foundational Models Defining a New Era in Vision: A Survey and Outlook [151.49434496615427]
Vision systems that see and reason about the compositional nature of visual scenes are fundamental to understanding our world.
The models learned to bridge the gap between such modalities coupled with large-scale training data facilitate contextual reasoning, generalization, and prompt capabilities at test time.
The output of such models can be modified through human-provided prompts without retraining, e.g., segmenting a particular object by providing a bounding box, having interactive dialogues by asking questions about an image or video scene, or manipulating the robot's behavior through language instructions.
arXiv Detail & Related papers (2023-07-25T17:59:18Z) - Rumor Detection with Self-supervised Learning on Texts and Social Graph [101.94546286960642]
We propose contrastive self-supervised learning on heterogeneous information sources, so as to reveal their relations and characterize rumors better.
We term this framework Self-supervised Rumor Detection (SRD).
Extensive experiments on three real-world datasets validate the effectiveness of SRD for automatic rumor detection on social media.
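Contrastive self-supervision of the kind SRD describes pulls representations of matched views (e.g., a rumor's text and its propagation graph) together while pushing apart unrelated pairs. A minimal InfoNCE-style loss over toy vectors, as a generic illustration rather than SRD's actual objective:

```python
# Sketch: InfoNCE-style contrastive loss on toy embeddings.
# This is a generic illustration, not the SRD paper's exact objective.
import math

def info_nce(anchor, positive, negatives, temperature=0.1):
    """Loss is low when the anchor is closer to the positive than to negatives."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    logits = [dot(anchor, positive) / temperature] + [
        dot(anchor, n) / temperature for n in negatives
    ]
    # numerically stable log-softmax of the positive logit
    m = max(logits)
    log_denom = m + math.log(sum(math.exp(l - m) for l in logits))
    return -(logits[0] - log_denom)

anchor    = [1.0, 0.0]               # e.g., text-view embedding
positive  = [0.9, 0.1]               # graph-view embedding of the same post
negatives = [[-1.0, 0.2], [0.0, -1.0]]  # embeddings of unrelated posts
loss = info_nce(anchor, positive, negatives)  # small, since anchor ~ positive
```

Minimizing such a loss across many (text, graph) pairs is what lets the heterogeneous views characterize each other without rumor labels.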
arXiv Detail & Related papers (2022-04-19T12:10:03Z) - Designing for Engaging with News using Moral Framing towards Bridging
Ideological Divides [6.177805579183265]
We present our work designing systems that address ideological division by educating U.S. news consumers to engage using a framework of fundamental human values known as Moral Foundations.
We design and implement a series of new features that encourage users to challenge their understanding of opposing views.
We conduct a field evaluation of each design with 71 participants in total over a period of 6-8 days, finding evidence suggesting users learned to re-frame their discourse in moral values of the opposing side.
arXiv Detail & Related papers (2021-01-27T07:20:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.