Pandemic Culture Wars: Partisan Differences in the Moral Language of
COVID-19 Discussions
- URL: http://arxiv.org/abs/2305.18533v2
- Date: Tue, 17 Oct 2023 04:49:26 GMT
- Title: Pandemic Culture Wars: Partisan Differences in the Moral Language of
COVID-19 Discussions
- Authors: Ashwin Rao, Siyi Guo, Sze-Yuh Nina Wang, Fred Morstatter and Kristina
Lerman
- Abstract summary: We focus on five contentious issues: coronavirus origins, lockdowns, masking, education, and vaccines.
We use state-of-the-art computational methods to analyze moral language and infer political ideology.
Our findings reveal ideological differences in issue salience and moral language used by different groups.
- Score: 7.356252425142533
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Effective response to pandemics requires coordinated adoption of mitigation
measures, like masking and quarantines, to curb a virus's spread. However, as
the COVID-19 pandemic demonstrated, political divisions can hinder consensus on
the appropriate response. To better understand these divisions, our study
examines a vast collection of COVID-19-related tweets. We focus on five
contentious issues: coronavirus origins, lockdowns, masking, education, and
vaccines. We describe a weakly supervised method to identify issue-relevant
tweets and employ state-of-the-art computational methods to analyze moral
language and infer political ideology. We explore how partisanship and moral
language shape conversations about these issues. Our findings reveal
ideological differences in issue salience and moral language used by different
groups. We find that conservatives use more negatively valenced moral language
than liberals, and that political elites use moral rhetoric to a greater extent
than non-elites across most issues. Examining the evolution and moralization of
divisive issues can provide valuable insights into the dynamics of COVID-19
discussions and assist policymakers in better understanding the emergence of
ideological divisions.
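The abstract names the pipeline stages (weakly supervised issue detection, then
moral-language analysis) but not their implementation. As a minimal sketch of
what a keyword-seeded weak-supervision tagger combined with lexicon-based
moral-language scoring can look like, the Python below uses invented seed terms
and a toy moral-foundations lexicon; none of it is the authors' actual method
or data.

```python
import re
from collections import Counter

# Hypothetical seed keywords for each contentious issue (illustrative only;
# the abstract does not specify the paper's weak-supervision signals).
ISSUE_SEEDS = {
    "origins":   {"wuhan", "lab leak", "bioweapon"},
    "lockdowns": {"lockdown", "stay-at-home", "reopen"},
    "masking":   {"mask", "face covering", "n95"},
    "education": {"school", "remote learning", "classroom"},
    "vaccines":  {"vaccine", "pfizer", "moderna", "booster"},
}

# Tiny stand-in for a moral-foundations lexicon; real resources (e.g., the
# Moral Foundations Dictionary) map thousands of words to foundations such
# as care/harm, fairness/cheating, loyalty/betrayal, authority/subversion,
# and purity/degradation.
MORAL_LEXICON = {
    "harm": "care/harm", "protect": "care/harm",
    "cheat": "fairness/cheating", "fair": "fairness/cheating",
    "betray": "loyalty/betrayal", "patriot": "loyalty/betrayal",
    "obey": "authority/subversion", "defy": "authority/subversion",
    "pure": "purity/degradation", "disgust": "purity/degradation",
}

def tag_issues(tweet: str) -> set:
    """Weakly label a tweet with every issue whose seed terms it mentions."""
    text = tweet.lower()
    return {issue for issue, seeds in ISSUE_SEEDS.items()
            if any(seed in text for seed in seeds)}

def moral_profile(tweet: str) -> Counter:
    """Count mentions of each moral foundation via simple lexicon lookup."""
    tokens = re.findall(r"[a-z']+", tweet.lower())
    return Counter(MORAL_LEXICON[t] for t in tokens if t in MORAL_LEXICON)

if __name__ == "__main__":
    tweet = "Those who defy the mask mandate harm the vulnerable."
    print(tag_issues(tweet))     # {'masking'}
    print(moral_profile(tweet))  # Counter({'authority/subversion': 1, 'care/harm': 1})
```

In practice the paper's "state-of-the-art computational methods" would replace
these lookups with trained classifiers, but the two-stage shape (issue tagging,
then moral-language measurement) is the same.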
Related papers
- Language Model Alignment in Multilingual Trolley Problems [138.5684081822807]
Building on the Moral Machine experiment, we develop a cross-lingual corpus of moral dilemma vignettes in over 100 languages called MultiTP.
Our analysis explores the alignment of 19 different LLMs with human judgments, capturing preferences across six moral dimensions.
We discover significant variance in alignment across languages, challenging the assumption of uniform moral reasoning in AI systems.
arXiv Detail & Related papers (2024-07-02T14:02:53Z)
- Whose Emotions and Moral Sentiments Do Language Models Reflect? [5.4547979989237225]
Language models (LMs) are known to represent the perspectives of some social groups better than others.
We find significant misalignment of LMs with both liberal and conservative ideological groups.
Even after steering the LMs towards specific ideological perspectives, the misalignment and liberal tendencies of the model persist.
arXiv Detail & Related papers (2024-02-16T22:34:53Z)
- Moral consensus and divergence in partisan language use [0.0]
Polarization has increased substantially in political discourse, contributing to a widening partisan divide.
We analyzed large-scale, real-world language use in Reddit communities and in news outlets to uncover psychological dimensions along which partisan language is divided.
arXiv Detail & Related papers (2023-10-14T16:50:26Z)
- Rethinking Machine Ethics -- Can LLMs Perform Moral Reasoning through the Lens of Moral Theories? [78.3738172874685]
Making moral judgments is an essential step toward developing ethical AI systems.
Prevalent approaches are mostly implemented in a bottom-up manner, using a large set of annotated data to train models on crowd-sourced opinions about morality.
This work proposes a flexible top-down framework to steer (Large) Language Models (LMs) to perform moral reasoning with well-established moral theories from interdisciplinary research.
arXiv Detail & Related papers (2023-08-29T15:57:32Z)
- The Face of Populism: Examining Differences in Facial Emotional Expressions of Political Leaders Using Machine Learning [50.24983453990065]
We use a deep-learning approach to process a sample of 220 YouTube videos of political leaders from 15 different countries.
We observe statistically significant differences in the average score of negative emotions between groups of leaders with varying degrees of populist rhetoric.
arXiv Detail & Related papers (2023-04-19T18:32:49Z)
- MoralDial: A Framework to Train and Evaluate Moral Dialogue Systems via Moral Discussions [71.25236662907056]
A moral dialogue system aligned with users' values could enhance conversation engagement and user connections.
We propose a framework, MoralDial, to train and evaluate moral dialogue systems.
arXiv Detail & Related papers (2022-12-21T02:21:37Z)
- ClarifyDelphi: Reinforced Clarification Questions with Defeasibility Rewards for Social and Moral Situations [81.70195684646681]
We present ClarifyDelphi, an interactive system that learns to ask clarification questions.
We posit that questions whose potential answers lead to diverging moral judgments are the most informative.
Our work is ultimately inspired by studies in cognitive science that have investigated the flexibility in moral cognition.
arXiv Detail & Related papers (2022-12-20T16:33:09Z)
- The Moral Foundations Reddit Corpus [3.0320832388397827]
Moral framing and sentiment can affect a variety of online and offline behaviors.
We present the Moral Foundations Reddit Corpus, a collection of 16,123 Reddit comments curated from 12 distinct subreddits.
arXiv Detail & Related papers (2022-08-10T20:08:10Z)
- Learning to Adapt Domain Shifts of Moral Values via Instance Weighting [74.94940334628632]
Classifying moral values in user-generated text from social media is critical to understanding community cultures.
Moral values and language usage can change across social movements.
We propose a neural adaptation framework via instance weighting to improve cross-domain classification tasks (see the generic instance-weighting sketch after this list).
arXiv Detail & Related papers (2022-04-15T18:15:41Z)
- Identifying Morality Frames in Political Tweets using Relational Learning [27.047907641503762]
Moral sentiment is motivated by its targets, which can correspond to individuals or collective entities.
We introduce morality frames, a representation framework for organizing moral attitudes directed at different entities.
We propose a relational learning model to predict moral attitudes towards entities and moral foundations jointly.
arXiv Detail & Related papers (2021-09-09T19:48:57Z)
- COVID-19 Pandemic: Identifying Key Issues using Social Media and Natural Language Processing [14.54689130381201]
Social media data can reveal public perceptions and experience with respect to the pandemic.
We analyzed COVID-19-related comments collected from six social media platforms.
We identify 34 negative themes, of which 17 concern economic, socio-political, educational, and political issues.
arXiv Detail & Related papers (2020-08-23T12:05:12Z)
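As flagged in the "Learning to Adapt Domain Shifts of Moral Values via Instance
Weighting" entry above, here is a generic sketch of an instance-weighted
training loss in PyTorch. The shapes, weights, and function name are invented
for illustration; this shows the general instance-weighting idea, not that
paper's specific framework.

```python
import torch
import torch.nn.functional as F

def instance_weighted_loss(logits, targets, weights):
    """Cross-entropy in which each training example carries its own weight,
    e.g., up-weighting source-domain examples that resemble the target domain."""
    per_example = F.cross_entropy(logits, targets, reduction="none")
    return (weights * per_example).sum() / weights.sum()

# Toy batch: 4 examples over 5 moral-foundation classes (values illustrative).
logits = torch.randn(4, 5, requires_grad=True)
targets = torch.tensor([0, 2, 1, 4])
weights = torch.tensor([1.0, 0.3, 0.9, 0.5])  # hypothetical domain-similarity scores
loss = instance_weighted_loss(logits, targets, weights)
loss.backward()  # gradients flow through the weighted average as usual
```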