Dynamics of Polarization Under Normative Institutions and Opinion
Expression Stewarding
- URL: http://arxiv.org/abs/2403.06264v1
- Date: Sun, 10 Mar 2024 17:02:19 GMT
- Title: Dynamics of Polarization Under Normative Institutions and Opinion
Expression Stewarding
- Authors: Atrisha Sarkar, Gillian K. Hadfield
- Abstract summary: We establish that human normativity, that is, individual expression of normative opinions based on beliefs about the population, can lead to population-level polarization.
Using a game-theoretic model, we establish that individuals with more extreme opinions will have more extreme rhetoric and higher misperceptions about their outgroup members.
- Score: 5.22145960878624
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Although there is mounting empirical evidence for the increase in affective
polarization, few mechanistic models can explain its emergence at the
population level. The question of how such a phenomenon can emerge from
divergent opinions of a population on an ideological issue is still an open
issue. In this paper, we establish that human normativity, that is, individual
expression of normative opinions based on beliefs about the population, can
lead to population-level polarization when ideological institutions distort
beliefs in accordance with their objective of moving expressed opinion to one
extreme. Using a game-theoretic model, we establish that individuals with more
extreme opinions will have more extreme rhetoric and higher misperceptions
about their outgroup members. Our model also shows that when social
recommendation systems mediate institutional signals, we can observe the
formation of different institutional communities, each with its unique
community structure and characteristics. Using the model, we identify practical
strategies platforms can implement, such as reducing exposure to signals from
ideological institutions and a tailored approach to content moderation, both of
which can rectify the affective polarization problem within their purview.
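The abstract does not specify the game-theoretic model's equations, so the following is only a heavily hedged toy sketch of the mechanism it describes: agents express a blend of their own opinion and the population norm they believe holds, while an ideological institution distorts that believed norm toward one extreme. All parameter names and update rules here are illustrative assumptions, not the authors' model.

```python
import random

def simulate(n_agents=200, steps=50, distortion=0.6, conformity=0.4, seed=0):
    """Toy opinion-expression dynamic (illustrative only).

    Each agent holds a fixed latent opinion in [-1, 1] and expresses a
    mix of that opinion and the population norm it *believes* holds.
    An institution distorts the believed norm toward the +1 extreme.
    """
    rng = random.Random(seed)
    opinions = [rng.uniform(-1, 1) for _ in range(n_agents)]
    expressed = list(opinions)
    for _ in range(steps):
        true_norm = sum(expressed) / n_agents
        # Institutional signal: perceived norm is pulled toward +1.
        perceived = (1 - distortion) * true_norm + distortion * 1.0
        expressed = [
            max(-1.0, min(1.0, (1 - conformity) * o + conformity * perceived))
            for o in opinions
        ]
    return expressed

# With no institutional distortion, expressed opinions track the true
# population mean; with strong distortion, the expressed mean shifts
# toward the extreme the institution favors.
baseline = simulate(distortion=0.0)
distorted = simulate(distortion=0.8)
```

Under this sketch, reducing exposure to institutional signals corresponds to lowering `distortion`, which pulls expressed opinion back toward the true population norm.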
Related papers
- Emergence of human-like polarization among large language model agents [61.622596148368906]
We simulate a networked system involving thousands of large language model agents and discover that their social interactions result in human-like polarization.
Similarities between humans and LLM agents raise concerns about their capacity to amplify societal polarization, but also hold the potential to serve as a valuable testbed for identifying plausible strategies to mitigate it.
arXiv Detail & Related papers (2025-01-09T11:45:05Z) - IOHunter: Graph Foundation Model to Uncover Online Information Operations [8.532129691916348]
Social media platforms have become vital spaces for public discourse, serving as modern agoras where a wide range of voices influence societal narratives.
The spread of misinformation, false news, and misleading claims threatens democratic processes and societal cohesion.
We introduce a methodology designed to identify users orchestrating information operations, a.k.a. IO drivers, across various influence campaigns.
arXiv Detail & Related papers (2024-12-19T09:14:24Z) - Who Sets the Agenda on Social Media? Ideology and Polarization in Online Debates [34.82692226532414]
This study analyzes large-scale Twitter data from three global debates -- Climate Change, COVID-19, and the Russo-Ukrainian War.
Our findings reveal that discussions are not primarily shaped by specific categories of actors, such as media or activists, but by shared ideological alignment.
arXiv Detail & Related papers (2024-12-06T16:48:22Z) - From Experts to the Public: Governing Multimodal Language Models in Politically Sensitive Video Analysis [48.14390493099495]
This paper examines the governance of multimodal large language models (MM-LLMs) through individual and collective deliberation.
We conducted a two-step study: first, interviews with 10 journalists established a baseline understanding of expert video interpretation; second, 114 individuals from the general public engaged in deliberation using Inclusive.AI.
arXiv Detail & Related papers (2024-09-15T03:17:38Z) - Constraining Participation: Affordances of Feedback Features in Interfaces to Large Language Models [49.74265453289855]
Large language models (LLMs) are now accessible to anyone with a computer, a web browser, and an internet connection via browser-based interfaces.
This paper examines the affordances of interactive feedback features in ChatGPT's interface, analysing how they shape user input and participation in iteration.
arXiv Detail & Related papers (2024-08-27T13:50:37Z) - Carthago Delenda Est: Co-opetitive Indirect Information Diffusion Model
for Influence Operations on Online Social Media [6.236019068888737]
We introduce Diluvsion, an agent-based model for contested information propagation efforts on Twitter-like social media.
We account for engagement metrics in influencing stance adoption, non-social tie spreading of information, neutrality as a stance that can be spread, and themes that are analogous to media's framing effect and are symbiotic with respect to stance propagation.
arXiv Detail & Related papers (2024-02-02T21:15:24Z) - Disinformation Echo-Chambers on Facebook [0.27195102129095]
This chapter introduces a computational method designed to identify coordinated inauthentic behavior within Facebook groups.
The method focuses on analyzing posts, URLs, and images, revealing that certain Facebook groups engage in orchestrated campaigns.
These groups simultaneously share identical content, which may expose users to repeated encounters with false or misleading narratives.
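The chapter's exact method is not given in this summary; as an illustrative sketch only, coordinated identical-content sharing can be approximated by flagging groups that post the same URL within a short time window. The threshold, window, and data shape below are assumptions for illustration.

```python
from collections import defaultdict

def flag_coordinated(posts, window=60, min_groups=3):
    """Illustrative sketch (not the chapter's method): flag URLs shared
    by at least `min_groups` distinct groups within `window` seconds.

    `posts` is a list of (group_id, url, timestamp_seconds) tuples.
    Returns a set of (url, frozenset_of_groups) pairs.
    """
    by_url = defaultdict(list)
    for group, url, ts in posts:
        by_url[url].append((ts, group))
    coordinated = set()
    for url, shares in by_url.items():
        shares.sort()
        for anchor_ts, _ in shares:
            # Groups that shared this URL within `window` of the anchor post.
            cluster = {g for t, g in shares if abs(t - anchor_ts) <= window}
            if len(cluster) >= min_groups:
                coordinated.add((url, frozenset(cluster)))
    return coordinated

# Hypothetical example: three groups share the same URL within a minute.
posts = [
    ("g1", "http://example.com/a", 0),
    ("g2", "http://example.com/a", 30),
    ("g3", "http://example.com/a", 45),
    ("g4", "http://example.com/b", 10),
]
flags = flag_coordinated(posts)
```

A real pipeline would also compare image hashes and near-duplicate text, but the windowed-grouping idea is the core of this kind of detection.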
arXiv Detail & Related papers (2023-09-14T14:33:16Z) - Adherence to Misinformation on Social Media Through Socio-Cognitive and
Group-Based Processes [79.79659145328856]
We argue that when misinformation proliferates, this happens because the social media environment enables adherence to misinformation.
We make the case that polarization and misinformation adherence are closely tied.
arXiv Detail & Related papers (2022-06-30T12:34:24Z) - Perspective-taking to Reduce Affective Polarization on Social Media [11.379010432760241]
We deploy a randomized field experiment through a browser extension to 1,611 participants on Twitter.
We find that simply exposing participants to "outgroup" feeds enhances engagement, but not an understanding of why others hold their political views.
Framing the experience in familiar, empathic terms by prompting participants to recall a disagreement does not affect engagement, but does increase their ability to understand opposing views.
arXiv Detail & Related papers (2021-10-11T20:25:10Z) - News consumption and social media regulations policy [70.31753171707005]
We analyze two social media that enforced opposite moderation methods, Twitter and Gab, to assess the interplay between news consumption and content regulation.
Our results show that the presence of moderation pursued by Twitter produces a significant reduction of questionable content.
The lack of clear regulation on Gab results in a tendency of users to engage with both types of content, showing a slight preference for questionable content, which may account for a dissing/endorsement behavior.
arXiv Detail & Related papers (2021-06-07T19:26:32Z) - An Iterative Approach for Identifying Complaint Based Tweets in Social
Media Platforms [76.9570531352697]
We propose an iterative methodology which aims to identify complaint based posts pertaining to the transport domain.
We perform comprehensive evaluations along with releasing a novel dataset for the research purposes.
arXiv Detail & Related papers (2020-01-24T22:23:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information listed and is not responsible for any consequences arising from its use.