Rational Silence and False Polarization: How Viewpoint Organizations and Recommender Systems Distort the Expression of Public Opinion
- URL: http://arxiv.org/abs/2403.06264v2
- Date: Thu, 28 Nov 2024 21:44:06 GMT
- Title: Rational Silence and False Polarization: How Viewpoint Organizations and Recommender Systems Distort the Expression of Public Opinion
- Authors: Atrisha Sarkar, Gillian K. Hadfield
- Abstract summary: We show how platforms impact what observers of online discourse come to believe about community views. We show that signals from ideological organizations encourage an increase in rhetorical intensity, leading to the 'rational silence' of moderate users. We identify practical strategies platforms can implement, such as reducing exposure to signals from ideological organizations.
- Score: 4.419843514606336
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: AI-based social media platforms have already transformed the nature of economic and social interaction. AI enables the massive scale and highly personalized nature of online information sharing that we now take for granted. Extensive attention has been devoted to the polarization that social media platforms appear to facilitate. However, a key implication of the transformation we are experiencing due to these AI-powered platforms has received much less attention: how platforms impact what observers of online discourse come to believe about community views. These observers include policymakers and legislators, who look to social media to gauge the prospects for policy and legislative change, as well as developers of AI models trained on large-scale internet data, whose outputs may similarly reflect a distorted view of public opinion. In this paper, we present a nested game-theoretic model to show how observed online opinion is produced by the interaction of the decisions made by users about whether and with what rhetorical intensity to share their opinions on a platform, the efforts of organizations (such as traditional media and advocacy organizations) that seek to encourage or discourage opinion-sharing online, and the operation of AI-powered recommender systems controlled by social media platforms. We show that signals from ideological organizations encourage an increase in rhetorical intensity, leading to the 'rational silence' of moderate users. This, in turn, creates a polarized impression of where average opinions lie. We also show that this observed polarization can be amplified by recommender systems that encourage the formation of online communities that end up seeing a skewed sample of opinion. Finally, we identify practical strategies platforms can implement, such as reducing exposure to signals from ideological organizations and adopting a tailored approach to content moderation.
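To make the 'rational silence' mechanism concrete, here is a minimal toy simulation (an illustrative sketch, not the paper's nested game-theoretic model; the sharing rule, threshold, and all parameters below are assumptions): as ideological signals raise the rhetorical intensity expected of posts, posting becomes costlier for moderates, so only sufficiently extreme users share, and observers see a distorted opinion distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

# True community opinions: mostly moderate, on [-1, 1].
opinions = np.clip(rng.normal(0.1, 0.4, 10_000), -1.0, 1.0)

def observed(signal_strength):
    # Illustrative sharing rule (not the paper's equilibrium): ideological
    # signals raise the rhetorical intensity expected of posts, so a user
    # shares only if their opinion is extreme enough to justify the cost.
    threshold = 0.2 + 0.5 * signal_strength
    return opinions[np.abs(opinions) > threshold]

for s in (0.0, 0.5, 1.0):
    shared = observed(s)
    print(f"signal={s:.1f}  share rate={shared.size / opinions.size:.2f}  "
          f"true mean={opinions.mean():+.2f}  "
          f"observed mean={shared.mean():+.2f}  observed std={shared.std():.2f}")
```

As signal strength grows, the share rate falls and the surviving posts cluster at the extremes, so the visible distribution looks bimodal and dispersed even though the underlying opinions are unimodal and centrist.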
Related papers
- Emergence of human-like polarization among large language model agents [61.622596148368906]
We simulate a networked system involving thousands of large language model agents, discovering that their social interactions result in human-like polarization.
These similarities between humans and LLM agents raise concerns about their capacity to amplify societal polarization, but also suggest they could serve as a valuable testbed for identifying plausible strategies to mitigate it.
arXiv Detail & Related papers (2025-01-09T11:45:05Z) - IOHunter: Graph Foundation Model to Uncover Online Information Operations [8.532129691916348]
We introduce a methodology designed to identify users orchestrating information operations, a.k.a. IO drivers, across various influence campaigns.
Our framework, named IOHunter, leverages the combined strengths of Language Models and Graph Neural Networks to improve generalization in supervised, scarcely-supervised, and cross-IO contexts.
This research marks a step toward developing Graph Foundation Models specifically tailored for the task of IO detection on social media platforms.
arXiv Detail & Related papers (2024-12-19T09:14:24Z) - Who Sets the Agenda on Social Media? Ideology and Polarization in Online Debates [34.82692226532414]
This study analyzes large-scale Twitter data from three global debates -- Climate Change, COVID-19, and the Russo-Ukrainian War.
Our findings reveal that discussions are not primarily shaped by specific categories of actors, such as media or activists, but by shared ideological alignment.
arXiv Detail & Related papers (2024-12-06T16:48:22Z) - Persuasion with Large Language Models: a Survey [49.86930318312291]
Large Language Models (LLMs) have created new disruptive possibilities for persuasive communication.
In areas such as politics, marketing, public health, e-commerce, and charitable giving, such LLM systems have already achieved human-level or even super-human persuasiveness.
Our survey suggests that the current and future potential of LLM-based persuasion poses profound ethical and societal risks.
arXiv Detail & Related papers (2024-11-11T10:05:52Z) - The Root Shapes the Fruit: On the Persistence of Gender-Exclusive Harms in Aligned Language Models [58.130894823145205]
We center transgender, nonbinary, and other gender-diverse identities to investigate how alignment procedures interact with pre-existing gender-diverse bias.
Our findings reveal that DPO-aligned models are particularly sensitive to supervised finetuning.
We conclude with recommendations tailored to DPO and broader alignment practices.
arXiv Detail & Related papers (2024-11-06T06:50:50Z) - From Experts to the Public: Governing Multimodal Language Models in Politically Sensitive Video Analysis [48.14390493099495]
This paper examines the governance of multimodal large language models (MM-LLMs) through individual and collective deliberation.
We conducted a two-step study: first, interviews with 10 journalists established a baseline understanding of expert video interpretation; second, 114 individuals from the general public engaged in deliberation using Inclusive.AI.
arXiv Detail & Related papers (2024-09-15T03:17:38Z) - Constraining Participation: Affordances of Feedback Features in Interfaces to Large Language Models [49.74265453289855]
Large language models (LLMs) are now accessible to anyone with a computer, a web browser, and an internet connection via browser-based interfaces.
This paper examines the affordances of interactive feedback features in ChatGPT's interface, analysing how they shape user input and participation in iteration.
arXiv Detail & Related papers (2024-08-27T13:50:37Z) - Carthago Delenda Est: Co-opetitive Indirect Information Diffusion Model for Influence Operations on Online Social Media [6.236019068888737]
We introduce Diluvsion, an agent-based model for contested information propagation efforts on Twitter-like social media.
The model accounts for the influence of engagement metrics on stance adoption, the spread of information through non-social ties, neutrality as a stance that can itself spread, and themes that are analogous to the media's framing effect and symbiotic with stance propagation.
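As a loose illustration of one ingredient above, here is an engagement-weighted stance-adoption step that treats neutrality as a spreadable stance (a sketch inspired by the summary, not the Diluvsion implementation; the stance labels, `adopt_stance` helper, and engagement numbers are made up):

```python
import random

STANCES = ("pro", "anti", "neutral")  # neutrality spreads like any other stance

def adopt_stance(feed, rng=random.Random(1)):
    """Pick a new stance with probability proportional to the total
    engagement (likes/shares; illustrative numbers) of the posts in the
    user's feed that advocate it."""
    weights = {s: 0.0 for s in STANCES}
    for stance, engagement in feed:
        weights[stance] += engagement
    return rng.choices(STANCES, weights=[weights[s] for s in STANCES])[0]

# A feed where neutral content has the most engagement pulls adopters
# toward neutrality, not just toward the loudest partisan stance.
feed = [("pro", 120.0), ("anti", 40.0), ("neutral", 300.0)]
print(adopt_stance(feed))
```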
arXiv Detail & Related papers (2024-02-02T21:15:24Z) - Decoding Susceptibility: Modeling Misbelief to Misinformation Through a Computational Approach [61.04606493712002]
Susceptibility to misinformation describes the degree of belief in unverifiable claims, which is not directly observable.
Existing susceptibility studies heavily rely on self-reported beliefs.
We propose a computational approach to model users' latent susceptibility levels.
arXiv Detail & Related papers (2023-11-16T07:22:56Z) - On the steerability of large language models toward data-driven personas [98.9138902560793]
Large language models (LLMs) are known to generate biased responses where the opinions of certain groups and populations are underrepresented.
Here, we present a novel approach to achieve controllable generation of specific viewpoints using LLMs.
arXiv Detail & Related papers (2023-11-08T19:01:13Z) - Are Personalized Stochastic Parrots More Dangerous? Evaluating Persona Biases in Dialogue Systems [103.416202777731]
We study "persona biases", which we define to be the sensitivity of dialogue models' harmful behaviors contingent upon the personas they adopt.
We categorize persona biases into biases in harmful expression and harmful agreement, and establish a comprehensive evaluation framework to measure persona biases in five aspects: Offensiveness, Toxic Continuation, Regard, Stereotype Agreement, and Toxic Agreement.
arXiv Detail & Related papers (2023-10-08T21:03:18Z) - Disinformation Echo-Chambers on Facebook [0.27195102129095]
This chapter introduces a computational method designed to identify coordinated inauthentic behavior within Facebook groups.
The method focuses on analyzing posts, URLs, and images, revealing that certain Facebook groups engage in orchestrated campaigns.
These groups simultaneously share identical content, which may expose users to repeated encounters with false or misleading narratives.
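A minimal version of this detection idea (an illustrative sketch, not the chapter's method; the `coordinated_urls` helper and its thresholds are hypothetical) flags URLs shared by many distinct groups within a short time window:

```python
from collections import defaultdict
from datetime import datetime, timedelta

def coordinated_urls(posts, min_groups=3, window=timedelta(minutes=10)):
    """posts: iterable of (group_id, url, timestamp). Flags URLs shared by
    at least `min_groups` distinct groups within `window` of one another.
    Both thresholds are illustrative, not taken from the chapter."""
    by_url = defaultdict(list)
    for group, url, ts in posts:
        by_url[url].append((ts, group))
    flagged = set()
    for url, shares in by_url.items():
        shares.sort()  # sort by timestamp, then slide a window over shares
        for i, (ts, _) in enumerate(shares):
            groups = {g for t, g in shares[i:] if t - ts <= window}
            if len(groups) >= min_groups:
                flagged.add(url)
                break
    return flagged

posts = [
    ("groupA", "http://example.com/story", datetime(2023, 9, 1, 12, 0)),
    ("groupB", "http://example.com/story", datetime(2023, 9, 1, 12, 3)),
    ("groupC", "http://example.com/story", datetime(2023, 9, 1, 12, 7)),
]
print(coordinated_urls(posts))  # {'http://example.com/story'}
```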
arXiv Detail & Related papers (2023-09-14T14:33:16Z) - Social Diversity Reduces the Complexity and Cost of Fostering Fairness [63.70639083665108]
We investigate the effects of interference mechanisms which assume incomplete information and flexible standards of fairness.
We quantify the role of diversity and show how it reduces the need for information gathering.
Our results indicate that diversity changes and opens up novel mechanisms available to institutions wishing to promote fairness.
arXiv Detail & Related papers (2022-11-18T21:58:35Z) - Cascade-based Echo Chamber Detection [16.35164446890934]
Echo chambers in social media have come under considerable scrutiny.
We propose a probabilistic generative model that explains social media footprints.
We show how our model can improve accuracy in auxiliary predictive tasks, such as stance detection and prediction of future propagations.
arXiv Detail & Related papers (2022-08-09T09:30:38Z) - Adherence to Misinformation on Social Media Through Socio-Cognitive and Group-Based Processes [79.79659145328856]
We argue that when misinformation proliferates, this happens because the social media environment enables adherence to misinformation.
We make the case that polarization and misinformation adherence are closely tied.
arXiv Detail & Related papers (2022-06-30T12:34:24Z) - The drivers of online polarization: fitting models to data [0.0]
The echo chamber effect and opinion polarization may be driven by several factors, including human biases in information consumption and personalized recommendations produced by feed algorithms.
Until now, studies have mainly used opinion dynamics models to explore the mechanisms behind the emergence of polarization and echo chambers.
We provide a method to numerically compare the opinion distributions obtained from simulations with those measured on social media.
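One natural instantiation of such a numerical comparison (a sketch of the general idea; the paper's actual metric may differ) is an earth-mover distance between the simulated and measured opinion samples:

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(42)
simulated = np.clip(rng.normal(0.0, 0.3, 5_000), -1, 1)    # model output
empirical = np.clip(np.concatenate([rng.normal(-0.6, 0.2, 2_500),
                                    rng.normal(0.6, 0.2, 2_500)]),
                    -1, 1)                                  # stand-in for measured data

# Smaller distance = the simulation reproduces the observed opinion
# distribution more closely; sweep model parameters to minimize it.
print(wasserstein_distance(simulated, empirical))
```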
arXiv Detail & Related papers (2022-05-31T17:00:41Z) - Opinion dynamics in social networks: From models to data [0.0]
Opinions shape collective action, playing a role in democratic processes, the evolution of norms, and cultural change.
For decades, researchers in the social and natural sciences have tried to describe how shifting individual perspectives and social exchange lead to archetypal states of public opinion like consensus and polarization.
arXiv Detail & Related papers (2022-01-04T19:21:26Z) - Perspective-taking to Reduce Affective Polarization on Social Media [11.379010432760241]
We deploy a randomized field experiment through a browser extension to 1,611 participants on Twitter.
We find that simply exposing participants to "outgroup" feeds enhances engagement, but not an understanding of why others hold their political views.
However, framing the experience in familiar, empathic terms by prompting participants to recall a disagreement does not affect engagement, but does increase their ability to understand opposing views.
arXiv Detail & Related papers (2021-10-11T20:25:10Z) - News consumption and social media regulations policy [70.31753171707005]
We analyze two social media that enforced opposite moderation methods, Twitter and Gab, to assess the interplay between news consumption and content regulation.
Our results show that the moderation pursued by Twitter produces a significant reduction in questionable content.
The lack of clear regulation on Gab leads users to engage with both types of content, with a slight preference for questionable content that may reflect dissing or endorsement behavior.
arXiv Detail & Related papers (2021-06-07T19:26:32Z) - Towards control of opinion diversity by introducing zealots into a polarised social group [7.9603223299524535]
We explore a method to influence or even control the diversity of opinions within a polarised social group.
We leverage the voter model in which users hold binary opinions and repeatedly update their beliefs based on others they connect with.
We inject zealots into a polarised network in order to shift the average opinion towards any target value.
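The basic mechanism is simple to sketch (a minimal illustration of the voter-model-with-zealots idea; a complete-graph network and one-sided zealots are simplifying assumptions here, whereas the paper targets arbitrary average opinions, which requires zealots on both sides):

```python
import random

rng = random.Random(7)

def voter_with_zealots(n=1000, zealots=100, steps=50_000):
    """Binary voter model on a complete graph: at each step a random
    non-zealot copies the opinion of a random neighbor. Zealots hold
    opinion 1 and never update, pulling the average toward 1."""
    opinions = [rng.choice((0, 1)) for _ in range(n)] + [1] * zealots
    for _ in range(steps):
        i = rng.randrange(n)            # only non-zealots update
        j = rng.randrange(n + zealots)  # the copied neighbor may be a zealot
        opinions[i] = opinions[j]
    return sum(opinions) / len(opinions)

print(voter_with_zealots())  # average opinion drifts toward the zealots' view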
arXiv Detail & Related papers (2020-06-12T15:27:30Z) - An Iterative Approach for Identifying Complaint Based Tweets in Social Media Platforms [76.9570531352697]
We propose an iterative methodology that aims to identify complaint-based posts pertaining to the transport domain.
We perform comprehensive evaluations and release a novel dataset for research purposes.
arXiv Detail & Related papers (2020-01-24T22:23:22Z)