Social Media Algorithms Can Shape Affective Polarization via Exposure to Antidemocratic Attitudes and Partisan Animosity
- URL: http://arxiv.org/abs/2411.14652v1
- Date: Fri, 22 Nov 2024 00:55:07 GMT
- Title: Social Media Algorithms Can Shape Affective Polarization via Exposure to Antidemocratic Attitudes and Partisan Animosity
- Authors: Tiziano Piccardi, Martin Saveski, Chenyan Jia, Jeffrey T. Hancock, Jeanne L. Tsai, Michael Bernstein
- Abstract summary: We develop an approach to re-rank feeds in real-time to test the effects of content that is likely to polarize.
We observe more positive outparty feelings when AAPA exposure is decreased and more negative outparty feelings when AAPA exposure is increased.
These findings highlight a potential pathway for developing feed algorithms that mitigate affective polarization.
- Score: 11.77638733450929
- Abstract: There is widespread concern about the negative impacts of social media feed ranking algorithms on political polarization. Leveraging advancements in large language models (LLMs), we develop an approach to re-rank feeds in real-time to test the effects of content that is likely to polarize: expressions of antidemocratic attitudes and partisan animosity (AAPA). In a preregistered 10-day field experiment on X/Twitter with 1,256 consented participants, we increase or decrease participants' exposure to AAPA in their algorithmically curated feeds. We observe more positive outparty feelings when AAPA exposure is decreased and more negative outparty feelings when AAPA exposure is increased. Exposure to AAPA content also results in an immediate increase in negative emotions, such as sadness and anger. The interventions do not significantly impact traditional engagement metrics such as re-post and favorite rates. These findings highlight a potential pathway for developing feed algorithms that mitigate affective polarization by addressing content that undermines the shared values required for a healthy democracy.
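The core intervention described in the abstract is a real-time re-ranking of an algorithmically curated feed based on how likely each post is to contain AAPA content. A minimal sketch of that idea is below, with the caveat that the classifier is a stand-in: the paper uses an LLM-based scorer, and the function and threshold names here are illustrative, not the authors' implementation.

```python
# Sketch of the "decrease AAPA exposure" condition: posts the classifier
# flags as likely AAPA are demoted below the rest of the feed, while the
# platform's original ordering is preserved within each group.

def rerank_reduce_aapa(posts, aapa_score, threshold=0.5):
    """Stable re-rank: posts scoring below `threshold` keep their original
    relative order at the top; likely-AAPA posts are moved to the bottom."""
    low = [p for p in posts if aapa_score(p) < threshold]
    high = [p for p in posts if aapa_score(p) >= threshold]
    return low + high

# Toy AAPA scores keyed by post id (illustrative values only).
scores = {"a": 0.1, "b": 0.9, "c": 0.3, "d": 0.7}
feed = ["a", "b", "c", "d"]
print(rerank_reduce_aapa(feed, scores.get))  # ['a', 'c', 'b', 'd']
```

The "increase exposure" condition would invert the grouping; the key design point is that the intervention only reorders the participant's existing feed rather than injecting new content.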
Related papers
- On the Use of Proxies in Political Ad Targeting [49.61009579554272]
We show that major political advertisers circumvented mitigations by targeting proxy attributes.
Our findings have crucial implications for the ongoing discussion on the regulation of political advertising.
arXiv Detail & Related papers (2024-10-18T17:15:13Z) - Filter bubbles and affective polarization in user-personalized large language model outputs [0.15540058359482856]
Large language models (LLMs) have led to a push for increased personalization of model outputs to individual users.
We explore how prompting a leading large language model, ChatGPT-3.5, with a user's political affiliation prior to asking factual questions leads to differing results.
arXiv Detail & Related papers (2023-10-31T18:19:28Z) - Decoding the Silent Majority: Inducing Belief Augmented Social Graph with Large Language Model for Response Forecasting [74.68371461260946]
SocialSense is a framework that induces a belief-centered graph on top of an existing social network, along with graph-based propagation to capture social dynamics.
Our method surpasses existing state-of-the-art in experimental evaluations for both zero-shot and supervised settings.
arXiv Detail & Related papers (2023-10-20T06:17:02Z) - D-BIAS: A Causality-Based Human-in-the-Loop System for Tackling Algorithmic Bias [57.87117733071416]
We propose D-BIAS, a visual interactive tool that embodies human-in-the-loop AI approach for auditing and mitigating social biases.
A user can detect the presence of bias against a group by identifying unfair causal relationships in the causal network.
For each interaction, say weakening/deleting a biased causal edge, the system uses a novel method to simulate a new (debiased) dataset.
arXiv Detail & Related papers (2022-08-10T03:41:48Z) - Joint Multisided Exposure Fairness for Recommendation [76.75990595228666]
This paper formalizes a family of exposure fairness metrics that model the problem jointly from the perspective of both the consumers and producers.
Specifically, we consider group attributes for both types of stakeholders to identify and mitigate fairness concerns that go beyond individual users and items towards more systemic biases in recommendation.
arXiv Detail & Related papers (2022-04-29T19:13:23Z) - Cross Pairwise Ranking for Unbiased Item Recommendation [57.71258289870123]
We develop a new learning paradigm named Cross Pairwise Ranking (CPR).
CPR achieves unbiased recommendation without knowing the exposure mechanism.
We prove in theory that this way offsets the influence of user/item propensity on the learning.
arXiv Detail & Related papers (2022-04-26T09:20:27Z) - POLE: Polarized Embedding for Signed Networks [2.6546685109604304]
Recent advances in machine learning for signed networks hold the promise to guide small interventions with the goal of reducing polarization in social media.
Existing models are especially ineffective in predicting conflicts (or negative links) among users.
This is due to a strong correlation between link signs and the network structure.
We propose POLE, a signed embedding method for polarized graphs that captures both topological and signed similarities jointly.
arXiv Detail & Related papers (2021-10-17T19:28:47Z) - Perspective-taking to Reduce Affective Polarization on Social Media [11.379010432760241]
We deploy a randomized field experiment through a browser extension to 1,611 participants on Twitter.
We find that simply exposing participants to "outgroup" feeds enhances engagement, but not an understanding of why others hold their political views.
Framing the experience in familiar, empathic terms by prompting participants to recall a disagreement does not affect engagement, but does increase their ability to understand opposing views.
arXiv Detail & Related papers (2021-10-11T20:25:10Z) - Towards Unbiased Visual Emotion Recognition via Causal Intervention [63.74095927462]
We propose a novel Emotion Recognition Network (IERN) to alleviate the negative effects brought by the dataset bias.
A series of designed tests validate the effectiveness of IERN, and experiments on three emotion benchmarks demonstrate that IERN outperforms other state-of-the-art approaches.
arXiv Detail & Related papers (2021-07-26T10:40:59Z) - Designing Recommender Systems to Depolarize [0.32634122554913997]
Polarization is implicated in the erosion of democracy and the progression to violence.
While algorithm-driven social media does not seem to be a primary driver of polarization at the country level, it could be a useful intervention point in polarized societies.
This paper examines algorithmic depolarization interventions with the goal of conflict transformation.
arXiv Detail & Related papers (2021-07-11T03:23:42Z) - Towards control of opinion diversity by introducing zealots into a polarised social group [7.9603223299524535]
We explore a method to influence or even control the diversity of opinions within a polarised social group.
We leverage the voter model in which users hold binary opinions and repeatedly update their beliefs based on others they connect with.
We inject zealots into a polarised network in order to shift the average opinion towards any target value.
arXiv Detail & Related papers (2020-06-12T15:27:30Z)
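The zealot-injection idea in the last entry can be illustrated with a minimal voter-model simulation. This is a hedged sketch under a mean-field assumption (every node can imitate every other node); the paper's actual network structure and its rule for steering toward a target opinion value are not reproduced here, and all names are illustrative.

```python
import random

def voter_model_with_zealots(opinions, zealots, steps, seed=0):
    """Simulate a mean-field voter model with zealots.

    At each step a randomly chosen non-zealot adopts the current opinion of
    another randomly chosen node. Zealots never update their opinion, which
    is what lets them pull the group average toward their fixed value.

    `opinions` maps node id -> binary opinion (0 or 1);
    `zealots` is the set of node ids that never change.
    """
    rng = random.Random(seed)
    nodes = list(opinions)
    state = dict(opinions)
    for _ in range(steps):
        i = rng.choice(nodes)
        if i in zealots:
            continue  # zealots hold their opinion
        j = rng.choice(nodes)
        state[i] = state[j]
    return state

# Usage: two zealots holding opinion 1 injected into a group split 50/50.
final = voter_model_with_zealots({0: 0, 1: 0, 2: 1, 3: 1}, zealots={2, 3}, steps=200)
```

Over many steps, the fraction of nodes holding the zealots' opinion tends to rise, which is the mechanism the paper exploits to shift the average opinion toward a chosen target.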
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.