Echo Chambers in the Age of Algorithms: An Audit of Twitter's Friend Recommender System
- URL: http://arxiv.org/abs/2404.06422v1
- Date: Tue, 9 Apr 2024 16:12:22 GMT
- Title: Echo Chambers in the Age of Algorithms: An Audit of Twitter's Friend Recommender System
- Authors: Kayla Duskin, Joseph S. Schafer, Jevin D. West, Emma S. Spiro
- Abstract summary: We conduct an algorithmic audit of Twitter's Who-To-Follow friend recommendation system.
We create automated Twitter accounts that initially follow left and right affiliated U.S. politicians during the 2022 U.S. midterm elections.
We find that while following the recommendation algorithm leads accounts into dense and reciprocal neighborhoods that structurally resemble echo chambers, the recommender also results in less political homogeneity of a user's network.
- Score: 2.8186456204337746
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The presence of political misinformation and ideological echo chambers on social media platforms is concerning given the important role that these sites play in the public's exposure to news and current events. Algorithmic systems employed on these platforms are presumed to play a role in these phenomena, but little is known about their mechanisms and effects. In this work, we conduct an algorithmic audit of Twitter's Who-To-Follow friend recommendation system, the first empirical audit that investigates the impact of this algorithm in-situ. We create automated Twitter accounts that initially follow left and right affiliated U.S. politicians during the 2022 U.S. midterm elections and then grow their information networks using the platform's recommender system. We pair the experiment with an observational study of Twitter users who already follow the same politicians. Broadly, we find that while following the recommendation algorithm leads accounts into dense and reciprocal neighborhoods that structurally resemble echo chambers, the recommender also results in less political homogeneity of a user's network compared to accounts growing their networks through social endorsement. Furthermore, accounts that exclusively followed users recommended by the algorithm had fewer opportunities to encounter content centered on false or misleading election narratives compared to choosing friends based on social endorsement.
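The audit's headline comparison rests on structural measures of each account's neighborhood (density, reciprocity) and on the political homogeneity of its follow network. Below is a minimal sketch of how such metrics might be computed with networkx; the toy graph, the node-level `leaning` attribute, and the homogeneity definition (share of followed accounts sharing the ego's leaning) are illustrative assumptions, not the authors' exact measures.

```python
# Sketch: structural and ideological metrics for a follow network.
# Edges point from follower to friend; the "leaning" attribute is illustrative.
import networkx as nx

G = nx.DiGraph()
G.add_nodes_from([
    ("ego", {"leaning": +1}),
    ("a", {"leaning": +1}),
    ("b", {"leaning": +1}),
    ("c", {"leaning": -1}),
])
G.add_edges_from([("ego", "a"), ("ego", "b"), ("ego", "c"),
                  ("a", "ego"), ("a", "b"), ("b", "a")])

def neighborhood_metrics(G: nx.DiGraph, ego: str) -> dict:
    """Density, reciprocity, and leaning homogeneity of ego's followed accounts."""
    friends = list(G.successors(ego))          # accounts the ego follows
    sub = G.subgraph(friends + [ego])
    same = sum(G.nodes[f]["leaning"] == G.nodes[ego]["leaning"] for f in friends)
    return {
        "density": nx.density(sub),            # how tightly knit the neighborhood is
        "reciprocity": nx.reciprocity(sub),    # share of mutual follow ties
        "homogeneity": same / len(friends) if friends else float("nan"),
    }

print(neighborhood_metrics(G, "ego"))
```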
Related papers
- Incentivizing News Consumption on Social Media Platforms Using Large Language Models and Realistic Bot Accounts [4.06613683722116]
This project examines how to enhance users' exposure to and engagement with verified and ideologically balanced news on Twitter.
We created 28 bots that posted contextual replies to users tweeting about sports, entertainment, or lifestyle.
To test differential effects by gender of the bots, treated users were randomly assigned to receive responses by bots presented as female or male.
We find that treated users followed more news accounts, and that users in the female-bot treatment were more likely to like news content than the control group.
arXiv Detail & Related papers (2024-03-20T07:44:06Z)
- A User-Driven Framework for Regulating and Auditing Social Media [94.70018274127231]
We propose that algorithmic filtering should be regulated with respect to a flexible, user-driven baseline.
We require that the feeds a platform filters contain informational content "similar" to that of their respective baseline feeds.
We present an auditing procedure that checks whether a platform honors this requirement.
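As a rough illustration of the kind of check such an audit might run, the sketch below compares the topic distribution of a filtered feed against a user-chosen baseline feed and flags the platform when the two diverge beyond a tolerance. The topic labels, the distance measure (total variation), and the threshold are illustrative assumptions, not the paper's definition of "similar".

```python
# Sketch: check whether a filtered feed stays "similar" to a baseline feed.
# Topic labels, counting scheme, and tolerance are illustrative assumptions.
from collections import Counter

def topic_distribution(feed: list[str], topics: list[str]) -> list[float]:
    """Normalized share of each topic label in a feed of labeled items."""
    counts = Counter(feed)
    total = sum(counts.values()) or 1
    return [counts.get(t, 0) / total for t in topics]

def passes_audit(filtered: list[str], baseline: list[str],
                 topics: list[str], tolerance: float = 0.2) -> bool:
    """Pass if the total-variation distance between the feeds is within tolerance."""
    p = topic_distribution(filtered, topics)
    q = topic_distribution(baseline, topics)
    tv_distance = 0.5 * sum(abs(a - b) for a, b in zip(p, q))
    return tv_distance <= tolerance

topics = ["politics", "sports", "science"]
baseline_feed = ["politics", "sports", "science", "politics"]
filtered_feed = ["politics", "politics", "politics", "sports"]
print(passes_audit(filtered_feed, baseline_feed, topics))
```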
arXiv Detail & Related papers (2023-04-20T17:53:34Z)
- Design and analysis of tweet-based election models for the 2021 Mexican legislative election [55.41644538483948]
We use a dataset of 15 million election-related tweets in the six months preceding election day.
We find that models using data with geographical attributes determine the results of the election with better precision and accuracy than conventional polling methods.
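A toy sketch of the general idea, not the authors' model: estimating per-region vote shares from counts of party mentions in geotagged tweets and aggregating them with region weights. The mention counts and region weights below are made-up illustrative inputs.

```python
# Sketch: toy tweet-based vote-share estimate aggregated over regions.
# Mention counts and region weights are illustrative, not real data.
mentions = {  # party mention counts per region from geotagged tweets
    "region_a": {"party_x": 1200, "party_y": 800},
    "region_b": {"party_x": 300, "party_y": 700},
}
weights = {"region_a": 0.6, "region_b": 0.4}  # e.g. share of registered voters

def national_share(party: str) -> float:
    """Weighted average of regional tweet-based vote-share estimates."""
    total = 0.0
    for region, counts in mentions.items():
        regional_share = counts[party] / sum(counts.values())
        total += weights[region] * regional_share
    return total

for party in ("party_x", "party_y"):
    print(party, round(national_share(party), 3))
```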
arXiv Detail & Related papers (2023-01-02T12:40:05Z)
- Having your Privacy Cake and Eating it Too: Platform-supported Auditing of Social Media Algorithms for Public Interest [70.02478301291264]
Social media platforms curate access to information and opportunities, and so play a critical role in shaping public discourse.
Prior studies have used black-box methods to show that these algorithms can lead to biased or discriminatory outcomes.
We propose a new method for platform-supported auditing that can meet the goals of the proposed legislation.
arXiv Detail & Related papers (2022-07-18T17:32:35Z)
- Tweets2Stance: Users stance detection exploiting Zero-Shot Learning Algorithms on Tweets [0.06372261626436675]
The aim of the study is to predict the stance of a party p toward each statement s by exploiting what the party's Twitter account wrote on Twitter.
Results obtained from multiple experiments show that Tweets2Stance can correctly predict the stance with a general minimum MAE of 1.13, which is a great achievement considering the task complexity.
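The sketch below shows one way a zero-shot classifier could be pointed at a party's tweets to score agreement with a statement, roughly in the spirit of Tweets2Stance; the Hugging Face zero-shot-classification pipeline, the model choice, the hypothesis template, and the mapping from entailment scores to a 1-5 stance level are assumptions, not the paper's pipeline.

```python
# Sketch: zero-shot stance scoring of a party's tweets against a statement.
# The pipeline, model choice, and score-to-stance mapping are assumptions.
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

statement = "Public transport should be free."
party_tweets = [
    "We will invest heavily in free buses and trams for everyone.",
    "Mobility is a right, not a privilege.",
]

labels = ["agrees with the statement", "disagrees with the statement"]
scores = []
for tweet in party_tweets:
    result = classifier(tweet, candidate_labels=labels,
                        hypothesis_template="This text {}: " + statement)
    scores.append(result["scores"][result["labels"].index(labels[0])])

# Map mean agreement to a 1-5 stance level (1 = disagree, 5 = agree).
mean_agreement = sum(scores) / len(scores)
stance_level = 1 + round(mean_agreement * 4)
print(mean_agreement, stance_level)
```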
arXiv Detail & Related papers (2022-04-22T14:00:11Z)
- Identification of Twitter Bots based on an Explainable ML Framework: the US 2020 Elections Case Study [72.61531092316092]
This paper focuses on the design of a novel system for identifying Twitter bots based on labeled Twitter data.
A supervised machine learning (ML) framework is adopted, using an Extreme Gradient Boosting (XGBoost) algorithm.
Our study also deploys Shapley Additive Explanations (SHAP) for explaining the ML model predictions.
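A minimal sketch of that kind of pipeline follows: an XGBoost classifier trained on synthetic account features and explained with SHAP. The feature names, labels, and data are placeholders, not the paper's dataset or feature set.

```python
# Sketch: XGBoost bot classifier with SHAP explanations on synthetic data.
# Feature names and labels are placeholders, not the paper's dataset.
import numpy as np
import shap
import xgboost as xgb
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
feature_names = ["tweets_per_day", "followers", "friends", "account_age_days"]
X = rng.normal(size=(500, len(feature_names)))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = xgb.XGBClassifier(n_estimators=100, max_depth=3)
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))

# SHAP attributes each prediction to the individual input features.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)
mean_abs = np.abs(shap_values).mean(axis=0)
for name, value in sorted(zip(feature_names, mean_abs), key=lambda t: -t[1]):
    print(name, round(float(value), 3))
```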
arXiv Detail & Related papers (2021-12-08T14:12:24Z)
- SOK: Seeing and Believing: Evaluating the Trustworthiness of Twitter Users [4.609388510200741]
Currently, there is no automated way of determining which news or users are credible and which are not.
In this work, we created a model which analysed the behaviour of 50,000 politicians on Twitter.
We classified these political Twitter users as either trusted or untrusted using random forest, multilayer perceptron, and support vector machine classifiers.
arXiv Detail & Related papers (2021-07-16T17:39:32Z)
- News consumption and social media regulations policy [70.31753171707005]
We analyze two social media that enforced opposite moderation methods, Twitter and Gab, to assess the interplay between news consumption and content regulation.
Our results show that the moderation pursued by Twitter produces a significant reduction of questionable content.
The lack of clear regulation on Gab leads users to engage with both types of content, with a slight preference for questionable content, which may reflect a dissing/endorsement behavior.
arXiv Detail & Related papers (2021-06-07T19:26:32Z)
- Inferring Political Preferences from Twitter [0.0]
Political sentiment analysis of social media helps political strategists scrutinize the performance of a party or candidate.
During elections, social networks are flooded with blogs, chats, debates, and discussions about the prospects of political parties and politicians.
In this work, we identify the political inclination of opinions expressed in tweets by modelling the task as a text classification problem using classical machine learning.
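A compact sketch of that classical-ML formulation: TF-IDF features feeding a linear classifier over tweets labeled by political leaning. The example tweets, labels, and model choice (logistic regression) are illustrative assumptions, not the paper's exact setup.

```python
# Sketch: political-leaning tweet classification with TF-IDF + logistic regression.
# Tweets and labels are illustrative placeholders, not the paper's data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

tweets = [
    "Cut taxes and shrink government spending now",
    "We must expand public healthcare for all",
    "Secure the border and support small business",
    "Raise the minimum wage and fund green energy",
]
labels = ["right", "left", "right", "left"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(tweets, labels)

print(model.predict(["Invest in renewable energy and universal childcare"]))
```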
arXiv Detail & Related papers (2020-07-21T05:20:43Z)
- Echo Chambers on Social Media: A comparative analysis [64.2256216637683]
We introduce an operational definition of echo chambers and perform a massive comparative analysis on 1B pieces of content produced by 1M users on four social media platforms.
We infer users' leanings on controversial topics and reconstruct their interaction networks by analyzing different features.
We find support for the hypothesis that platforms implementing news feed algorithms, such as Facebook, may elicit the emergence of echo chambers.
arXiv Detail & Related papers (2020-04-20T20:00:27Z)