TikTok's recommendations skewed towards Republican content during the 2024 U.S. presidential race
- URL: http://arxiv.org/abs/2501.17831v2
- Date: Wed, 07 May 2025 07:33:29 GMT
- Title: TikTok's recommendations skewed towards Republican content during the 2024 U.S. presidential race
- Authors: Hazem Ibrahim, HyunSeok Daniel Jang, Nouar Aldahoul, Aaron R. Kaufman, Talal Rahwan, Yasir Zaki,
- Abstract summary: TikTok is a major force among social media platforms with over a billion monthly active users worldwide and 170 million in the U.S. Despite concerns, there is scant research investigating TikTok's recommendation algorithm for political biases. We fill this gap by conducting 323 independent algorithmic audit experiments testing partisan content recommendations in the lead-up to the 2024 U.S. presidential elections.
- Score: 1.340487372205839
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: TikTok is a major force among social media platforms with over a billion monthly active users worldwide and 170 million in the United States. The platform's status as a key news source, particularly among younger demographics, raises concerns about its potential influence on politics in the U.S. and globally. Despite these concerns, there is scant research investigating TikTok's recommendation algorithm for political biases. We fill this gap by conducting 323 independent algorithmic audit experiments testing partisan content recommendations in the lead-up to the 2024 U.S. presidential elections. Specifically, we create hundreds of "sock puppet" TikTok accounts in Texas, New York, and Georgia, seeding them with varying partisan content and collecting algorithmic content recommendations for each of them. Collectively, these accounts viewed ~394,000 videos from April 30th to November 11th, 2024, which we label for political and partisan content. Our analysis reveals significant asymmetries in content distribution: Republican-seeded accounts received ~11.8% more party-aligned recommendations compared to their Democratic-seeded counterparts, and Democratic-seeded accounts were exposed to ~7.5% more opposite-party recommendations on average. These asymmetries exist across all three states and persist when accounting for video- and channel-level engagement metrics such as likes, views, shares, comments, and followers, and are driven primarily by negative partisanship content. Our findings provide insights into the inner workings of TikTok's recommendation algorithm during a critical election period, raising fundamental questions about platform neutrality.
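The paper's headline numbers are shares of party-aligned and opposite-party recommendations per seeded account. A minimal sketch of that metric, using made-up labels and toy feeds (the labeling scheme `"R"`/`"D"`/`"neutral"` and the counts are hypothetical, not the paper's data):

```python
from collections import Counter

def recommendation_shares(labels, seed_party):
    """Fractions of recommended videos aligned with and opposed to the seed party.

    labels: one partisan label per recommended video, e.g. "R", "D", or
            "neutral" (a hypothetical labeling scheme).
    seed_party: the party the sock-puppet account was seeded with.
    """
    counts = Counter(labels)
    total = len(labels)
    other = "D" if seed_party == "R" else "R"
    aligned = counts[seed_party] / total
    opposed = counts[other] / total
    return aligned, opposed

# Toy feeds for two hypothetical sock puppets (not the paper's measurements).
rep_feed = ["R"] * 45 + ["D"] * 15 + ["neutral"] * 40
dem_feed = ["D"] * 40 + ["R"] * 20 + ["neutral"] * 40

rep_aligned, rep_opposed = recommendation_shares(rep_feed, "R")
dem_aligned, dem_opposed = recommendation_shares(dem_feed, "D")
```

Comparing `rep_aligned` with `dem_aligned` across many accounts and states is the kind of asymmetry the abstract reports; the paper additionally controls for engagement metrics such as likes, views, and followers.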
Related papers
- The Role of Follow Networks and Twitter's Content Recommender on Partisan Skew and Rumor Exposure during the 2022 U.S. Midterm Election [38.85381861935736]
We use automated accounts to document Twitter's algorithmically curated and reverse chronological timelines throughout the U.S. 2022 midterm election. We find that the algorithmic timeline measurably influences exposure to election content, partisan skew, and the prevalence of low-quality information and election rumors.
arXiv Detail & Related papers (2025-09-11T19:58:19Z) - How candidates evoke identity and issues on TikTok [2.664168105033125]
We examine the final six months before the 2024 US Presidential Election to understand how major campaigns used TikTok. We frame our analysis around two political science theories. The first is the expressive (identity) model, where voters are motivated by their group memberships. We also examine how often candidates attacked opponents, reflecting literature showing attacks are common in politics.
arXiv Detail & Related papers (2025-08-26T13:27:42Z) - Affective Polarization Amongst Swedish Politicians [0.0]
This study investigates affective polarization among Swedish politicians on Twitter from 2021 to 2023.
Negative partisanship becomes significantly more dominant when the in-group is defined at the party level.
Negative partisanship also proves to be a strategic choice for online visibility, attracting 3.18 more likes and 1.69 more retweets on average.
arXiv Detail & Related papers (2025-03-20T14:40:48Z) - Echo Chambers in the Age of Algorithms: An Audit of Twitter's Friend Recommender System [2.8186456204337746]
We conduct an algorithmic audit of Twitter's Who-To-Follow friend recommendation system.
We create automated Twitter accounts that initially follow left and right affiliated U.S. politicians during the 2022 U.S. midterm elections.
We find that while following the recommendation algorithm leads accounts into dense and reciprocal neighborhoods that structurally resemble echo chambers, the recommender also results in less political homogeneity of a user's network.
arXiv Detail & Related papers (2024-04-09T16:12:22Z) - Russo-Ukrainian War: Prediction and explanation of Twitter suspension [47.61306219245444]
This study focuses on the Twitter suspension mechanism and the analysis of shared content and features of user accounts that may lead to this.
We have obtained a dataset containing 107.7M tweets, originating from 9.8 million users, using Twitter API.
Our results reveal scam campaigns taking advantage of trending topics regarding the Russia-Ukrainian conflict for Bitcoin fraud, spam, and advertisement campaigns.
arXiv Detail & Related papers (2023-06-06T08:41:02Z) - Opinion Mining from YouTube Captions Using ChatGPT: A Case Study of Street Interviews Polling the 2023 Turkish Elections [0.0]
We propose a novel approach for opinion mining, utilizing YouTube's auto-generated captions from public interviews as a data source.
We introduce an opinion mining framework using ChatGPT to mass-annotate voting intentions and motivations.
We report that ChatGPT can predict the preferred candidate with 97% accuracy and identify the correct voting motivation out of 13 possible choices with 71% accuracy based on the data collected from 325 interviews.
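The reported 97% and 71% figures are plain agreement rates between LLM annotations and manual labels. A small sketch of that evaluation step, where out-of-vocabulary model outputs count as wrong (the label names below are hypothetical stand-ins for the paper's candidate and 13-choice motivation sets):

```python
def score_annotations(predictions, gold, allowed_labels):
    """Accuracy of LLM annotations against manually assigned labels.

    A prediction outside the allowed label set counts as incorrect,
    mirroring the need to constrain free-text LLM output to a fixed
    choice set before scoring.
    """
    correct = sum(
        p == g and p in allowed_labels
        for p, g in zip(predictions, gold)
    )
    return correct / len(gold)

# Hypothetical motivation labels, not the paper's actual 13 categories.
allowed = {"economy", "security", "ideology"}
preds = ["economy", "security", "undecided?", "ideology"]
gold = ["economy", "security", "economy", "ideology"]

acc = score_annotations(preds, gold, allowed)  # 3 of 4 correct
```

Against 13 possible motivations, a 71% agreement rate is far above the ~7.7% chance baseline, which is what makes the result notable.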
arXiv Detail & Related papers (2023-04-07T01:25:22Z) - Bias or Diversity? Unraveling Fine-Grained Thematic Discrepancy in U.S. News Headlines [63.52264764099532]
We use a large dataset of 1.8 million news headlines from major U.S. media outlets spanning from 2014 to 2022.
We quantify the fine-grained thematic discrepancy related to four prominent topics - domestic politics, economic issues, social issues, and foreign affairs.
Our findings indicate that on domestic politics and social issues, the discrepancy can be attributed to a certain degree of media bias.
arXiv Detail & Related papers (2023-03-28T03:31:37Z) - Computational Assessment of Hyperpartisanship in News Titles [55.92100606666497]
We first adopt a human-guided machine learning framework to develop a new dataset for hyperpartisan news title detection.
Overall the Right media tends to use proportionally more hyperpartisan titles.
We identify three major topics including foreign issues, political systems, and societal issues that are suggestive of hyperpartisanship in news titles.
arXiv Detail & Related papers (2023-01-16T05:56:58Z) - Does Twitter know your political views? POLiTweets dataset and semi-automatic method for political leaning discovery [0.0]
POLiTweets is the first publicly open Polish dataset for political affiliation discovery in a multiparty setup.
It consists of over 147k tweets from almost 10k Polish-writing users annotated automatically, and almost 40k tweets from 166 users annotated manually as a test set.
We used our data to study the aspects of domain shift in the context of topics and the type of content writers - ordinary citizens vs. professional politicians.
arXiv Detail & Related papers (2022-06-14T10:28:23Z) - How Algorithms Shape the Distribution of Political Advertising: Case Studies of Facebook, Google, and TikTok [5.851101657703105]
We analyze a dataset containing over 800,000 ads and 2.5 million videos about the 2020 U.S. presidential election from Facebook, Google, and TikTok.
We conduct the first large scale data analysis of public data to critically evaluate how these platforms amplified or moderated the distribution of political advertisements.
We conclude with recommendations for how to improve the disclosures so that the public can hold the platforms and political advertisers accountable.
arXiv Detail & Related papers (2022-06-09T18:19:30Z) - An Empirical Investigation of Personalization Factors on TikTok [77.34726150561087]
Despite the importance of TikTok's algorithm to the platform's success and content distribution, little work has been done on the empirical analysis of the algorithm.
Using a sock-puppet audit methodology with a custom algorithm developed by us, we tested and analysed the effect of the language and location used to access TikTok.
We identify that the follow-feature has the strongest influence, followed by the like-feature and video view rate.
arXiv Detail & Related papers (2022-01-28T17:40:00Z) - Reaching the bubble may not be enough: news media role in online political polarization [58.720142291102135]
A way of reducing polarization would be by distributing cross-partisan news among individuals with distinct political orientations.
This study investigates whether this holds in the context of nationwide elections in Brazil and Canada.
arXiv Detail & Related papers (2021-09-18T11:34:04Z) - News consumption and social media regulations policy [70.31753171707005]
We analyze two social media that enforced opposite moderation methods, Twitter and Gab, to assess the interplay between news consumption and content regulation.
Our results show that the presence of moderation pursued by Twitter produces a significant reduction of questionable content.
The lack of clear regulation on Gab results in users engaging with both types of content, with a slight preference for questionable content, which may reflect a dissing/endorsement behavior.
arXiv Detail & Related papers (2021-06-07T19:26:32Z) - Political audience diversity and news reliability in algorithmic ranking [54.23273310155137]
We propose using the political diversity of a website's audience as a quality signal.
Using news source reliability ratings from domain experts and web browsing data from a diverse sample of 6,890 U.S. citizens, we first show that websites with more extreme and less politically diverse audiences have lower journalistic standards.
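One natural way to operationalize audience political diversity is the Shannon entropy of a site's partisan readership shares; this is a hedged sketch of that idea, not necessarily the measure the paper itself uses, and the audience figures are invented for illustration:

```python
import math

def audience_diversity(shares):
    """Shannon entropy (in bits) of a site's partisan audience shares.

    shares: fractions of the audience in each partisan bucket, summing
            to 1. Higher entropy means a more politically diverse audience.
    """
    return -sum(p * math.log2(p) for p in shares if p > 0)

# Hypothetical audience compositions (not the paper's measurements).
balanced = [0.5, 0.5]      # readership evenly split left/right
one_sided = [0.95, 0.05]   # heavily partisan readership

div_balanced = audience_diversity(balanced)    # maximal for two buckets
div_one_sided = audience_diversity(one_sided)  # much lower
```

Under this framing, the paper's finding is that low-diversity (low-entropy) audiences correlate with lower journalistic standards, so the diversity score can serve as a ranking signal.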
arXiv Detail & Related papers (2020-07-16T02:13:55Z) - Neutral bots probe political bias on social media [7.41821251168122]
We deploy neutral social bots who start following different news sources on Twitter to probe distinct biases emerging from platform mechanisms versus user interactions.
We find no strong or consistent evidence of political bias in the news feed.
The interactions of conservative accounts are skewed toward the right, whereas liberal accounts are exposed to moderate content shifting their experience toward the political center.
arXiv Detail & Related papers (2020-05-17T01:20:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed papers (including all information) and is not responsible for any consequences.