Revisiting Algorithmic Audits of TikTok: Poor Reproducibility and Short-term Validity of Findings
- URL: http://arxiv.org/abs/2504.18140v1
- Date: Fri, 25 Apr 2025 07:50:06 GMT
- Title: Revisiting Algorithmic Audits of TikTok: Poor Reproducibility and Short-term Validity of Findings
- Authors: Matej Mosnar, Adam Skurla, Branislav Pecher, Matus Tibensky, Jan Jakubcik, Adrian Bindas, Peter Sakalik, Ivan Srba
- Abstract summary: We study the drawbacks and generalizability of the existing sockpuppeting audits of TikTok recommender systems. Our experiments also reveal that these one-shot audit findings often hold only in the short term.
- Score: 3.682493598086475
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Social media platforms are constantly shifting towards algorithmically curated content based on implicit or explicit user feedback. Regulators, as well as researchers, are calling for systematic algorithmic audits of social media, as this shift can enclose users in filter bubbles and steer them toward more problematic content. An important aspect of such audits is the reproducibility and generalizability of their findings, as this allows researchers to draw verifiable conclusions and to audit potential changes in algorithms over time. In this work, we study the reproducibility of the existing sockpuppeting audits of TikTok recommender systems and the generalizability of their findings. In our efforts to reproduce the previous works, we find multiple challenges stemming from social media platform changes and content evolution, but also from the research works themselves. These drawbacks limit audit reproducibility and require extensive effort, together with inevitable adjustments to the auditing methodology. Our experiments also reveal that these one-shot audit findings often hold only in the short term, implying that the reproducibility and generalizability of audits heavily depend on the methodological choices and on the state of the algorithms and content on the platform. This highlights the importance of reproducible audits that allow us to determine how the situation changes over time.
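The abstract's notion of short-term validity can be made concrete with a toy metric. The sketch below is hypothetical and not taken from the paper; all function names and video IDs are invented. It scores agreement between two sockpuppet audit runs by the Jaccard overlap of the recommended video IDs each run surfaced:

```python
# Hypothetical sketch (not from the paper): one way to quantify how well a
# repeated sockpuppet audit reproduces an earlier run, using the Jaccard
# overlap of the recommended video IDs each run surfaced.

def jaccard_overlap(run_a, run_b):
    """Fraction of video IDs shared between two audit runs."""
    a, b = set(run_a), set(run_b)
    if not a and not b:
        return 1.0  # two empty feeds agree trivially
    return len(a & b) / len(a | b)

# Mock feeds collected by identically configured sockpuppets at two points
# in time; the IDs are invented for illustration.
feed_then = ["v1", "v2", "v3", "v4"]
feed_now = ["v3", "v4", "v5", "v6"]

# A low overlap between runs suggests the original audit's findings had
# only short-term validity.
print(jaccard_overlap(feed_then, feed_now))
```

In practice an audit would compare distributions of content categories rather than raw video IDs, but the same principle applies: a reproducibility claim needs a quantitative agreement measure between runs.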
Related papers
- Robust ML Auditing using Prior Knowledge [3.513282443657269]
Audit manipulation occurs when a platform deliberately alters its answers to a regulator to pass an audit without modifying its answers to other users. This paper introduces a novel approach to manipulation-proof auditing by taking into account the auditor's prior knowledge of the task solved by the platform.
arXiv Detail & Related papers (2025-05-07T20:46:48Z)
- Identifying Aspects in Peer Reviews [61.374437855024844]
We develop a data-driven schema for deriving fine-grained aspects from a corpus of peer reviews.
We introduce a dataset of peer reviews augmented with aspects and show how it can be used for community-level review analysis.
arXiv Detail & Related papers (2025-04-09T14:14:42Z)
- Variations in Relevance Judgments and the Shelf Life of Test Collections [50.060833338921945]
The paradigm shift towards neural retrieval models has affected the characteristics of modern test collections.
We reproduce prior work in the neural retrieval setting, showing that assessor disagreement does not affect system rankings.
We observe that some models substantially degrade with our new relevance judgments, and some have already reached the effectiveness of humans as rankers.
arXiv Detail & Related papers (2025-02-28T10:46:56Z)
- Auditing for Bias in Ad Delivery Using Inferred Demographic Attributes [50.37313459134418]
We study the effects of inference error on auditing for bias in one prominent application: black-box audit of ad delivery using paired ads.
We propose a way to mitigate the inference error when evaluating skew in ad delivery algorithms.
arXiv Detail & Related papers (2024-10-30T18:57:03Z)
- A Game-Theoretic Analysis of Auditing Differentially Private Algorithms with Epistemically Disparate Herd [16.10098472773814]
This study examines the impact of herd audits on algorithm developers using the Stackelberg game approach.
By enhancing transparency and accountability, herd audit contributes to the responsible development of privacy-preserving algorithms.
arXiv Detail & Related papers (2024-04-24T20:34:27Z)
- Under manipulations, are some AI models harder to audit? [2.699900017799093]
We study the feasibility of robust audits in realistic settings, in which models exhibit large capacities.
We first prove a constraining result: if a web platform uses models that may fit any data, no audit strategy can outperform random sampling.
We then relate the manipulability of audits to the capacity of the targeted models, using the Rademacher complexity.
arXiv Detail & Related papers (2024-02-14T09:38:09Z)
- The Decisive Power of Indecision: Low-Variance Risk-Limiting Audits and Election Contestation via Marginal Mark Recording [51.82772358241505]
Risk-limiting audits (RLAs) are techniques for verifying the outcomes of large elections.
We define new families of audits that improve efficiency and offer advances in statistical power.
New audits are enabled by revisiting the standard notion of a cast-vote record so that it can declare multiple possible mark interpretations.
arXiv Detail & Related papers (2024-02-09T16:23:54Z)
- From Chaos to Clarity: Claim Normalization to Empower Fact-Checking [57.024192702939736]
Claim Normalization (aka ClaimNorm) aims to decompose complex and noisy social media posts into more straightforward and understandable forms.
We propose CACN, a pioneering approach that leverages chain-of-thought and claim check-worthiness estimation.
Our experiments demonstrate that CACN outperforms several baselines across various evaluation measures.
arXiv Detail & Related papers (2023-10-22T16:07:06Z)
- Who Audits the Auditors? Recommendations from a field scan of the algorithmic auditing ecosystem [0.971392598996499]
We provide the first comprehensive field scan of the AI audit ecosystem.
We identify emerging best practices as well as methods and tools that are becoming commonplace.
We outline policy recommendations to improve the quality and impact of these audits.
arXiv Detail & Related papers (2023-10-04T01:40:03Z)
- Having your Privacy Cake and Eating it Too: Platform-supported Auditing of Social Media Algorithms for Public Interest [70.02478301291264]
Social media platforms curate access to information and opportunities, and so play a critical role in shaping public discourse.
Prior studies have used black-box methods to show that these algorithms can lead to biased or discriminatory outcomes.
We propose a new method for platform-supported auditing that can meet the goals of the proposed legislation.
arXiv Detail & Related papers (2022-07-18T17:32:35Z)
- Continual Learning for Unsupervised Anomaly Detection in Continuous Auditing of Financial Accounting Data [1.9659095632676094]
International audit standards require the direct assessment of a financial statement's underlying accounting journal entries.
Deep-learning inspired audit techniques emerged to examine vast quantities of journal entry data.
This work proposes a continual anomaly detection framework, designed to learn from a stream of journal entry data, to overcome both challenges.
arXiv Detail & Related papers (2021-12-25T09:21:14Z)
- Everyday algorithm auditing: Understanding the power of everyday users in surfacing harmful algorithmic behaviors [8.360589318502816]
We propose and explore the concept of everyday algorithm auditing, a process in which users detect, understand, and interrogate problematic machine behaviors.
We argue that everyday users are powerful in surfacing problematic machine behaviors that may elude detection via more centrally-organized forms of auditing.
arXiv Detail & Related papers (2021-05-06T21:50:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.