A Year of the DSA Transparency Database: What it (Does Not) Reveal About Platform Moderation During the 2024 European Parliament Election
- URL: http://arxiv.org/abs/2504.06976v1
- Date: Wed, 09 Apr 2025 15:31:01 GMT
- Title: A Year of the DSA Transparency Database: What it (Does Not) Reveal About Platform Moderation During the 2024 European Parliament Election
- Authors: Gautam Kishore Shahi, Benedetta Tessa, Amaury Trujillo, Stefano Cresci
- Abstract summary: We analyze 1.58 billion self-reported moderation actions taken by eight large social media platforms. Our findings reveal a lack of adaptation in moderation strategies. These results highlight the limitations of current self-regulatory approaches.
- Score: 5.170641855075114
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Social media platforms face heightened risks during major political events; yet, how platforms adapt their moderation practices in response remains unclear. The Digital Services Act Transparency Database offers an unprecedented opportunity to systematically study content moderation at scale, enabling researchers and policymakers to assess platforms' compliance and effectiveness. Herein, we analyze 1.58 billion self-reported moderation actions taken by eight large social media platforms during an extended period of eight months surrounding the 2024 European Parliament elections. Our findings reveal a lack of adaptation in moderation strategies, as platforms did not exhibit significant changes in their enforcement behaviors surrounding the elections. This raises concerns about whether platforms adapted their moderation practices at all, or if structural limitations of the database concealed possible adjustments. Moreover, we found that noted transparency and accountability issues persist nearly a year after initial concerns were raised. These results highlight the limitations of current self-regulatory approaches and underscore the need for stronger enforcement and data access mechanisms to ensure that online platforms uphold their responsibility in safeguarding democratic processes.
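For context on what such an analysis involves: the Transparency Database publishes daily dumps of statements of reasons (SoRs) that can be aggregated per platform and day, and then compared across the pre- and post-election periods. Below is a minimal sketch of that kind of volume comparison, assuming a hypothetical local CSV extract with `platform_name` and `created_at` columns (the real dump schema is richer); it is an illustration, not the authors' pipeline.

```python
# Minimal sketch: count daily moderation actions per platform around the
# 2024 EP elections (6-9 June 2024) from a DSA Transparency Database extract.
# Assumes a hypothetical local CSV with `platform_name` and `created_at`.
import pandas as pd

sors = pd.read_csv(
    "sors_sample.csv",  # hypothetical local extract of the daily dumps
    usecols=["platform_name", "created_at"],
    parse_dates=["created_at"],
)

# Actions per platform per calendar day.
daily = (
    sors.groupby(["platform_name", sors["created_at"].dt.date])
        .size()
        .rename("n_actions")
        .reset_index()
)

# Compare average daily volume before vs. from the election onward.
election_start = pd.Timestamp("2024-06-06").date()
daily["period"] = daily["created_at"].map(
    lambda d: "pre-election" if d < election_start else "election-and-after"
)
print(daily.groupby(["platform_name", "period"])["n_actions"].mean())
```

A real analysis would need the full dumps (hundreds of millions of records per month) and more careful handling of submission delays and platform-specific reporting conventions.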
Related papers
- On the Use of Proxies in Political Ad Targeting [49.61009579554272]
We show that major political advertisers circumvented mitigations by targeting proxy attributes.
Our findings have crucial implications for the ongoing discussion on the regulation of political advertising.
arXiv Detail & Related papers (2024-10-18T17:15:13Z)
- MisinfoEval: Generative AI in the Era of "Alternative Facts" [50.069577397751175]
We introduce a framework for generating and evaluating large language model (LLM) based misinformation interventions.
We present (1) an experiment with a simulated social media environment to measure the effectiveness of misinformation interventions, and (2) a second experiment with personalized explanations tailored to the demographics and beliefs of users.
Our findings confirm that LLM-based interventions are highly effective at correcting user behavior.
arXiv Detail & Related papers (2024-10-13T18:16:50Z)
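As an illustration of the general idea (not the authors' MisinfoEval framework), a personalized LLM-based correction can be generated with a single prompted call. The model name and prompt template below are placeholders.

```python
# Minimal sketch of an LLM-generated, personalized misinformation
# correction, in the spirit of MisinfoEval (not the authors' code).
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def correct(claim: str, user_profile: str) -> str:
    """Ask the model for a short correction tailored to a user profile."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "You write brief, respectful fact-checking notes "
                        f"tailored to this reader: {user_profile}"},
            {"role": "user",
             "content": f"Explain why this claim is misleading: {claim}"},
        ],
    )
    return response.choices[0].message.content

print(correct("5G towers spread viruses.", "a 60-year-old rural reader"))
```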
- Investigating LLMs as Voting Assistants via Contextual Augmentation: A Case Study on the European Parliament Elections 2024 [22.471701390730185]
In light of the 2024 European Parliament elections, we investigate whether LLMs can be used as Voting Advice Applications (VAAs).
We evaluate the MISTRAL and MIXTRAL models, measuring their accuracy in predicting the stance of political parties based on the latest "EU and I" voting assistance questionnaire.
We find that MIXTRAL is highly accurate, with 82% accuracy on average, albeit with significant performance disparities across political groups.
arXiv Detail & Related papers (2024-07-11T13:29:28Z)
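Scoring such a voting-assistant setup reduces to comparing predicted stances against the questionnaire's ground truth. A toy sketch with fabricated labels (not the paper's data):

```python
# Minimal sketch: score an LLM's party-stance predictions against a
# VAA questionnaire. Keys and labels below are illustrative only.
gold = {  # (party, statement_id) -> stance in {"agree", "disagree", "neutral"}
    ("PartyA", 1): "agree",
    ("PartyA", 2): "disagree",
    ("PartyB", 1): "neutral",
}
predicted = {
    ("PartyA", 1): "agree",
    ("PartyA", 2): "neutral",
    ("PartyB", 1): "neutral",
}

correct = sum(predicted[k] == v for k, v in gold.items())
print(f"accuracy: {correct / len(gold):.0%}")  # -> accuracy: 67%
```

Per-group disparities, as reported in the paper, would follow from computing the same ratio within each political group.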
- Automated Transparency: A Legal and Empirical Analysis of the Digital Services Act Transparency Database [6.070078201123852]
The Digital Services Act (DSA) was adopted on 1 November 2022 with the ambition to set a global example in terms of accountability and transparency.
The DSA emphasizes the need for online platforms to report on their content moderation decisions ('statements of reasons', or SoRs).
SoRs are currently made available in the DSA Transparency Database, launched by the European Commission in September 2023.
This study aims to understand whether the Transparency Database helps the DSA to live up to its transparency promises.
arXiv Detail & Related papers (2024-04-03T17:51:20Z)
- The DSA Transparency Database: Auditing Self-reported Moderation Actions by Social Media [0.4597131601929317]
We analyze all 353.12M records submitted by the eight largest social media platforms in the EU during the first 100 days of the database.
Our findings have far-reaching implications for policymakers and scholars across diverse disciplines.
arXiv Detail & Related papers (2023-12-16T00:02:49Z)
- Explaining by Imitating: Understanding Decisions by Interpretable Policy Learning [72.80902932543474]
Understanding human behavior from observed data is critical for transparency and accountability in decision-making.
Consider real-world settings such as healthcare, in which modeling a decision-maker's policy is challenging.
We propose a data-driven representation of decision-making behavior that is transparent by design, accommodates partial observability, and operates completely offline.
arXiv Detail & Related papers (2023-10-28T13:06:14Z)
- Auditing Recommender Systems -- Putting the DSA into practice with a risk-scenario-based approach [5.875955066693127]
The European Union's Digital Services Act requires platforms to make algorithmic systems more transparent and to follow due diligence obligations.
These requirements constitute an important legislative step towards mitigating the systemic risks posed by online platforms.
But the DSA lacks concrete guidelines to operationalise a viable audit process.
This void could foster the spread of 'audit-washing', that is, platforms exploiting audits to legitimise their practices and neglect responsibility.
arXiv Detail & Related papers (2023-02-09T10:48:37Z)
- Having your Privacy Cake and Eating it Too: Platform-supported Auditing of Social Media Algorithms for Public Interest [70.02478301291264]
Social media platforms curate access to information and opportunities, and so play a critical role in shaping public discourse.
Prior studies have used black-box methods to show that the underlying curation algorithms can lead to biased or discriminatory outcomes.
We propose a new method for platform-supported auditing that can meet the goals of the proposed legislation.
arXiv Detail & Related papers (2022-07-18T17:32:35Z)
- Anomaly Detection and Automated Labeling for Voter Registration File Changes [0.0]
Voter eligibility in United States elections is determined by a patchwork of state databases containing information about which citizens are eligible to vote.
Monitoring changes to Voter Registration Files (VRFs) is crucial, since a malicious actor wishing to disrupt the democratic process in the US would be well-advised to manipulate the contents of these files.
We present a set of methods that make use of machine learning to ease the burden on analysts and administrators in protecting voter rolls.
arXiv Detail & Related papers (2021-06-16T21:48:31Z)
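As a hedged illustration of the general approach (the paper's own models, features, and labeling pipeline are not reproduced here), an off-the-shelf detector such as scikit-learn's IsolationForest can flag unusual change patterns in hypothetical per-county VRF features:

```python
# Minimal sketch: flag anomalous voter-registration-file changes with
# an off-the-shelf detector. Features and data are fabricated.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Hypothetical per-county daily features: [n_added, n_removed, n_address_edits]
normal = rng.poisson(lam=[50, 40, 30], size=(500, 3))
spike = np.array([[50, 4000, 30]])   # a suspicious mass removal
X = np.vstack([normal, spike])

detector = IsolationForest(contamination=0.01, random_state=0).fit(X)
labels = detector.predict(X)          # -1 = anomaly, 1 = normal
print("flagged rows:", np.where(labels == -1)[0])
```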
- Leveraging Administrative Data for Bias Audits: Assessing Disparate Coverage with Mobility Data for COVID-19 Policy [61.60099467888073]
We show how linking administrative data can enable auditing mobility data for bias.
We show that older and non-white voters are less likely to be captured by mobility data.
We show that allocating public health resources based on such mobility data could disproportionately harm high-risk elderly and minority groups.
arXiv Detail & Related papers (2020-11-14T02:04:14Z)
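The core computation in such a bias audit is a per-group coverage rate obtained by linking mobility records to administrative ones. A toy sketch with fabricated counts:

```python
# Minimal sketch: per-group coverage of a mobility dataset relative to
# administrative voter records. All numbers below are fabricated.
import pandas as pd

linked = pd.DataFrame({
    "group":       ["white", "non-white", "under_65", "over_65"],
    "registered":  [10_000, 4_000, 9_000, 5_000],  # voters in admin data
    "in_mobility": [6_500, 1_900, 6_300, 2_100],   # also seen in mobility data
})
linked["coverage"] = linked["in_mobility"] / linked["registered"]
print(linked[["group", "coverage"]])
# Systematically lower coverage for older and non-white groups would
# indicate the kind of disparate coverage the paper reports.
```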
- Verification of indefinite-horizon POMDPs [63.6726420864286]
This paper considers the verification problem for partially observable MDPs.
We present an abstraction-refinement framework extending previous instantiations of the Lovejoy approach.
arXiv Detail & Related papers (2020-06-30T21:01:52Z)
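For readers unfamiliar with the setting, the object being analyzed is a belief-state process. The sketch below shows only the standard Bayesian belief update for a toy two-state POMDP, not the paper's abstraction-refinement procedure:

```python
# Minimal sketch: Bayesian belief update in a toy 2-state POMDP with a
# fixed action. Matrices are illustrative.
import numpy as np

T = np.array([[0.9, 0.1],    # T[s, s'] = P(s' | s, fixed action)
              [0.2, 0.8]])
O = np.array([[0.8, 0.2],    # O[s', z] = P(z | s')
              [0.3, 0.7]])

def update(belief: np.ndarray, obs: int) -> np.ndarray:
    """One step of belief propagation: predict, then condition on obs."""
    predicted = belief @ T           # distribution over s' after transition
    unnorm = predicted * O[:, obs]   # weight by observation likelihood
    return unnorm / unnorm.sum()

b = np.array([0.5, 0.5])
for z in [0, 0, 1]:
    b = update(b, z)
print(b)  # posterior over the two states
```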
This list is automatically generated from the titles and abstracts of the papers on this site.