Automated Transparency: A Legal and Empirical Analysis of the Digital Services Act Transparency Database
- URL: http://arxiv.org/abs/2404.02894v2
- Date: Fri, 3 May 2024 12:26:20 GMT
- Title: Automated Transparency: A Legal and Empirical Analysis of the Digital Services Act Transparency Database
- Authors: Rishabh Kaushal, Jacob van de Kerkhof, Catalina Goanta, Gerasimos Spanakis, Adriana Iamnitchi
- Abstract summary: The Digital Services Act (DSA) was adopted on 1 November 2022 with the ambition to set a global example in terms of accountability and transparency.
The DSA emphasizes the need for online platforms to report on their content moderation decisions (`statements of reasons' - SoRs).
SoRs are currently made available in the DSA Transparency Database, launched by the European Commission in September 2023.
This study aims to understand whether the Transparency Database helps the DSA to live up to its transparency promises.
- Score: 6.070078201123852
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The Digital Services Act (DSA) is a much-awaited platform liability reform in the European Union, adopted on 1 November 2022 with the ambition to set a global example in terms of accountability and transparency. Among other obligations, the DSA emphasizes the need for online platforms to report on their content moderation decisions (`statements of reasons' - SoRs), a novel transparency mechanism we refer to as automated transparency in this study. SoRs are currently made available in the DSA Transparency Database, launched by the European Commission in September 2023. The DSA Transparency Database marks a historical achievement in platform governance and allows investigations into the actual transparency gains, both at the level of database structure and at the level of platform compliance. This study aims to understand whether the Transparency Database helps the DSA to live up to its transparency promises. We use legal and empirical arguments to show that, while there are some transparency gains, compliance remains problematic, as the current database structure leaves platforms considerable discretion in their transparency practices. In our empirical study, we analyze a representative sample of the Transparency Database (131m SoRs) submitted in November 2023 to characterise and evaluate platform content moderation practices.
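To give a concrete sense of the kind of characterisation the abstract describes, the sketch below tallies SoR submissions per platform and the share reported as fully automated decisions from a database dump. It is a minimal illustration under stated assumptions, not the authors' pipeline: the file name `sor-sample.csv` is hypothetical, and the column names and value labels are assumptions about the dump schema that should be checked against the official DSA Transparency Database documentation.

```python
# Minimal sketch: characterising a sample of statements of reasons (SoRs).
# Assumptions (not taken from the paper): the dump is a CSV named
# "sor-sample.csv" with columns "platform_name", "automated_decision",
# and "category"; the value "AUTOMATED_DECISION_FULLY" is an assumed label.
import pandas as pd

df = pd.read_csv("sor-sample.csv")

# Number of SoRs submitted per platform.
per_platform = df["platform_name"].value_counts()
print(per_platform.head(10))

# Share of SoRs reported as fully automated decisions, per platform.
automated_share = (
    df.assign(is_automated=df["automated_decision"].eq("AUTOMATED_DECISION_FULLY"))
      .groupby("platform_name")["is_automated"]
      .mean()
      .sort_values(ascending=False)
)
print(automated_share.round(3))

# Most frequently reported violation categories overall.
print(df["category"].value_counts().head(10))
```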
Related papers
- AI data transparency: an exploration through the lens of AI incidents [2.255682336735152]
This research explores the status of public documentation about data practices within AI systems generating public concern.
We highlight a need to develop systematic ways of monitoring AI data transparency that account for the diversity of AI system types.
arXiv Detail & Related papers (2024-09-05T07:23:30Z)
- The Foundation Model Transparency Index v1.1: May 2024 [54.78174872757794]
The October 2023 Index assessed 10 major foundation model developers on 100 transparency indicators.
At the time, developers publicly disclosed very limited information, with an average score of 37 out of 100.
We find that developers now score 58 out of 100 on average, a 21 point improvement over v1.0.
arXiv Detail & Related papers (2024-07-17T18:03:37Z)
- Foundation Model Transparency Reports [61.313836337206894]
We propose Foundation Model Transparency Reports, drawing upon the transparency reporting practices in social media.
We identify 6 design principles given the successes and shortcomings of social media transparency reporting.
Well-designed transparency reports could reduce compliance costs, in part due to overlapping regulatory requirements across different jurisdictions.
arXiv Detail & Related papers (2024-02-26T03:09:06Z)
- The DSA Transparency Database: Auditing Self-reported Moderation Actions by Social Media [0.4597131601929317]
We analyze all 353.12M records submitted by the eight largest social media platforms in the EU during the first 100 days of the database.
Our findings have far-reaching implications for policymakers and scholars across diverse disciplines.
arXiv Detail & Related papers (2023-12-16T00:02:49Z)
- The Foundation Model Transparency Index [55.862805799199194]
The Foundation Model Transparency Index specifies 100 indicators that codify transparency for foundation models.
We score developers in relation to their practices for their flagship foundation model.
Overall, the Index establishes the level of transparency today to drive progress on foundation model governance.
arXiv Detail & Related papers (2023-10-19T17:39:02Z)
- Users are the North Star for AI Transparency [111.5679109784322]
Despite widespread calls for transparent artificial intelligence systems, the term is too overburdened with disparate meanings to express precise policy aims or to orient concrete lines of research.
Part of why this happens is that a clear ideal of AI transparency goes unsaid in this body of work.
We explicitly name such a north star -- transparency that is user-centered, user-appropriate, and honest.
arXiv Detail & Related papers (2023-03-09T18:53:29Z)
- Bridging the Transparency Gap: What Can Explainable AI Learn From the AI Act? [0.8287206589886881]
The European Union has introduced detailed transparency requirements for AI systems.
There is a fundamental difference between XAI and the Act regarding what transparency is.
By comparing the disparate views of XAI and regulation, we arrive at four axes where practical work could bridge the transparency gap.
arXiv Detail & Related papers (2023-02-21T16:06:48Z)
- Trustworthy Transparency by Design [57.67333075002697]
We propose a transparency framework for software design, incorporating research on user trust and experience.
Our framework supports the development of software that builds transparency into its design.
arXiv Detail & Related papers (2021-03-19T12:34:01Z)
- Dimensions of Transparency in NLP Applications [64.16277166331298]
Broader transparency in descriptions of and communication regarding AI systems is widely considered desirable.
Previous work has suggested that a trade-off exists between greater system transparency and user confusion.
arXiv Detail & Related papers (2021-01-02T11:46:17Z)
- TILT: A GDPR-Aligned Transparency Information Language and Toolkit for Practical Privacy Engineering [0.0]
TILT is a transparency information language and toolkit designed to represent and process transparency information.
We provide a detailed analysis of transparency obligations to identify the requirements for a formal transparency language.
On this basis, we specify our formal language and present a respective, fully implemented toolkit.
arXiv Detail & Related papers (2020-12-18T18:45:04Z)