Doing Audits Right? The Role of Sampling and Legal Content Analysis in Systemic Risk Assessments and Independent Audits in the Digital Services Act
- URL: http://arxiv.org/abs/2505.03601v1
- Date: Tue, 06 May 2025 15:02:54 GMT
- Title: Doing Audits Right? The Role of Sampling and Legal Content Analysis in Systemic Risk Assessments and Independent Audits in the Digital Services Act
- Authors: Marie-Therese Sekwenz, Rita Gsenger, Scott Dahlgren, Ben Wagner,
- Abstract summary: The European Union's Digital Services Act (DSA) requires online platforms to undergo internal and external audits. This article evaluates the strengths and limitations of different qualitative and quantitative methods for auditing systemic risks. We argue that content sampling, combined with legal and empirical analysis, offers a viable method for risk-specific audits.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: A central requirement of the European Union's Digital Services Act (DSA) is that online platforms undergo internal and external audits. A key component of these audits is the assessment of systemic risks, including the dissemination of illegal content, threats to fundamental rights, impacts on democratic processes, and gender-based violence. The DSA Delegated Regulation outlines how such audits should be conducted, setting expectations for both platforms and auditors. This article evaluates the strengths and limitations of different qualitative and quantitative methods for auditing these systemic risks and proposes a mixed-method approach for DSA compliance. We argue that content sampling, combined with legal and empirical analysis, offers a viable method for risk-specific audits. First, we examine relevant legal provisions on sample selection for audit purposes. We then assess sampling techniques and methods suitable for detecting systemic risks, focusing on how representativeness can be understood across disciplines. Finally, we review initial systemic risk assessment reports submitted by platforms, analyzing their testing and sampling methodologies. By proposing a structured, mixed-method approach tailored to specific risk categories and platform characteristics, this article addresses the challenge of evidence-based audits under the DSA. Our contribution emphasizes the need for adaptable, context-sensitive auditing strategies and adds to the emerging field of DSA compliance research.
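One way to read the abstract's proposal of content sampling for risk-specific audits is as stratified random sampling over platform content. The sketch below is an illustration only, not the paper's method; the item fields, the strata, and the proportional-allocation rule are all assumptions:

```python
import random
from collections import defaultdict

def stratified_sample(items, stratum_of, total_n, seed=0):
    """Draw a proportionally allocated stratified random sample.

    items: list of content items; stratum_of: maps an item to its
    stratum label (e.g. content type or language); total_n: overall
    sample size. Each non-empty stratum contributes at least one item.
    """
    rng = random.Random(seed)
    strata = defaultdict(list)
    for item in items:
        strata[stratum_of(item)].append(item)
    sample = []
    for members in strata.values():
        # Allocate proportionally to stratum size.
        n = max(1, round(total_n * len(members) / len(items)))
        sample.extend(rng.sample(members, min(n, len(members))))
    return sample

# Hypothetical example: posts stratified by content category.
posts = [{"id": i, "category": c}
         for i, c in enumerate(["ad", "ugc", "ugc", "news"] * 250)]
audit_set = stratified_sample(posts, lambda p: p["category"], total_n=100)
```

Proportional allocation keeps the sample representative of stratum sizes, while the per-stratum floor ensures rare but risk-relevant categories are never absent from the audit set.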
Related papers
- From Reports to Reality: Testing Consistency in Instagram's Digital Services Act Compliance Data [0.0]
The Digital Services Act (DSA) introduces rules for content moderation and platform governance in the European Union. This study examined compliance with DSA requirements, focusing on Instagram. We develop and apply a multi-level consistency framework to evaluate DSA compliance.
arXiv Detail & Related papers (2025-07-02T15:13:25Z)
- Determining Absence of Unreasonable Risk: Approval Guidelines for an Automated Driving System Deployment [1.2499098866326646]
This paper provides an overview of how the determination of absence of unreasonable risk can be operationalized. Readiness determination is, at its core, a risk assessment process. The paper proposes methodological criteria to ground the readiness review process for an ADS release.
arXiv Detail & Related papers (2025-05-15T00:52:09Z)
- AILuminate: Introducing v1.0 of the AI Risk and Reliability Benchmark from MLCommons [62.374792825813394]
This paper introduces AILuminate v1.0, the first comprehensive industry-standard benchmark for assessing AI-product risk and reliability. The benchmark evaluates an AI system's resistance to prompts designed to elicit dangerous, illegal, or undesirable behavior in 12 hazard categories.
arXiv Detail & Related papers (2025-02-19T05:58:52Z)
- Assessing the Auditability of AI-integrating Systems: A Framework and Learning Analytics Case Study [0.0]
We argue that the efficacy of an audit depends on the auditability of the audited system.
We present a framework for assessing the auditability of AI-integrating systems.
arXiv Detail & Related papers (2024-10-29T13:43:21Z)
- From Transparency to Accountability and Back: A Discussion of Access and Evidence in AI Auditing [1.196505602609637]
Audits can take many forms, including pre-deployment risk assessments, ongoing monitoring, and compliance testing.
There are many operational challenges to AI auditing that complicate its implementation.
We cast auditing as a natural hypothesis test, draw parallels between hypothesis testing and legal procedure, and argue that this framing provides clear and interpretable guidance on audit implementation.
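The hypothesis-test framing summarized above can be made concrete with a toy example: take the platform's tolerated violation rate as the null hypothesis and compute an exact one-sided binomial p-value from an audit sample. This is a generic illustration of the framing, not the paper's procedure, and the sample sizes and threshold are assumptions:

```python
from math import comb

def audit_p_value(n, k, p0):
    """One-sided exact binomial test for an audit sample.

    H0: the true violation rate is at most p0. Given k violations
    among n sampled items, return P(X >= k | p = p0); a small value
    is evidence against compliance with the p0 threshold.
    """
    return sum(comb(n, i) * p0**i * (1 - p0)**(n - i)
               for i in range(k, n + 1))

# Hypothetical example: 500 sampled items, 12 found violating,
# against a tolerated violation rate of 1%.
p = audit_p_value(500, 12, 0.01)
```

The legal-procedure analogy then maps the risk of a false compliance finding onto the test's significance level.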
arXiv Detail & Related papers (2024-10-07T06:15:46Z)
- An evidence-based methodology for human rights impact assessment (HRIA) in the development of AI data-intensive systems [49.1574468325115]
We show that human rights already underpin the decisions in the field of data use.
This work presents a methodology and a model for a Human Rights Impact Assessment (HRIA).
The proposed methodology is tested in concrete case-studies to prove its feasibility and effectiveness.
arXiv Detail & Related papers (2024-07-30T16:27:52Z)
- The Decisive Power of Indecision: Low-Variance Risk-Limiting Audits and Election Contestation via Marginal Mark Recording [51.82772358241505]
Risk-limiting audits (RLAs) are techniques for verifying the outcomes of large elections.
We define new families of audits that improve efficiency and offer advances in statistical power.
New audits are enabled by revisiting the standard notion of a cast-vote record so that it can declare multiple possible mark interpretations.
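For background, a classic ballot-polling risk-limiting audit (the BRAVO procedure of Lindeman and Stark, a standard baseline in this literature) can be sketched in a few lines. This is a simplified two-candidate illustration, not the paper's new audit family, and the example numbers are assumptions:

```python
def bravo_risk(sample, winner_share):
    """BRAVO-style ballot-polling risk measure (simplified sketch).

    sample: sequence of 'w' (ballot for reported winner) / 'l'
    (ballot for reported loser) interpretations, drawn at random;
    winner_share: reported winner share among the two candidates.
    Returns min(1, 1/T), where T is the Wald likelihood ratio of
    the reported outcome against a tie.
    """
    t = 1.0
    for ballot in sample:
        # Each winner ballot multiplies T by 2*s, each loser
        # ballot by 2*(1-s), per the sequential probability ratio test.
        t *= 2 * winner_share if ballot == "w" else 2 * (1 - winner_share)
    return min(1.0, 1.0 / t)

# Hypothetical example: reported 60% winner share, and a drawn
# sample of 100 ballots that matches the reported proportions.
risk = bravo_risk("w" * 60 + "l" * 40, 0.60)
```

The audit stops once the measured risk falls below a preset risk limit; otherwise sampling continues, possibly to a full hand count.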
arXiv Detail & Related papers (2024-02-09T16:23:54Z)
- A Framework for Assurance Audits of Algorithmic Systems [2.2342503377379725]
We propose the criterion audit as an operationalizable compliance and assurance external audit framework.
We argue that AI audits should similarly provide assurance to their stakeholders about AI organizations' ability to govern their algorithms in ways that mitigate harms and uphold human values.
We conclude by offering a critical discussion on the benefits, inherent limitations, and implementation challenges of applying practices of the more mature financial auditing industry to AI auditing.
arXiv Detail & Related papers (2024-01-26T14:38:54Z)
- Having your Privacy Cake and Eating it Too: Platform-supported Auditing of Social Media Algorithms for Public Interest [70.02478301291264]
Social media platforms curate access to information and opportunities, and so play a critical role in shaping public discourse.
Prior studies have used black-box methods to show that these algorithms can lead to biased or discriminatory outcomes.
We propose a new method for platform-supported auditing that can meet the goals of the proposed legislation.
arXiv Detail & Related papers (2022-07-18T17:32:35Z)
- Towards a multi-stakeholder value-based assessment framework for algorithmic systems [76.79703106646967]
We develop a value-based assessment framework that visualizes closeness and tensions between values.
We give guidelines on how to operationalize them, while opening up the evaluation and deliberation process to a wide range of stakeholders.
arXiv Detail & Related papers (2022-05-09T19:28:32Z)
- System Cards for AI-Based Decision-Making for Public Policy [5.076419064097733]
This work proposes a system accountability benchmark for formal audits of artificial intelligence-based decision-aiding systems.
It consists of 56 criteria organized within a four-by-four matrix composed of rows focused on (i) data, (ii) model, (iii) code, (iv) system, and columns focused on (a) development, (b) assessment, (c) mitigation, and (d) assurance.
arXiv Detail & Related papers (2022-03-01T18:56:45Z)
- Fairness Evaluation in Presence of Biased Noisy Labels [84.12514975093826]
We propose a sensitivity analysis framework for assessing how assumptions on the noise across groups affect the predictive bias properties of the risk assessment model.
Our experimental results on two real world criminal justice data sets demonstrate how even small biases in the observed labels may call into question the conclusions of an analysis based on the noisy outcome.
arXiv Detail & Related papers (2020-03-30T20:47:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.