Conformity Assessments and Post-market Monitoring: A Guide to the Role
of Auditing in the Proposed European AI Regulation
- URL: http://arxiv.org/abs/2111.05071v1
- Date: Tue, 9 Nov 2021 11:59:47 GMT
- Authors: Jakob Mökander, Maria Axente, Federico Casolari, Luciano Floridi
- Abstract summary: We describe and discuss the two primary enforcement mechanisms proposed in the European Artificial Intelligence Act (AIA).
We argue that the AIA can be interpreted as a proposal to establish a Europe-wide ecosystem for conducting AI auditing.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The proposed European Artificial Intelligence Act (AIA) is the first attempt
to elaborate a general legal framework for AI carried out by any major global
economy. As such, the AIA is likely to become a point of reference in the
larger discourse on how AI systems can (and should) be regulated. In this
article, we describe and discuss the two primary enforcement mechanisms
proposed in the AIA: the conformity assessments that providers of high-risk AI
systems are expected to conduct, and the post-market monitoring plans that
providers must establish to document the performance of high-risk AI systems
throughout their lifetimes. We argue that the AIA can be interpreted as a proposal
to establish a Europe-wide ecosystem for conducting AI auditing, albeit one
described in other terms. Our analysis offers two main contributions. First, by describing
the enforcement mechanisms included in the AIA in terminology borrowed from
existing literature on AI auditing, we help providers of AI systems understand
how they can prove adherence to the requirements set out in the AIA in
practice. Second, by examining the AIA from an auditing perspective, we seek to
provide transferable lessons from previous research about how to refine further
the regulatory approach outlined in the AIA. We conclude by highlighting seven
aspects of the AIA where amendments (or simply clarifications) would be
helpful. These include, above all, the need to translate vague concepts into
verifiable criteria and to strengthen the institutional safeguards concerning
conformity assessments based on internal checks.
Related papers
- Using AI Alignment Theory to understand the potential pitfalls of regulatory frameworks
This paper critically examines the European Union's Artificial Intelligence Act (EU AI Act).
It uses insights from Alignment Theory (AT) research, which focuses on the potential pitfalls of technical alignment in Artificial Intelligence.
As we apply these concepts to the EU AI Act, we uncover potential vulnerabilities and areas for improvement in the regulation.
arXiv Detail & Related papers (2024-10-10T17:38:38Z)
- How Could Generative AI Support Compliance with the EU AI Act? A Review for Safe Automated Driving Perception
Deep Neural Networks (DNNs) have become central for the perception functions of autonomous vehicles.
The European Union (EU) Artificial Intelligence (AI) Act aims to address these challenges by establishing stringent norms and standards for AI systems.
This review paper summarizes the requirements arising from the EU AI Act regarding DNN-based perception systems and systematically categorizes existing generative AI applications in AD.
arXiv Detail & Related papers (2024-08-30T12:01:06Z)
- A Robust Governance for the AI Act: AI Office, AI Board, Scientific Panel, and National Authorities
This article explains how the EU's new Artificial Intelligence Act (AIA) will be implemented and enforced by various institutional bodies.
It proposes a normative model of governance, providing recommendations to ensure uniform and coordinated execution of the AIA.
It investigates their roles across supranational and national levels, emphasizing how EU regulations influence institutional structures and operations.
arXiv Detail & Related papers (2024-05-11T09:22:16Z)
- Responsible Artificial Intelligence: A Structured Literature Review
The EU has recently issued several publications emphasizing the necessity of trust in AI.
This highlights the urgent need for international regulation.
This paper introduces a comprehensive and, to our knowledge, the first unified definition of responsible AI.
arXiv Detail & Related papers (2024-03-11T17:01:13Z)
- Addressing the Regulatory Gap: Moving Towards an EU AI Audit Ecosystem Beyond the AIA by Including Civil Society
The European legislature has proposed the Digital Services Act (DSA) and Artificial Intelligence Act (AIA) to regulate platforms and AI products.
We review to what extent third-party audits are part of both laws and to what extent access to models and data is provided.
We identify a regulatory gap in that the Artificial Intelligence Act does not provide access to data for researchers and civil society.
arXiv Detail & Related papers (2024-02-26T11:32:42Z)
- Testing autonomous vehicles and AI: perspectives and challenges from cybersecurity, transparency, robustness and fairness
The study explores the complexities of integrating Artificial Intelligence into Autonomous Vehicles (AVs).
It examines the challenges introduced by AI components and the impact on testing procedures.
The paper identifies significant challenges and suggests future directions for research and development of AI in AV technology.
arXiv Detail & Related papers (2024-02-21T08:29:42Z)
- Advancing AI Audits for Enhanced AI Governance
This policy recommendation summarizes the issues related to the auditing of AI services and systems.
It presents three recommendations for promoting AI auditing that contribute to sound AI governance.
arXiv Detail & Related papers (2023-11-26T16:18:17Z)
- The risks of risk-based AI regulation: taking liability seriously
The development and regulation of AI seems to have reached a critical stage.
Some experts are calling for a moratorium on the training of AI systems more powerful than GPT-4.
This paper analyses the most advanced legal proposal, the European Union's AI Act.
arXiv Detail & Related papers (2023-11-03T12:51:37Z)
- Fairness in Agreement With European Values: An Interdisciplinary Perspective on AI Regulation
This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI, and discusses how AI regulations address them.
We first look at AI and fairness through the lenses of law, (AI) industry, sociotechnology, and (moral) philosophy, and present various perspectives.
We identify and propose the roles AI Regulation should take to make the endeavor of the AI Act a success in terms of AI fairness concerns.
arXiv Detail & Related papers (2022-06-08T12:32:08Z)
- An interdisciplinary conceptual study of Artificial Intelligence (AI) for helping benefit-risk assessment practices: Towards a comprehensive qualification matrix of AI programs and devices (pre-print 2020)
This paper proposes a comprehensive analysis of existing concepts coming from different disciplines tackling the notion of intelligence.
The aim is to identify shared notions or discrepancies to consider for qualifying AI systems.
arXiv Detail & Related papers (2021-05-07T12:01:31Z)
- Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims
AI developers need to make verifiable claims to which they can be held accountable.
This report suggests various steps that different stakeholders can take to improve the verifiability of claims made about AI systems.
We analyze ten mechanisms for this purpose--spanning institutions, software, and hardware--and make recommendations aimed at implementing, exploring, or improving those mechanisms.
arXiv Detail & Related papers (2020-04-15T17:15:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.