A Robust Governance for the AI Act: AI Office, AI Board, Scientific Panel, and National Authorities
- URL: http://arxiv.org/abs/2407.10369v2
- Date: Sat, 26 Oct 2024 09:35:34 GMT
- Title: A Robust Governance for the AI Act: AI Office, AI Board, Scientific Panel, and National Authorities
- Authors: Claudio Novelli, Philipp Hacker, Jessica Morley, Jarle Trondal, Luciano Floridi
- Abstract summary: The article explains how the EU's new Artificial Intelligence Act (AIA) will be implemented and enforced by various institutional bodies.
It proposes a normative model of governance, providing recommendations to ensure uniform and coordinated execution of the AIA.
It investigates their roles across supranational and national levels, emphasizing how EU regulations influence institutional structures and operations.
- Score: 2.6517270606061203
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Regulation is nothing without enforcement. This particularly holds for the dynamic field of emerging technologies. Hence, this article has two ambitions. First, it explains how the EU's new Artificial Intelligence Act (AIA) will be implemented and enforced by various institutional bodies, thus clarifying the governance framework of the AIA. Second, it proposes a normative model of governance, providing recommendations to ensure uniform and coordinated execution of the AIA and the fulfilment of the legislation. Taken together, the article explores how the AIA may be implemented by national and EU institutional bodies, encompassing longstanding bodies, such as the European Commission, and those newly established under the AIA, such as the AI Office. It investigates their roles across supranational and national levels, emphasizing how EU regulations influence institutional structures and operations. These regulations may not only directly dictate the structural design of institutions but also indirectly request administrative capacities needed to enforce the AIA.
Related papers
- Using AI Alignment Theory to understand the potential pitfalls of regulatory frameworks [55.2480439325792]
This paper critically examines the European Union's Artificial Intelligence Act (EU AI Act).
It uses insights from Alignment Theory (AT) research, which focuses on the potential pitfalls of technical alignment in Artificial Intelligence.
As we apply these concepts to the EU AI Act, we uncover potential vulnerabilities and areas for improvement in the regulation.
arXiv Detail & Related papers (2024-10-10T17:38:38Z) - The potential functions of an international institution for AI safety. Insights from adjacent policy areas and recent trends [0.0]
The OECD, the G7, the G20, UNESCO, and the Council of Europe have already started developing frameworks for ethical and responsible AI governance.
This chapter reflects on what functions an international AI safety institute could perform.
arXiv Detail & Related papers (2024-08-31T10:04:53Z) - The risks of risk-based AI regulation: taking liability seriously [46.90451304069951]
The development and regulation of AI seem to have reached a critical stage.
Some experts are calling for a moratorium on the training of AI systems more powerful than GPT-4.
This paper analyses the most advanced legal proposal, the European Union's AI Act.
arXiv Detail & Related papers (2023-11-03T12:51:37Z) - International Institutions for Advanced AI [47.449762587672986]
International institutions may have an important role to play in ensuring advanced AI systems benefit humanity.
This paper identifies a set of governance functions that could be performed at an international level to address these challenges.
It groups these functions into four institutional models that exhibit internal synergies and have precedents in existing organizations.
arXiv Detail & Related papers (2023-07-10T16:55:55Z) - Fairness in Agreement With European Values: An Interdisciplinary Perspective on AI Regulation [61.77881142275982]
This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI, and discusses how AI regulations address them.
We first look at AI and fairness through the lenses of law, (AI) industry, sociotechnology, and (moral) philosophy, and present various perspectives.
We identify and propose the roles AI Regulation should take to make the endeavor of the AI Act a success in terms of AI fairness concerns.
arXiv Detail & Related papers (2022-06-08T12:32:08Z) - Conformity Assessments and Post-market Monitoring: A Guide to the Role of Auditing in the Proposed European AI Regulation [0.0]
We describe and discuss the two primary enforcement mechanisms proposed in the European Artificial Intelligence Act.
We argue that the AIA can be interpreted as a proposal to establish a Europe-wide ecosystem for conducting AI auditing.
arXiv Detail & Related papers (2021-11-09T11:59:47Z) - Foundations for the Future: Institution building for the purpose of Artificial Intelligence governance [0.0]
Governance efforts for artificial intelligence (AI) are taking on increasingly concrete forms.
New institutions will need to be established on a national and international level.
This paper sketches a blueprint of such institutions, and conducts in-depth investigations of three key components of any future AI governance institutions.
arXiv Detail & Related papers (2021-10-01T10:45:04Z) - Demystifying the Draft EU Artificial Intelligence Act [5.787117733071415]
In April 2021, the European Commission proposed a Regulation on Artificial Intelligence, known as the AI Act.
We present an overview of the Act and analyse its implications, drawing on scholarship ranging from the study of contemporary AI practices to the structure of EU product safety regimes over the last four decades.
We find that some provisions of the draft AI Act have surprising legal implications, whilst others may be largely ineffective at achieving their stated goals.
arXiv Detail & Related papers (2021-07-08T10:04:07Z) - A Pragmatic Approach to Regulating Artificial Intelligence: A Technology Regulator's Perspective [1.614803913005309]
We present a pragmatic approach for providing a technology assurance regulatory framework.
It is proposed that such regulation should not be mandated for all AI-based systems.
arXiv Detail & Related papers (2021-04-15T16:49:29Z) - Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims [59.64274607533249]
AI developers need to make verifiable claims to which they can be held accountable.
This report suggests various steps that different stakeholders can take to improve the verifiability of claims made about AI systems.
We analyze ten mechanisms for this purpose--spanning institutions, software, and hardware--and make recommendations aimed at implementing, exploring, or improving those mechanisms.
arXiv Detail & Related papers (2020-04-15T17:15:35Z) - Hacia los Comités de Ética en Inteligencia Artificial (Towards Ethics Committees in Artificial Intelligence) [68.8204255655161]
It is a priority to create rules and specialized organizations that can oversee compliance with such rules.
This work proposes the creation, at universities, of ethics committees or commissions specialized in Artificial Intelligence.
arXiv Detail & Related papers (2020-02-11T23:48:31Z)