Quantitative study about the estimated impact of the AI Act
- URL: http://arxiv.org/abs/2304.06503v1
- Date: Wed, 29 Mar 2023 06:23:16 GMT
- Title: Quantitative study about the estimated impact of the AI Act
- Authors: Marc P. Hauer, Tobias D. Krafft, Andreas Sesing-Wagenpfeil, and Katharina Zweig
- Abstract summary: We suggest a systematic approach, which we applied to the initial draft of the AI Act released in April 2021.
We went through several iterations of compiling the list of AI products and projects in and from Germany listed by the Lernende Systeme platform.
It turns out that only about 30% of the AI systems considered would be regulated by the AI Act; the rest would be classified as low-risk.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: With the Proposal for a Regulation laying down harmonised rules on Artificial
Intelligence (AI Act), the European Union provides the first regulatory document
that applies to the entire complex of AI systems. While some fear that the
regulation leaves too much room for interpretation and thus brings little
benefit to society, others expect that it is too restrictive and thus
blocks progress and innovation and hinders the economic success of
companies within the EU. Without a systematic approach, it is difficult to
assess how it will actually impact the AI landscape. In this paper, we suggest
a systematic approach, which we applied to the initial draft of the AI Act
released in April 2021. We went through several iterations of
compiling the list of AI products and projects in and from Germany, which the
Lernende Systeme platform lists, and then classified them according to the AI
Act together with experts from the fields of computer science and law. Our
study shows a need for more concrete formulation, since for some provisions it
is often unclear whether they are applicable in a specific case or not. Apart
from that, it turns out that only about 30% of the AI systems considered would
be regulated by the AI Act; the rest would be classified as low-risk. However,
as the database is not representative, the results provide only a first
assessment. The process presented can be applied to any collection and
repeated when regulations are about to change. This allows fears of over- or
under-regulation to be investigated before a regulation comes into effect.
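The tallying at the heart of the process can be sketched in a few lines. The tier names and sample entries below are hypothetical illustrations, not taken from the paper's database: each system receives a risk classification under the AI Act, and the share of systems that fall under any obligation is computed.

```python
from collections import Counter

# Hypothetical tier labels, loosely following the AI Act's risk-based structure.
REGULATED = {"prohibited", "high-risk", "transparency-obligation"}

def regulated_share(classifications):
    """Return the fraction of systems falling under AI Act obligations.

    `classifications` maps a system name to its assigned risk tier.
    """
    tiers = Counter(classifications.values())
    regulated = sum(n for tier, n in tiers.items() if tier in REGULATED)
    return regulated / len(classifications)

# Illustrative entries only -- not the study's actual data.
sample = {
    "credit-scoring model": "high-risk",
    "chatbot": "transparency-obligation",
    "spam filter": "low-risk",
    "recommender": "low-risk",
}
print(f"{regulated_share(sample):.0%} of systems regulated")  # 50%
```

Re-running such a tally after a regulation changes is what allows over- or under-regulation fears to be checked against a concrete collection.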
Related papers
- Using AI Alignment Theory to understand the potential pitfalls of regulatory frameworks (2024-10-10)
  This paper critically examines the European Union's Artificial Intelligence Act (EU AI Act). It draws on insights from Alignment Theory (AT) research, which focuses on the potential pitfalls of technical alignment in Artificial Intelligence. Applying these concepts to the EU AI Act uncovers potential vulnerabilities and areas for improvement in the regulation.
- The Artificial Intelligence Act: critical overview (2024-08-30)
  This article provides a critical overview of the recently approved Artificial Intelligence Act. It starts by presenting the main structure, objectives, and approach of Regulation (EU) 2024/1689. The text concludes that even if the overall framework can be deemed adequate and balanced, the approach is so complex that it risks defeating its own purpose.
- An FDA for AI? Pitfalls and Plausibility of Approval Regulation for Frontier Artificial Intelligence (2024-08-01)
  The authors explore the applicability of approval regulation -- that is, regulation of a product that combines experimental minima with government licensure conditioned partially or fully upon that experimentation -- to frontier AI. There are a number of reasons to believe that approval regulation, simplistically applied, would be inapposite for frontier AI risks. The paper concludes by highlighting the role of policy learning and experimentation in regulatory development.
- How VADER is your AI? Towards a definition of artificial intelligence systems appropriate for regulation (2024-02-07)
  Recent AI regulation proposals adopt AI definitions that also affect ICT techniques, approaches, and systems that are not AI. The authors propose a framework to score how validated as appropriately defined for regulation (VADER) an AI definition is.
- Federated Learning Priorities Under the European Union Artificial Intelligence Act (2024-02-05)
  A first-of-its-kind interdisciplinary analysis (legal and ML) of the impact the AI Act may have on Federated Learning. It explores data governance issues and privacy concerns. Most noteworthy are the opportunities to defend against data bias and to enhance private and secure computation.
- Report of the 1st Workshop on Generative AI and Law (2023-11-11)
  This report presents the takeaways of the inaugural Workshop on Generative AI and Law (GenLaw), at which a cross-disciplinary group of practitioners and scholars from computer science and law convened to discuss the technical, doctrinal, and policy challenges presented by law for Generative AI.
- The risks of risk-based AI regulation: taking liability seriously (2023-11-03)
  The development and regulation of AI seem to have reached a critical stage, with some experts calling for a moratorium on the training of AI systems more powerful than GPT-4. This paper analyses the most advanced legal proposal, the European Union's AI Act.
- Managing extreme AI risks amid rapid progress (2023-10-26)
  The authors describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems. There is a lack of consensus about how exactly such risks arise and how to manage them. Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
- Fairness in Agreement With European Values: An Interdisciplinary Perspective on AI Regulation (2022-06-08)
  This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI and discusses how AI regulations address them. It first looks at AI and fairness through the lenses of law, (AI) industry, sociotechnology, and (moral) philosophy, then identifies and proposes the roles AI regulation should take to make the AI Act a success in terms of AI fairness concerns.
- Demystifying the Draft EU Artificial Intelligence Act (2021-07-08)
  In April 2021, the European Commission proposed a Regulation on Artificial Intelligence, known as the AI Act. The authors present an overview of the Act and analyse its implications, drawing on scholarship ranging from the study of contemporary AI practices to the structure of EU product safety regimes over the last four decades. They find that some provisions of the draft AI Act have surprising legal implications, whilst others may be largely ineffective at achieving their stated goals.
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.