The risks of risk-based AI regulation: taking liability seriously
- URL: http://arxiv.org/abs/2311.14684v1
- Date: Fri, 3 Nov 2023 12:51:37 GMT
- Title: The risks of risk-based AI regulation: taking liability seriously
- Authors: Martin Kretschmer, Tobias Kretschmer, Alexander Peukert, Christian
Peukert
- Abstract summary: The development and regulation of AI seems to have reached a critical stage.
Some experts are calling for a moratorium on the training of AI systems more powerful than GPT-4.
This paper analyses the most advanced legal proposal, the European Union's AI Act.
- Score: 46.90451304069951
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The development and regulation of multi-purpose, large "foundation models" of
AI seems to have reached a critical stage, with major investments and new
applications announced every other day. Some experts are calling for a
moratorium on the training of AI systems more powerful than GPT-4. Legislators
globally compete to set the blueprint for a new regulatory regime. This paper
analyses the most advanced legal proposal, the European Union's AI Act,
currently in the final stage of "trilogue" negotiations between the EU
institutions. This legislation will likely have extra-territorial implications,
sometimes called "the Brussels effect". It also constitutes a radical departure
from conventional information and communications technology policy by
regulating AI ex-ante through a risk-based approach that seeks to prevent
certain harmful outcomes based on product safety principles. We offer a review
and critique, specifically discussing the AI Act's problematic obligations
regarding data quality and human oversight. Our proposal is to take liability
seriously as the key regulatory mechanism. This signals to industry that, if a
breach of law occurs, firms must know in particular what their inputs
were and how to retrain the system to remedy the breach. Moreover, we
suggest differentiating between endogenous and exogenous sources of potential
harm, which can be mitigated by carefully allocating liability between
developers and deployers of AI technology.
Related papers
- Using AI Alignment Theory to understand the potential pitfalls of regulatory frameworks [55.2480439325792]
This paper critically examines the European Union's Artificial Intelligence Act (EU AI Act).
It uses insights from Alignment Theory (AT) research, which focuses on the potential pitfalls of technical alignment in Artificial Intelligence.
As we apply these concepts to the EU AI Act, we uncover potential vulnerabilities and areas for improvement in the regulation.
arXiv Detail & Related papers (2024-10-10T17:38:38Z)
- How Could Generative AI Support Compliance with the EU AI Act? A Review for Safe Automated Driving Perception [4.075971633195745]
Deep Neural Networks (DNNs) have become central to the perception functions of autonomous vehicles.
The European Union (EU) Artificial Intelligence (AI) Act aims to address the challenges these systems raise by establishing stringent norms and standards for AI systems.
This review paper summarizes the requirements arising from the EU AI Act regarding DNN-based perception systems and systematically categorizes existing generative AI applications in automated driving (AD).
arXiv Detail & Related papers (2024-08-30T12:01:06Z)
- An FDA for AI? Pitfalls and Plausibility of Approval Regulation for Frontier Artificial Intelligence [0.0]
We explore the applicability of approval regulation -- that is, regulation of a product that combines experimental minima with government licensure conditioned partially or fully upon that experimentation -- to the regulation of frontier AI.
There are a number of reasons to believe that approval regulation, simplistically applied, would be inapposite for frontier AI risks.
We conclude by highlighting the role of policy learning and experimentation in regulatory development.
arXiv Detail & Related papers (2024-08-01T17:54:57Z)
- Near to Mid-term Risks and Opportunities of Open-Source Generative AI [94.06233419171016]
Applications of Generative AI are expected to revolutionize a number of different areas, ranging from science & medicine to education.
The potential for these seismic changes has triggered a lively debate about potential risks and resulted in calls for tighter regulation.
Such regulation is likely to put the budding field of open-source Generative AI at risk.
arXiv Detail & Related papers (2024-04-25T21:14:24Z)
- Regulating Chatbot Output via Inter-Informational Competition [8.168523242105763]
This Article develops a yardstick for reevaluating both AI-related content risks and corresponding regulatory proposals.
It argues that sufficient competition among information outlets in the information marketplace can mitigate and even resolve most content risks posed by generative AI technologies.
arXiv Detail & Related papers (2024-03-17T00:11:15Z)
- A Safe Harbor for AI Evaluation and Red Teaming [124.89885800509505]
Some researchers fear that conducting independent evaluation and red-teaming research, or releasing their findings, will result in account suspensions or legal reprisal.
We propose that major AI developers commit to providing a legal and technical safe harbor.
We believe these commitments are a necessary step towards more inclusive and unimpeded community efforts to tackle the risks of generative AI.
arXiv Detail & Related papers (2024-03-07T20:55:08Z)
- Managing extreme AI risks amid rapid progress [171.05448842016125]
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z)
- Regulation and NLP (RegNLP): Taming Large Language Models [51.41095330188972]
We argue that NLP research can benefit from proximity to regulatory studies and adjacent fields.
We advocate for the development of a new multidisciplinary research space on regulation and NLP.
arXiv Detail & Related papers (2023-10-09T09:22:40Z)
- AI Regulation in Europe: From the AI Act to Future Regulatory Challenges [3.0821115746307663]
The paper examines the AI Act as a pioneering legislative effort to address the multifaceted challenges posed by AI.
It argues for a hybrid regulatory strategy that combines elements from both philosophies.
It advocates for immediate action to create protocols for regulated access to high-performance, potentially open-source AI systems.
arXiv Detail & Related papers (2023-10-06T07:52:56Z)
- Quantitative study about the estimated impact of the AI Act [0.0]
We suggest a systematic approach, which we applied to the initial draft of the AI Act released in April 2021.
We went through several iterations of compiling a list of AI products and projects in and from Germany catalogued by the Lernende Systeme platform.
It turns out that only about 30% of the AI systems considered would be regulated by the AI Act; the rest would be classified as low-risk.
arXiv Detail & Related papers (2023-03-29T06:23:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.