The Brussels Effect and Artificial Intelligence: How EU regulation will impact the global AI market
- URL: http://arxiv.org/abs/2208.12645v1
- Date: Tue, 23 Aug 2022 20:23:22 GMT
- Title: The Brussels Effect and Artificial Intelligence: How EU regulation will impact the global AI market
- Authors: Charlotte Siegmann and Markus Anderljung
- Abstract summary: We ask whether the EU's upcoming regulation for AI will diffuse globally, producing a so-called "Brussels Effect".
We consider both the possibility that the EU's AI regulation will incentivise changes in products offered in non-EU countries and the possibility that it will influence regulation adopted by other jurisdictions.
A de facto effect is particularly likely to arise in large US tech companies with AI systems that the AI Act terms "high-risk".
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The European Union is likely to introduce among the first, most stringent,
and most comprehensive AI regulatory regimes of the world's major
jurisdictions. In this report, we ask whether the EU's upcoming regulation for
AI will diffuse globally, producing a so-called "Brussels Effect". Building on
and extending Anu Bradford's work, we outline the mechanisms by which such
regulatory diffusion may occur. We consider both the possibility that the EU's
AI regulation will incentivise changes in products offered in non-EU countries
(a de facto Brussels Effect) and the possibility it will influence regulation
adopted by other jurisdictions (a de jure Brussels Effect). Focusing on the
proposed EU AI Act, we tentatively conclude that both de facto and de jure
Brussels effects are likely for parts of the EU regulatory regime. A de facto
effect is particularly likely to arise in large US tech companies with AI
systems that the AI Act terms "high-risk". We argue that the upcoming
regulation might be particularly important in offering the first and most
influential operationalisation of what it means to develop and deploy
trustworthy or human-centred AI. If the EU regime is likely to see significant
diffusion, ensuring it is well-designed becomes a matter of global importance.
Related papers
- Using AI Alignment Theory to understand the potential pitfalls of regulatory frameworks [55.2480439325792]
This paper critically examines the European Union's Artificial Intelligence Act (EU AI Act).
It draws on insights from Alignment Theory (AT) research, which focuses on the potential pitfalls of technical alignment in Artificial Intelligence.
As we apply these concepts to the EU AI Act, we uncover potential vulnerabilities and areas for improvement in the regulation.
arXiv Detail & Related papers (2024-10-10T17:38:38Z)
- First Analysis of the EU Artificial Intelligence Act: Towards a Global Standard for Trustworthy AI? [0.0]
The EU Artificial Intelligence Act (AI Act) came into force in the European Union (EU) on 1 August 2024.
It is a key piece of legislation both for the citizens at the heart of AI technologies and for the industry active in the internal market.
While the Act is unprecedented on an international scale in terms of its horizontal and binding regulatory scope, its global appeal in support of trustworthy AI is one of its major challenges.
arXiv Detail & Related papers (2024-07-31T12:16:03Z)
- Federated Learning Priorities Under the European Union Artificial Intelligence Act [68.44894319552114]
We perform a first-of-its-kind interdisciplinary analysis (legal and ML) of the impact the AI Act may have on Federated Learning.
We explore data governance issues and privacy concerns.
Most noteworthy are the opportunities to defend against data bias and enhance private and secure computation.
arXiv Detail & Related papers (2024-02-05T19:52:19Z)
- Towards Responsible AI in Banking: Addressing Bias for Fair Decision-Making [69.44075077934914]
"Responsible AI" emphasizes the critical nature of addressing biases within the development of a corporate culture.
This thesis is structured around three fundamental pillars: understanding bias, mitigating bias, and accounting for bias.
In line with open-source principles, we have released Bias On Demand and FairView as accessible Python packages.
arXiv Detail & Related papers (2024-01-13T14:07:09Z)
- The risks of risk-based AI regulation: taking liability seriously [46.90451304069951]
The development and regulation of AI seems to have reached a critical stage.
Some experts are calling for a moratorium on the training of AI systems more powerful than GPT-4.
This paper analyses the most advanced legal proposal, the European Union's AI Act.
arXiv Detail & Related papers (2023-11-03T12:51:37Z)
- Managing extreme AI risks amid rapid progress [171.05448842016125]
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z)
- AI Regulation in Europe: From the AI Act to Future Regulatory Challenges [3.0821115746307663]
The paper examines the AI Act as a pioneering legislative effort to address the multifaceted challenges posed by AI.
It argues for a hybrid regulatory strategy that combines elements from both philosophies.
It advocates for immediate action to create protocols for regulated access to high-performance, potentially open-source AI systems.
arXiv Detail & Related papers (2023-10-06T07:52:56Z)
- Quantitative study about the estimated impact of the AI Act [0.0]
We suggest a systematic approach, which we applied to the initial draft of the AI Act released in April 2021.
Over several iterations, we compiled a list of AI products and projects in and from Germany that are listed on the Lernende Systeme platform.
It turns out that only about 30% of the AI systems considered would be regulated by the AI Act; the rest would be classified as low-risk.
arXiv Detail & Related papers (2023-03-29T06:23:16Z)
- The European AI Liability Directives -- Critique of a Half-Hearted Approach and Lessons for the Future [0.0]
The European Commission advanced two proposals outlining the European approach to AI liability in September 2022.
The latter does not contain any individual rights of affected persons, and the former lacks specific, substantive rules on AI development and deployment.
Taken together, these acts may well trigger a Brussels Effect in AI regulation, with significant consequences for the US and beyond.
I propose to jump-start sustainable AI regulation via sustainability impact assessments in the AI Act and sustainable design defects in the liability regime.
arXiv Detail & Related papers (2022-11-25T09:08:11Z)
- Fairness in Agreement With European Values: An Interdisciplinary Perspective on AI Regulation [61.77881142275982]
This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI, and discusses how AI regulations address them.
We first look at AI and fairness through the lenses of law, (AI) industry, sociotechnology, and (moral) philosophy, and present various perspectives.
We identify and propose the roles AI Regulation should take to make the endeavor of the AI Act a success in terms of AI fairness concerns.
arXiv Detail & Related papers (2022-06-08T12:32:08Z)