AI and the EU Digital Markets Act: Addressing the Risks of Bigness in
Generative AI
- URL: http://arxiv.org/abs/2308.02033v1
- Date: Fri, 7 Jul 2023 16:50:08 GMT
- Title: AI and the EU Digital Markets Act: Addressing the Risks of Bigness in
Generative AI
- Authors: Ayse Gizem Yasar, Andrew Chong, Evan Dong, Thomas Krendl Gilbert,
Sarah Hladikova, Roland Maio, Carlos Mougan, Xudong Shen, Shubham Singh,
Ana-Andreea Stoica, Savannah Thais, Miri Zilka
- Abstract summary: This paper argues for integrating certain AI software as core platform services and classifying certain developers as gatekeepers under the DMA.
As the EU considers generative AI-specific rules and possible DMA amendments, this paper provides insights towards diversity and openness in generative AI services.
- Score: 4.889410481341167
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As AI technology advances rapidly, concerns over the risks of bigness in
digital markets are also growing. The EU's Digital Markets Act (DMA) aims to
address these risks. Still, the current framework may not adequately cover
generative AI systems that could become gateways for AI-based services. This
paper argues for integrating certain AI software as core platform services and
classifying certain developers as gatekeepers under the DMA. We also propose an
assessment of gatekeeper obligations to ensure they cover generative AI
services. As the EU considers generative AI-specific rules and possible DMA
amendments, this paper provides insights towards diversity and openness in
generative AI services.
Related papers
- Using AI Alignment Theory to understand the potential pitfalls of regulatory frameworks [55.2480439325792]
This paper critically examines the European Union's Artificial Intelligence Act (EU AI Act).
It uses insights from Alignment Theory (AT) research, which focuses on the potential pitfalls of technical alignment in Artificial Intelligence.
As we apply these concepts to the EU AI Act, we uncover potential vulnerabilities and areas for improvement in the regulation.
arXiv Detail & Related papers (2024-10-10T17:38:38Z)
- How Could Generative AI Support Compliance with the EU AI Act? A Review for Safe Automated Driving Perception [4.075971633195745]
Deep Neural Networks (DNNs) have become central to the perception functions of autonomous vehicles.
The European Union (EU) Artificial Intelligence (AI) Act aims to address these challenges by establishing stringent norms and standards for AI systems.
This review paper summarizes the requirements the EU AI Act places on DNN-based perception systems and systematically categorizes existing generative AI applications in automated driving (AD).
arXiv Detail & Related papers (2024-08-30T12:01:06Z)
- Risks and Opportunities of Open-Source Generative AI [64.86989162783648]
Applications of Generative AI (Gen AI) are expected to revolutionize a number of different areas, ranging from science & medicine to education.
The prospect of these seismic changes has triggered a lively debate about the risks of the technology and has resulted in calls for tighter regulation.
This regulation is likely to put at risk the budding field of open-source generative AI.
arXiv Detail & Related papers (2024-05-14T13:37:36Z)
- Near to Mid-term Risks and Opportunities of Open-Source Generative AI [94.06233419171016]
Applications of Generative AI are expected to revolutionize a number of different areas, ranging from science & medicine to education.
The prospect of these seismic changes has triggered a lively debate about potential risks and has resulted in calls for tighter regulation.
This regulation is likely to put at risk the budding field of open-source Generative AI.
arXiv Detail & Related papers (2024-04-25T21:14:24Z)
- Particip-AI: A Democratic Surveying Framework for Anticipating Future AI Use Cases, Harms and Benefits [54.648819983899614]
General-purpose AI seems to have lowered the barriers for the public to use AI and harness its power.
We introduce PARTICIP-AI, a framework for laypeople to speculate on and assess AI use cases and their impacts.
arXiv Detail & Related papers (2024-03-21T19:12:37Z)
- The risks of risk-based AI regulation: taking liability seriously [46.90451304069951]
The development and regulation of AI seems to have reached a critical stage.
Some experts are calling for a moratorium on the training of AI systems more powerful than GPT-4.
This paper analyses the most advanced legal proposal, the European Union's AI Act.
arXiv Detail & Related papers (2023-11-03T12:51:37Z)
- Managing extreme AI risks amid rapid progress [171.05448842016125]
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z)
- Oversight for Frontier AI through a Know-Your-Customer Scheme for Compute Providers [0.8547032097715571]
Know-Your-Customer (KYC) is a standard developed by the banking sector to identify and verify client identity.
KYC could provide a mechanism for greater public oversight of frontier AI development and close loopholes in existing export controls.
Unlike the strategy of limiting access to AI chip purchases, regulating digital access to compute offers more precise controls.
arXiv Detail & Related papers (2023-10-20T16:17:29Z)
- AI Regulation in Europe: From the AI Act to Future Regulatory Challenges [3.0821115746307663]
The paper examines the AI Act as a pioneering legislative effort to address the multifaceted challenges posed by AI.
It argues for a hybrid regulatory strategy that combines elements from both philosophies.
It advocates for immediate action to create protocols for regulated access to high-performance, potentially open-source AI systems.
arXiv Detail & Related papers (2023-10-06T07:52:56Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences arising from its use.