The ghost of AI governance past, present and future: AI governance in the European Union
- URL: http://arxiv.org/abs/2107.14099v1
- Date: Thu, 8 Jul 2021 08:43:16 GMT
- Title: The ghost of AI governance past, present and future: AI governance in the European Union
- Authors: Charlotte Stix
- Abstract summary: The EU ensures and encourages ethical, trustworthy and reliable technological development.
Section 1 serves to explore and evidence the EU's coherent and comprehensive approach to AI governance.
Section 2 maps the EU's drive towards digital sovereignty through the lens of regulation and infrastructure.
Section 3 concludes by offering several considerations to achieve good AI governance in the EU.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The received wisdom is that artificial intelligence (AI) is a competition
between the US and China. In this chapter, the author will examine how the
European Union (EU) fits into that mix and what it can offer as a third way to
govern AI. The chapter presents this by exploring the past, present and future
of AI governance in the EU. Section 1 serves to explore and evidence the EU's
coherent and comprehensive approach to AI governance. In short, the EU ensures
and encourages ethical, trustworthy and reliable technological development.
This will cover a range of key documents and policy tools that lead to the most
crucial effort of the EU to date: to regulate AI. Section 2 maps the EU's drive
towards digital sovereignty through the lens of regulation and infrastructure.
This covers topics such as the trustworthiness of AI systems, cloud, compute
and foreign direct investment. In Section 3, the chapter concludes by offering
several considerations to achieve good AI governance in the EU.
Related papers
- AI, Global Governance, and Digital Sovereignty [1.3976439685325095]
We argue that AI systems will embed in global governance to create dueling dynamics of public/private cooperation and contestation.
We conclude by sketching future directions for IR research on AI and global governance.
arXiv Detail & Related papers (2024-10-23T00:05:33Z)
- Using AI Alignment Theory to understand the potential pitfalls of regulatory frameworks [55.2480439325792]
This paper critically examines the European Union's Artificial Intelligence Act (EU AI Act).
It uses insights from Alignment Theory (AT) research, which focuses on the potential pitfalls of technical alignment in Artificial Intelligence.
As we apply these concepts to the EU AI Act, we uncover potential vulnerabilities and areas for improvement in the regulation.
arXiv Detail & Related papers (2024-10-10T17:38:38Z)
- Responsible Artificial Intelligence: A Structured Literature Review [0.0]
The EU has recently issued several publications emphasizing the necessity of trust in AI.
This highlights the urgent need for international regulation.
This paper introduces a comprehensive and, to our knowledge, the first unified definition of responsible AI.
arXiv Detail & Related papers (2024-03-11T17:01:13Z)
- The European Commitment to Human-Centered Technology: The Integral Role of HCI in the EU AI Act's Success [4.202570851109354]
The EU has enacted the AI Act, regulating market access for AI-based systems.
The Act focuses regulation on transparency, explainability, and the human ability to understand and control AI systems.
The EU issues a democratic call for human-centered AI systems and, in turn, for an interdisciplinary research agenda for human-centered innovation in AI development.
arXiv Detail & Related papers (2024-02-22T17:35:29Z)
- Managing extreme AI risks amid rapid progress [171.05448842016125]
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z)
- AI Regulation in Europe: From the AI Act to Future Regulatory Challenges [3.0821115746307663]
It argues for a hybrid regulatory strategy that combines elements from both philosophies.
The paper examines the AI Act as a pioneering legislative effort to address the multifaceted challenges posed by AI.
It advocates for immediate action to create protocols for regulated access to high-performance, potentially open-source AI systems.
arXiv Detail & Related papers (2023-10-06T07:52:56Z)
- The European AI Liability Directives -- Critique of a Half-Hearted Approach and Lessons for the Future [0.0]
The European Commission advanced two proposals outlining the European approach to AI liability in September 2022.
The latter does not contain any individual rights of affected persons, and the former lacks specific, substantive rules on AI development and deployment.
Taken together, these acts may well trigger a Brussels Effect in AI regulation, with significant consequences for the US and beyond.
I propose to jump-start sustainable AI regulation via sustainability impact assessments in the AI Act and sustainable design defects in the liability regime.
arXiv Detail & Related papers (2022-11-25T09:08:11Z)
- Fairness in Agreement With European Values: An Interdisciplinary Perspective on AI Regulation [61.77881142275982]
This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI, and discusses how AI regulations address them.
We first look at AI and fairness through the lenses of law, (AI) industry, sociotechnology, and (moral) philosophy, and present various perspectives.
We identify and propose the roles AI Regulation should take to make the endeavor of the AI Act a success in terms of AI fairness concerns.
arXiv Detail & Related papers (2022-06-08T12:32:08Z)
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
- Trustworthy AI: A Computational Perspective [54.80482955088197]
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z)
- Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can play this role by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences arising from its use.