Unlocking the Black Box: Analysing the EU Artificial Intelligence Act's Framework for Explainability in AI
- URL: http://arxiv.org/abs/2502.14868v1
- Date: Fri, 24 Jan 2025 16:30:19 GMT
- Title: Unlocking the Black Box: Analysing the EU Artificial Intelligence Act's Framework for Explainability in AI
- Authors: Georgios Pavlidis
- Abstract summary: The need for eXplainable AI (XAI) is evident in fields such as healthcare, credit scoring, policing and the criminal justice system. At the EU level, the notion of explainability is one of the fundamental principles that underpin the AI Act. This paper explores various approaches and techniques that promise to advance XAI, as well as the challenges of implementing the principle of explainability in AI governance and policies.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The lack of explainability of Artificial Intelligence (AI) is one of the first obstacles that the industry and regulators must overcome to mitigate the risks associated with the technology. The need for eXplainable AI (XAI) is evident in fields where accountability, ethics and fairness are critical, such as healthcare, credit scoring, policing and the criminal justice system. At the EU level, the notion of explainability is one of the fundamental principles that underpin the AI Act, though the exact XAI techniques and requirements are still to be determined and tested in practice. This paper explores various approaches and techniques that promise to advance XAI, as well as the challenges of implementing the principle of explainability in AI governance and policies. Finally, the paper examines the integration of XAI into EU law, emphasising the issues of standard setting, oversight, and enforcement.
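The paper surveys XAI approaches at the level of governance and policy rather than prescribing a specific technique. Purely as an illustrative sketch (not taken from the paper), the snippet below shows one widely used model-agnostic XAI technique, permutation feature importance, applied to a hypothetical credit-scoring classifier; the dataset and feature names are synthetic assumptions.

```python
# Illustrative sketch only: permutation feature importance as a simple,
# model-agnostic XAI technique. The credit-scoring data and feature names
# are hypothetical stand-ins, not material from the paper.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a credit-scoring dataset.
X, y = make_classification(n_samples=1000, n_features=4, random_state=0)
feature_names = ["income", "debt_ratio", "age", "prior_defaults"]  # hypothetical

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature on held-out data and measure the drop in accuracy:
# a global, post-hoc indication of which inputs drive the model's decisions.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, importance in sorted(zip(feature_names, result.importances_mean),
                               key=lambda pair: -pair[1]):
    print(f"{name}: {importance:.3f}")
```

Whether post-hoc scores of this kind would satisfy the AI Act's explainability principle is precisely the open question the paper addresses: the exact XAI techniques and requirements are still to be determined and tested in practice.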
Related papers
- Decoding the Black Box: Integrating Moral Imagination with Technical AI Governance
We develop a comprehensive framework designed to regulate AI technologies deployed in high-stakes domains such as defense, finance, healthcare, and education.
Our approach combines rigorous technical analysis, quantitative risk assessment, and normative evaluation to expose systemic vulnerabilities.
arXiv Detail & Related papers (2025-03-09T03:11:32Z)
- Compliance of AI Systems
This paper systematically examines the compliance of AI systems with relevant legislation, focusing on the EU's AI Act.
The analysis highlights many challenges associated with edge devices, which are increasingly used to deploy AI applications closer to the data sources.
The importance of data set compliance is highlighted as a cornerstone for ensuring the trustworthiness, transparency, and explainability of AI systems.
arXiv Detail & Related papers (2025-03-07T16:53:36Z)
- The Fundamental Rights Impact Assessment (FRIA) in the AI Act: Roots, legal obligations and key elements for a model template
This article aims to fill existing gaps in the theoretical and methodological elaboration of the Fundamental Rights Impact Assessment (FRIA). It outlines the main building blocks of a model template for the FRIA, which can serve as a blueprint for other national and international regulatory initiatives to ensure that AI is fully consistent with human rights.
arXiv Detail & Related papers (2024-11-07T11:55:55Z)
- Using AI Alignment Theory to understand the potential pitfalls of regulatory frameworks
This paper critically examines the European Union's Artificial Intelligence Act (EU AI Act).
It uses insights from Alignment Theory (AT) research, which focuses on the potential pitfalls of technical alignment in Artificial Intelligence.
As we apply these concepts to the EU AI Act, we uncover potential vulnerabilities and areas for improvement in the regulation.
arXiv Detail & Related papers (2024-10-10T17:38:38Z)
- The Artificial Intelligence Act: critical overview
This article provides a critical overview of the recently approved Artificial Intelligence Act.
It starts by presenting the main structure, objectives, and approach of Regulation (EU) 2024/1689.
The text concludes that even if the overall framework can be deemed adequate and balanced, the approach is so complex that it risks defeating its own purpose.
arXiv Detail & Related papers (2024-08-30T21:38:02Z)
- Open Problems in Technical AI Governance
Technical AI governance refers to technical analysis and tools for supporting the effective governance of AI.
This paper is intended as a resource for technical researchers or research funders looking to contribute to AI governance.
arXiv Detail & Related papers (2024-07-20T21:13:56Z)
- False Sense of Security in Explainable Artificial Intelligence (XAI)
We argue that AI regulations and current market conditions threaten effective AI governance and safety.
Unless governments explicitly tackle the issue of explainability through clear legislative and policy statements, AI governance risks becoming a vacuous "box-ticking" exercise.
arXiv Detail & Related papers (2024-05-06T20:02:07Z)
- How should AI decisions be explained? Requirements for Explanations from the Perspective of European Law
This paper focuses on European (and in part German) law, while drawing on international concepts and regulations.
Based on XAI taxonomies, requirements for XAI methods are derived from each of the legal bases.
arXiv Detail & Related papers (2024-04-19T10:08:28Z)
- Towards Responsible AI in Banking: Addressing Bias for Fair Decision-Making
"Responsible AI" emphasizes the critical nature of addressing biases within the development of a corporate culture.
This thesis is structured around three fundamental pillars: understanding bias, mitigating bias, and accounting for bias.
In line with open-source principles, we have released Bias On Demand and FairView as accessible Python packages.
arXiv Detail & Related papers (2024-01-13T14:07:09Z)
- Bridging the Transparency Gap: What Can Explainable AI Learn From the AI Act?
The European Union has introduced detailed transparency requirements for AI systems.
There is a fundamental difference between XAI and the Act regarding what transparency is.
By comparing the disparate views of XAI and regulation, we arrive at four axes where practical work could bridge the transparency gap.
arXiv Detail & Related papers (2023-02-21T16:06:48Z)
- Fairness in Agreement With European Values: An Interdisciplinary Perspective on AI Regulation
This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI, and discusses how AI regulations address them.
We first look at AI and fairness through the lenses of law, (AI) industry, sociotechnology, and (moral) philosophy, and present various perspectives.
We identify and propose the roles AI regulation should take to make the AI Act a success with respect to AI fairness concerns.
arXiv Detail & Related papers (2022-06-08T12:32:08Z)
- An interdisciplinary conceptual study of Artificial Intelligence (AI) for helping benefit-risk assessment practices: Towards a comprehensive qualification matrix of AI programs and devices (pre-print 2020)
This paper proposes a comprehensive analysis of existing concepts coming from different disciplines tackling the notion of intelligence.
The aim is to identify shared notions or discrepancies to consider for qualifying AI systems.
arXiv Detail & Related papers (2021-05-07T12:01:31Z)