The EU AI Act, Stakeholder Needs, and Explainable AI: Aligning Regulatory Compliance in a Clinical Decision Support System
- URL: http://arxiv.org/abs/2505.20311v1
- Date: Thu, 22 May 2025 07:39:04 GMT
- Title: The EU AI Act, Stakeholder Needs, and Explainable AI: Aligning Regulatory Compliance in a Clinical Decision Support System
- Authors: Anton Hummel, Håkan Burden, Susanne Stenberg, Jan-Philipp Steghöfer, Niklas Kühl
- Abstract summary: XAI aims to enhance transparency and human oversight of AI systems. The AI Act focuses on the obligations of the provider and deployer of the AI system. We show that XAI techniques can fill a gap between stakeholder needs and the requirements of the AI Act.
- Score: 5.6739502570965765
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Explainable AI (XAI) is a promising solution to ensure compliance with the EU AI Act, the first multi-national regulation for AI. XAI aims to enhance transparency and human oversight of AI systems, particularly "black-box models", which are criticized as incomprehensible. However, the discourse around the main stakeholders in the AI Act and XAI appears disconnected. While XAI prioritizes the end user's needs as the primary goal, the AI Act focuses on the obligations of the provider and deployer of the AI system. We aim to bridge this divide and provide guidance on how these two worlds are related. By fostering an interdisciplinary discussion in a cross-functional team with XAI, AI Act, legal, and requirements engineering experts, we walk through the steps necessary to analyze an AI-based clinical decision support system to clarify the end-user needs and assess AI Act applicability. By analyzing our justified understanding using an AI system under development as a case, we show that XAI techniques can fill a gap between stakeholder needs and the requirements of the AI Act. We look at the similarities and contrasts between the legal requirements and the needs of stakeholders. In doing so, we encourage researchers and practitioners from the XAI community to reflect on their role towards the AI Act by achieving a mutual understanding of the implications of XAI and the AI Act within different disciplines.
Related papers
- The AI Pentad, the CHARME$^{2}$D Model, and an Assessment of Current-State AI Regulation [5.231576332164012]
This paper aims to establish a unifying model for AI regulation from the perspective of core AI components. We first introduce the AI Pentad, which comprises the five essential components of AI. We then review AI regulatory enablers, including AI registration and disclosure, AI monitoring, and AI enforcement mechanisms.
arXiv Detail & Related papers (2025-03-08T22:58:41Z)
- Unlocking the Black Box: Analysing the EU Artificial Intelligence Act's Framework for Explainability in AI [0.0]
The need for eXplainable AI (XAI) is evident in fields such as healthcare, credit scoring, policing and the criminal justice system. At the EU level, the notion of explainability is one of the fundamental principles that underpin the AI Act. This paper explores various approaches and techniques that promise to advance XAI, as well as the challenges of implementing the principle of explainability in AI governance and policies.
arXiv Detail & Related papers (2025-01-24T16:30:19Z)
- Aligning Generalisation Between Humans and Machines [74.120848518198]
AI technology can support humans in scientific discovery and forming decisions, but may also disrupt democracies and target individuals. The responsible use of AI and its participation in human-AI teams increasingly shows the need for AI alignment. A crucial yet often overlooked aspect of these interactions is the different ways in which humans and machines generalise.
arXiv Detail & Related papers (2024-11-23T18:36:07Z)
- Combining AI Control Systems and Human Decision Support via Robustness and Criticality [53.10194953873209]
We extend a methodology for adversarial explanations (AE) to state-of-the-art reinforcement learning frameworks.
We show that the learned AI control system demonstrates robustness against adversarial tampering.
In a training / learning framework, this technology can improve both the AI's decisions and explanations through human interaction.
arXiv Detail & Related papers (2024-07-03T15:38:57Z)
- Improving Health Professionals' Onboarding with AI and XAI for Trustworthy Human-AI Collaborative Decision Making [3.2381492754749632]
We present the findings of semi-structured interviews with health professionals and students majoring in medicine and health.
For the interviews, we built upon human-AI interaction guidelines to create materials of an AI system for stroke rehabilitation assessment.
Our findings reveal that beyond presenting traditional performance metrics on AI, participants desired benchmark information.
arXiv Detail & Related papers (2024-05-26T04:30:17Z)
- Responsible Artificial Intelligence: A Structured Literature Review [0.0]
The EU has recently issued several publications emphasizing the necessity of trust in AI.
This highlights the urgent need for international regulation.
This paper introduces a comprehensive and, to our knowledge, the first unified definition of responsible AI.
arXiv Detail & Related papers (2024-03-11T17:01:13Z)
- The European Commitment to Human-Centered Technology: The Integral Role of HCI in the EU AI Act's Success [4.202570851109354]
The EU has enacted the AI Act, regulating market access for AI-based systems.
The Act focuses regulation on transparency, explainability, and the human ability to understand and control AI systems.
The EU issues a democratic call for human-centered AI systems and, in turn, an interdisciplinary research agenda for human-centered innovation in AI development.
arXiv Detail & Related papers (2024-02-22T17:35:29Z)
- Testing autonomous vehicles and AI: perspectives and challenges from cybersecurity, transparency, robustness and fairness [53.91018508439669]
The study explores the complexities of integrating Artificial Intelligence into Autonomous Vehicles (AVs).
It examines the challenges introduced by AI components and the impact on testing procedures.
The paper identifies significant challenges and suggests future directions for research and development of AI in AV technology.
arXiv Detail & Related papers (2024-02-21T08:29:42Z)
- Bridging the Transparency Gap: What Can Explainable AI Learn From the AI Act? [0.8287206589886881]
The European Union has introduced detailed transparency requirements for AI systems.
There is a fundamental difference between XAI and the Act regarding what transparency is.
By comparing the disparate views of XAI and regulation, we arrive at four axes where practical work could bridge the transparency gap.
arXiv Detail & Related papers (2023-02-21T16:06:48Z)
- Fairness in Agreement With European Values: An Interdisciplinary Perspective on AI Regulation [61.77881142275982]
This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI, and discusses how AI regulations address them.
We first look at AI and fairness through the lenses of law, (AI) industry, sociotechnology, and (moral) philosophy, and present various perspectives.
We identify and propose the roles AI Regulation should take to make the endeavor of the AI Act a success in terms of AI fairness concerns.
arXiv Detail & Related papers (2022-06-08T12:32:08Z)
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
- Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can play this role by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed papers (including all information) and is not responsible for any consequences of their use.