Explainable Artificial Intelligence (XAI) 2.0: A Manifesto of Open
Challenges and Interdisciplinary Research Directions
- URL: http://arxiv.org/abs/2310.19775v1
- Date: Mon, 30 Oct 2023 17:44:55 GMT
- Authors: Luca Longo, Mario Brcic, Federico Cabitza, Jaesik Choi, Roberto
Confalonieri, Javier Del Ser, Riccardo Guidotti, Yoichi Hayashi, Francisco
Herrera, Andreas Holzinger, Richard Jiang, Hassan Khosravi, Freddy Lecue,
Gianclaudio Malgieri, Andrés Páez, Wojciech Samek, Johannes Schneider,
Timo Speith, Simone Stumpf
- Abstract summary: Explainable AI (XAI) has emerged as a field of research with practical and ethical benefits across various domains.
This paper highlights the advancements in XAI and its application in real-world scenarios.
We present a manifesto of 27 open problems categorized into nine categories.
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: As systems based on opaque Artificial Intelligence (AI) continue to flourish
in diverse real-world applications, understanding these black box models has
become paramount. In response, Explainable AI (XAI) has emerged as a field of
research with practical and ethical benefits across various domains. This paper
not only highlights the advancements in XAI and its application in real-world
scenarios but also addresses the ongoing challenges within XAI, emphasizing the
need for broader perspectives and collaborative efforts. We bring together
experts from diverse fields to identify open problems, striving to synchronize
research agendas and accelerate XAI in practical applications. By fostering
collaborative discussion and interdisciplinary cooperation, we aim to propel
XAI forward, contributing to its continued success. Our goal is to put forward
a comprehensive proposal for advancing XAI. To achieve this goal, we present a
manifesto of 27 open problems categorized into nine categories. These
challenges encapsulate the complexities and nuances of XAI and offer a road map
for future research. For each problem, we provide promising research directions
in the hope of harnessing the collective intelligence of interested
stakeholders.
Related papers
- Aligning Cyber Space with Physical World: A Comprehensive Survey on Embodied AI [129.08019405056262]
Embodied Artificial Intelligence (Embodied AI) is crucial for achieving Artificial General Intelligence (AGI).
MLMs and WMs have attracted significant attention due to their remarkable perception, interaction, and reasoning capabilities.
In this survey, we give a comprehensive exploration of the latest advancements in Embodied AI.
arXiv Detail & Related papers (2024-07-09T14:14:47Z) - OlympicArena: Benchmarking Multi-discipline Cognitive Reasoning for Superintelligent AI [73.75520820608232]
We introduce OlympicArena, which includes 11,163 bilingual problems across both text-only and interleaved text-image modalities.
These challenges encompass a wide range of disciplines spanning seven fields and 62 international Olympic competitions, rigorously examined for data leakage.
Our evaluations reveal that even advanced models like GPT-4o only achieve a 39.97% overall accuracy, illustrating current AI limitations in complex reasoning and multimodal integration.
arXiv Detail & Related papers (2024-06-18T16:20:53Z) - Applications of Explainable artificial intelligence in Earth system science [12.454478986296152]
This review aims to provide a foundational understanding of explainable AI (XAI).
XAI offers a set of powerful tools that make the models more transparent.
We identify four significant challenges that XAI faces within Earth system science (ESS).
A visionary outlook for ESS envisions a harmonious blend where process-based models govern the known, AI models explore the unknown, and XAI bridges the gap by providing explanations.
arXiv Detail & Related papers (2024-06-12T15:05:29Z) - Explainable Generative AI (GenXAI): A Survey, Conceptualization, and Research Agenda [1.8592384822257952]
We elaborate on why XAI has gained importance with the rise of GenAI and its challenges for explainability research.
We also unveil novel and emerging desiderata that explanations should fulfill, covering aspects such as verifiability, interactivity, security, and cost.
arXiv Detail & Related papers (2024-04-15T08:18:16Z) - Do We Need Explainable AI in Companies? Investigation of Challenges,
Expectations, and Chances from Employees' Perspective [0.8057006406834467]
Using AI poses new requirements for companies and their employees, including transparency and comprehensibility of AI systems.
The field of Explainable AI (XAI) aims to address these issues.
This project report paper provides insights into employees' needs and attitudes towards (X)AI.
arXiv Detail & Related papers (2022-10-07T13:11:28Z) - XAI for Cybersecurity: State of the Art, Challenges, Open Issues and
Future Directions [16.633632244131775]
AI models often appear as a black box, wherein developers are unable to explain or trace back the reasoning behind a specific decision.
Explainable AI (XAI) is a rapidly growing field of research that helps to extract information and visualize the results.
The paper provides a brief overview of cybersecurity and the various forms of attack.
Then the use of traditional AI techniques and their associated challenges are discussed, which opens the door to the use of XAI in various applications.
arXiv Detail & Related papers (2022-06-03T02:15:30Z) - Explainable AI (XAI): A Systematic Meta-Survey of Current Challenges and
Future Opportunities [0.0]
Explainable AI (XAI) has been proposed to make AI more transparent and thus advance the adoption of AI in critical domains.
This study presents a systematic meta-survey for challenges and future research directions in XAI.
arXiv Detail & Related papers (2021-11-11T19:06:13Z) - Stakeholder Participation in AI: Beyond "Add Diverse Stakeholders and
Stir" [76.44130385507894]
This paper aims to ground what we dub a 'participatory turn' in AI design by synthesizing existing literature on participation and through empirical analysis of its current practices.
Based on our literature synthesis and empirical research, this paper presents a conceptual framework for analyzing participatory approaches to AI design.
arXiv Detail & Related papers (2021-11-01T17:57:04Z) - A User-Centred Framework for Explainable Artificial Intelligence in
Human-Robot Interaction [70.11080854486953]
We propose a user-centred framework for XAI that focuses on its social-interactive aspect.
The framework aims to provide a structure for interactive XAI solutions thought for non-expert users.
arXiv Detail & Related papers (2021-09-27T09:56:23Z) - Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can play this role by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.