Beyond Explainable AI (XAI): An Overdue Paradigm Shift and Post-XAI Research Directions
- URL: http://arxiv.org/abs/2602.24176v1
- Date: Fri, 27 Feb 2026 16:58:27 GMT
- Title: Beyond Explainable AI (XAI): An Overdue Paradigm Shift and Post-XAI Research Directions
- Authors: Saleh Afroogh, Seyd Ishtiaque Ahmed, Petra Ahrweiler, David Alvarez-Melis, Mansur Maturidi Arief, Emilia Barakova, Falco J. Bargagli-Stoffi, Erdem Biyik, Hanjie Chen, Xiang 'Anthony' Chen, Robert Clements, Keeley Crockett, Amit Dhurandhar, Fethiye Irmak Dogan, Mollie Dollinger, Motahhare Eslami, Aldo A Faisal, Arya Farahi, Melanie Fernandez Pradie, Saadia Gabrie, Diego Garcia-Olano, Marzyeh Ghassemi, Shaona Ghosh, Hatice Gunes, Ehsan Hajiramezanali, Stefan Haufe, Biwei Huang, Angel Hwang, Md Tauhidul Islam, Junfeng Jiao, Amir-Hossein Karimi, Saber Kazeminasab, Anastasia Kuzminykh, William La Cava, Brian Y. Lim, Xiaofeng Liu, Mohammad R. K. Mofrad, Alicia Parrish, Maria Perez-Ortiz, Shriti Raj, Swabha Swayamdipta, Salmon Talebi, Kush R. Varshney, Mihaela Vorvoreanu, Lily Weng, Alice Xiang, Yiming Xu, Ding Zhao, Jieyu Zhao
- Abstract summary: This study examines Explainable Artificial Intelligence (XAI) approaches, focusing on deep neural networks (DNNs) and large language models (LLMs). We discuss critical symptoms that stem from deeper root causes (i.e., two paradoxes, two conceptual confusions, and five false assumptions). To move beyond XAI's limitations, we propose a four-pronged paradigm shift toward reliable and certified AI development.
- Score: 95.59915390053588
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This study provides a cross-disciplinary examination of Explainable Artificial Intelligence (XAI) approaches, focusing on deep neural networks (DNNs) and large language models (LLMs), and identifies empirical and conceptual limitations in current XAI. We discuss critical symptoms that stem from deeper root causes (i.e., two paradoxes, two conceptual confusions, and five false assumptions). These fundamental problems within the current XAI research field reveal three insights: experimentally, XAI exhibits significant flaws; conceptually, it is paradoxical; and pragmatically, further attempts to reform the paradoxical XAI might exacerbate its confusion, demanding fundamental shifts and new research directions. To move beyond XAI's limitations, we propose a four-pronged, synthesized paradigm shift toward reliable and certified AI development. The four components are: verification-focused Interactive AI (IAI), which establishes scientific community protocols for certifying AI system performance rather than attempting post-hoc explanations; AI Epistemology, for rigorous scientific foundations; User-Sensible AI, to create context-aware systems tailored to specific user communities; and Model-Centered Interpretability, for faithful technical analysis. Together these offer comprehensive post-XAI research directions.
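The abstract contrasts certifying system performance with "attempting post-hoc explanations." As a concrete illustration of the kind of post-hoc method under critique, the sketch below computes a gradient-based saliency attribution for a toy logistic model in NumPy. The model, weights, and input are all hypothetical, chosen only to show the mechanics, not taken from the paper.

```python
import numpy as np

# A minimal sketch of post-hoc feature attribution (gradient saliency),
# the style of XAI method the paper critiques. We "explain" a toy
# logistic model's prediction by the gradient of its output with
# respect to each input feature. All values here are illustrative.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x, w, b):
    return sigmoid(w @ x + b)

def saliency(x, w, b):
    # d/dx_i sigmoid(w.x + b) = sigmoid'(w.x + b) * w_i
    p = predict(x, w, b)
    return p * (1.0 - p) * w

w = np.array([2.0, -1.0, 0.1])   # hypothetical learned weights
b = 0.0
x = np.array([0.5, 0.5, 0.5])    # hypothetical input

attr = saliency(x, w, b)
# The feature with the largest |gradient| gets flagged as "most
# important"; the paper's point is that such post-hoc scores need
# not be faithful to what the model actually computes.
print(attr)
```

For this linear toy model the saliency is exactly proportional to the weights, so the "explanation" is trivially faithful; for a deep network the same procedure can be unstable or misleading, which is the gap the abstract's paradigm shift targets.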
Related papers
- Explainable AI as a Double-Edged Sword in Dermatology: The Impact on Clinicians versus The Public [46.86429592892395]
  Explainable AI (XAI) addresses this by providing insight into AI decision-making. We present results from two large-scale experiments combining a fairness-based diagnosis AI model and different XAI explanations.
  arXiv Detail & Related papers (2025-12-14T00:06:06Z)
- A Multi-Layered Research Framework for Human-Centered AI: Defining the Path to Explainability and Trust [2.4578723416255754]
  Human-Centered AI (HCAI) emphasizes alignment with human values, while Explainable AI (XAI) enhances transparency by making AI decisions more understandable. This paper presents a novel three-layered framework that bridges HCAI and XAI to establish a structured explainability paradigm. Our findings advance Human-Centered Explainable AI (HCXAI), fostering AI systems that are transparent, adaptable, and ethically aligned.
  arXiv Detail & Related papers (2025-04-14T01:29:30Z)
- Applications of Explainable artificial intelligence in Earth system science [12.454478986296152]
  This review aims to provide a foundational understanding of explainable AI (XAI). XAI offers a set of powerful tools that make models more transparent. We identify four significant challenges that XAI faces within Earth system science (ESS). A visionary outlook for ESS envisions a harmonious blend where process-based models govern the known, AI models explore the unknown, and XAI bridges the gap by providing explanations.
  arXiv Detail & Related papers (2024-06-12T15:05:29Z)
- Evolutionary Computation and Explainable AI: A Roadmap to Understandable Intelligent Systems [37.02462866600066]
  Evolutionary computation (EC) offers significant potential to contribute to explainable AI (XAI). This paper provides an introduction to XAI and reviews current techniques for explaining machine learning models. We then explore how EC can be leveraged in XAI and examine existing XAI approaches that incorporate EC techniques.
  arXiv Detail & Related papers (2024-06-12T02:06:24Z)
- How Human-Centered Explainable AI Interfaces Are Designed and Evaluated: A Systematic Survey [48.97104365617498]
  The emerging area of Explainable Interfaces (EIs) focuses on the user interface and user experience design aspects of XAI. This paper presents a systematic survey of 53 publications to identify current trends in human-XAI interaction and promising directions for EI design and development.
  arXiv Detail & Related papers (2024-03-21T15:44:56Z)
- The role of causality in explainable artificial intelligence [1.049712834719005]
  Causality and eXplainable Artificial Intelligence (XAI) have developed as separate fields in computer science. We investigate the literature to understand how and to what extent causality and XAI are intertwined.
  arXiv Detail & Related papers (2023-09-18T16:05:07Z)
- Towards Human Cognition Level-based Experiment Design for Counterfactual Explanations (XAI) [68.8204255655161]
  The emphasis of XAI research appears to have turned to a more pragmatic explanation approach for better understanding. An extensive area where cognitive science research may substantially influence XAI advancements is evaluating user knowledge and feedback. We propose a framework to experiment with generating and evaluating explanations on the grounds of different cognitive levels of understanding.
  arXiv Detail & Related papers (2022-10-31T19:20:22Z)
- Connecting Algorithmic Research and Usage Contexts: A Perspective of Contextualized Evaluation for Explainable AI [65.44737844681256]
  A lack of consensus on how to evaluate explainable AI (XAI) hinders the advancement of the field. We argue that one way to close the gap is to develop evaluation methods that account for different user requirements.
  arXiv Detail & Related papers (2022-06-22T05:17:33Z)
- Human-Centered Explainable AI (XAI): From Algorithms to User Experiences [29.10123472973571]
  Explainable AI (XAI) has produced a vast collection of algorithms in recent years. The field is starting to embrace interdisciplinary perspectives and human-centered approaches.
  arXiv Detail & Related papers (2021-10-20T21:33:46Z)
- A User-Centred Framework for Explainable Artificial Intelligence in Human-Robot Interaction [70.11080854486953]
  We propose a user-centred framework for XAI that focuses on its social-interactive aspect. The framework aims to provide a structure for interactive XAI solutions designed for non-expert users.
  arXiv Detail & Related papers (2021-09-27T09:56:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.