Is Task-Agnostic Explainable AI a Myth?
- URL: http://arxiv.org/abs/2307.06963v1
- Date: Thu, 13 Jul 2023 07:48:04 GMT
- Title: Is Task-Agnostic Explainable AI a Myth?
- Authors: Alicja Chaszczewicz
- Abstract summary: Our work serves as a framework for unifying the challenges of contemporary explainable AI (XAI)
We demonstrate that while XAI methods provide supplementary and potentially useful output for machine learning models, researchers and decision-makers should be mindful of their conceptual and technical limitations.
We examine three XAI research avenues spanning image, textual, and graph data, covering saliency, attention, and graph-type explainers.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Our work serves as a framework for unifying the challenges of contemporary
explainable AI (XAI). We demonstrate that while XAI methods provide
supplementary and potentially useful output for machine learning models,
researchers and decision-makers should be mindful of their conceptual and
technical limitations, which frequently result in these methods themselves
becoming black boxes. We examine three XAI research avenues spanning image,
textual, and graph data, covering saliency, attention, and graph-type
explainers. Despite the varying contexts and timeframes of the mentioned cases,
the same persistent roadblocks emerge, highlighting the need for a conceptual
breakthrough in the field to address the challenge of compatibility between XAI
methods and application tasks.
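As a concrete illustration of the saliency family examined in the paper, the sketch below computes a vanilla gradient saliency map for an image classifier. This is a minimal example of our own, not code from the paper; it assumes PyTorch and torchvision (>= 0.13) are installed, and the ResNet-18 model is an arbitrary stand-in for any differentiable classifier.

```python
# Minimal sketch of a vanilla gradient saliency map, assuming PyTorch and
# torchvision (>= 0.13) are installed. Illustrative only, not the paper's code.
import torch
from torchvision import models

# Any differentiable classifier works; ResNet-18 is an arbitrary stand-in.
# The first call downloads pretrained weights.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

# Stand-in input; in practice this would be a normalized 224x224 image.
x = torch.rand(1, 3, 224, 224, requires_grad=True)

logits = model(x)
top_class = logits.argmax(dim=1).item()

# Gradient of the top-class score with respect to the input pixels.
logits[0, top_class].backward()

# Saliency: largest absolute gradient across the colour channels.
saliency = x.grad.abs().max(dim=1).values.squeeze(0)
print(saliency.shape)  # torch.Size([224, 224])
```

The map simply ranks input pixels by how strongly the top-class score reacts to them; the paper's point is precisely that such rankings can be hard to tie back to the application task.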
Related papers
- More Questions than Answers? Lessons from Integrating Explainable AI into a Cyber-AI Tool [1.5711133309434766]
We describe a preliminary case study on the use of XAI for source code classification.
We find that the outputs of state-of-the-art saliency explanation techniques are lost in translation when interpreted by people with little AI expertise.
We outline unaddressed gaps in practical and effective XAI, then touch on how emerging technologies like Large Language Models (LLMs) could mitigate these existing obstacles.
arXiv Detail & Related papers (2024-08-08T20:09:31Z)
- Gradient based Feature Attribution in Explainable AI: A Technical Review [13.848675695545909]
The surge in black-box AI models has prompted the need to explain their internal mechanisms and justify their reliability.
Gradient-based explanations can be directly adopted for neural network models.
We introduce both human and quantitative evaluations to measure algorithm performance.
arXiv Detail & Related papers (2024-03-15T15:49:31Z)
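To make the gradient-based family concrete, here is a minimal sketch of integrated gradients, one widely used attribution method from this literature. The code is our own illustration, not the review's: it assumes PyTorch, a zero baseline, and a Riemann-sum approximation of the path integral, and the toy linear model is only there to make the example runnable.

```python
# Minimal sketch of integrated gradients in plain PyTorch. Our own
# illustration under simplifying assumptions (zero baseline, Riemann sum),
# not code from the review.
import torch

def integrated_gradients(model, x, target, baseline=None, steps=32):
    """Approximate integrated gradients of the `target` logit w.r.t. x."""
    if baseline is None:
        baseline = torch.zeros_like(x)  # assumption: zero baseline
    total_grad = torch.zeros_like(x)
    for alpha in torch.linspace(0.0, 1.0, steps):
        # Point on the straight-line path from the baseline to the input.
        point = (baseline + alpha * (x - baseline)).requires_grad_(True)
        score = model(point)[0, target]
        grad, = torch.autograd.grad(score, point)
        total_grad += grad
    # Average path gradient, scaled by the input-baseline difference.
    return (x - baseline) * total_grad / steps

# Toy usage with a random linear "model" so the sketch runs as-is.
model = torch.nn.Linear(4, 3)
x = torch.rand(1, 4)
print(integrated_gradients(model, x, target=0))  # one score per feature
```

Each returned value is an attribution score for one input feature; their sum approximates the difference between the model's score at the input and at the baseline (the completeness property).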
- Opening the Black-Box: A Systematic Review on Explainable AI in Remote Sensing [51.524108608250074]
Black-box machine learning approaches have become a dominant modeling paradigm for knowledge extraction in remote sensing.
We perform a systematic review to identify the key trends in the field and shed light on novel explainable AI approaches.
We also give a detailed outlook on the challenges and promising research directions.
arXiv Detail & Related papers (2024-02-21T13:19:58Z)
- A Brief Review of Explainable Artificial Intelligence in Healthcare [7.844015105790313]
XAI refers to the techniques and methods for building AI applications whose outputs end users can interpret.
Model explainability and interpretability are vital to the successful deployment of AI models in healthcare practice.
arXiv Detail & Related papers (2023-04-04T05:41:57Z)
- Seamful XAI: Operationalizing Seamful Design in Explainable AI [59.89011292395202]
Mistakes in AI systems are inevitable, arising from both technical limitations and sociotechnical gaps.
We propose that seamful design can foster AI explainability by revealing sociotechnical and infrastructural mismatches.
We explore this process with 43 AI practitioners and real end-users.
arXiv Detail & Related papers (2022-11-12T21:54:05Z)
- Connecting Algorithmic Research and Usage Contexts: A Perspective of Contextualized Evaluation for Explainable AI [65.44737844681256]
A lack of consensus on how to evaluate explainable AI (XAI) hinders the advancement of the field.
We argue that one way to close the gap is to develop evaluation methods that account for different user requirements.
arXiv Detail & Related papers (2022-06-22T05:17:33Z)
- Explainable Artificial Intelligence (XAI) for Internet of Things: A Survey [1.7205106391379026]
The black-box nature of Artificial Intelligence (AI) models does not allow users to comprehend, and sometimes trust, the output such models create.
In AI applications, where not only the results but also the decision paths to the results are critical, such black-box AI models are not sufficient.
Explainable Artificial Intelligence (XAI) addresses this problem and defines a set of AI models that are interpretable by the users.
arXiv Detail & Related papers (2022-06-07T08:22:30Z)
- A User-Centred Framework for Explainable Artificial Intelligence in Human-Robot Interaction [70.11080854486953]
We propose a user-centred framework for XAI that focuses on its social-interactive aspect.
The framework aims to provide a structure for interactive XAI solutions designed for non-expert users.
arXiv Detail & Related papers (2021-09-27T09:56:23Z)
- Counterfactual Explanations as Interventions in Latent Space [62.997667081978825]
Counterfactual explanations aim to provide end users with a set of features that need to be changed in order to achieve a desired outcome.
Current approaches rarely take into account the feasibility of actions needed to achieve the proposed explanations.
We present Counterfactual Explanations as Interventions in Latent Space (CEILS), a methodology to generate counterfactual explanations.
arXiv Detail & Related papers (2021-06-14T20:48:48Z)
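For intuition about what a counterfactual explainer computes, the sketch below runs a generic gradient-based counterfactual search in input space: it perturbs the input until the model prefers the desired class, while an L2 penalty keeps the result close to the original point. This is a simplified stand-in of our own, not CEILS itself, which instead intervenes in a latent space that captures the causal relations between features precisely to keep the suggested changes feasible.

```python
# Generic gradient-based counterfactual search in input space. A simplified
# stand-in of our own, not CEILS: feasibility constraints and the causal
# latent space are deliberately omitted.
import torch

def counterfactual_search(model, x, desired_class, lam=0.1, lr=0.05, steps=200):
    """Find x_cf close to x that the model assigns to `desired_class`."""
    x_cf = x.clone().detach().requires_grad_(True)
    optimizer = torch.optim.Adam([x_cf], lr=lr)
    target = torch.tensor([desired_class])
    for _ in range(steps):
        optimizer.zero_grad()
        # Cross-entropy pushes x_cf toward the desired class; the L2 term
        # keeps the counterfactual close to the original input.
        loss = (torch.nn.functional.cross_entropy(model(x_cf), target)
                + lam * torch.sum((x_cf - x) ** 2))
        loss.backward()
        optimizer.step()
    return x_cf.detach()

# Toy usage: a random linear classifier over 5 features.
model = torch.nn.Linear(5, 2)
x = torch.rand(1, 5)
x_cf = counterfactual_search(model, x, desired_class=1)
print("suggested feature changes:", (x_cf - x).squeeze())
```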
- Explainability in Deep Reinforcement Learning [68.8204255655161]
We review recent work toward attaining Explainable Reinforcement Learning (XRL).
In critical situations where it is essential to justify and explain the agent's behaviour, better explainability and interpretability of RL models could help gain scientific insight into the inner workings of what is still considered a black box.
arXiv Detail & Related papers (2020-08-15T10:11:42Z)
- Opportunities and Challenges in Explainable Artificial Intelligence (XAI): A Survey [2.7086321720578623]
The black-box nature of deep neural networks challenges their use in mission-critical applications.
XAI promotes a set of tools, techniques, and algorithms that can generate high-quality, interpretable, intuitive, human-understandable explanations of AI decisions.
arXiv Detail & Related papers (2020-06-16T02:58:10Z)