Opportunities and Challenges in Explainable Artificial Intelligence
(XAI): A Survey
- URL: http://arxiv.org/abs/2006.11371v2
- Date: Tue, 23 Jun 2020 01:48:56 GMT
- Title: Opportunities and Challenges in Explainable Artificial Intelligence
(XAI): A Survey
- Authors: Arun Das and Paul Rad
- Abstract summary: The black-box nature of deep neural networks challenges their use in mission-critical applications.
XAI promotes a set of tools, techniques, and algorithms that can generate high-quality, interpretable, intuitive, human-understandable explanations of AI decisions.
- Score: 2.7086321720578623
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Nowadays, deep neural networks are widely used in mission-critical systems
such as healthcare, self-driving vehicles, and the military, which have a direct
impact on human lives. However, the black-box nature of deep neural networks
challenges their use in mission-critical applications, raising ethical and
judicial concerns and inducing a lack of trust. Explainable Artificial Intelligence
(XAI) is a field of Artificial Intelligence (AI) that promotes a set of tools,
techniques, and algorithms that can generate high-quality, interpretable,
intuitive, human-understandable explanations of AI decisions. In addition to
providing a holistic view of the current XAI landscape in deep learning, this
paper provides mathematical summaries of seminal work. We start by proposing a
taxonomy and categorizing the XAI techniques based on their scope of
explanations, the methodology behind the algorithms, and the explanation level or usage,
which helps build trustworthy, interpretable, and self-explanatory deep
learning models. We then describe the main principles used in XAI research and
present the historical timeline for landmark studies in XAI from 2007 to 2020.
After explaining each category of algorithms and approaches in detail, we then
evaluate the explanation maps generated by eight XAI algorithms on image data,
discuss the limitations of this approach, and provide potential future
directions to improve XAI evaluation.
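To make the kind of explanation map discussed above concrete, here is a minimal sketch of a vanilla gradient (saliency) map for an image classifier in PyTorch. It is an illustrative example only: the torchvision ResNet-18 model, the preprocessing placeholders, and the saliency_map helper are assumptions, not the exact algorithms or evaluation setup used in the paper.
```python
# Illustrative sketch only: a vanilla gradient (saliency) explanation map.
# The torchvision ResNet-18 model and the `preprocess`/`pil_image` placeholders
# are assumptions, not the paper's exact evaluation setup.
import torch
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

def saliency_map(model, image, target_class=None):
    """Return |d(class score)/d(pixel)|, reduced over channels, as an HxW map."""
    image = image.clone().requires_grad_(True)        # image: 1 x 3 x H x W, normalized
    scores = model(image)                             # 1 x num_classes logits
    if target_class is None:
        target_class = int(scores.argmax(dim=1))      # explain the predicted class
    scores[0, target_class].backward()                # gradient of that class score
    return image.grad.abs().max(dim=1)[0].squeeze(0)  # channel-wise max -> heat map

# Hypothetical usage (preprocess and pil_image are placeholders):
# x = preprocess(pil_image).unsqueeze(0)
# heat = saliency_map(model, x)
```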
Related papers
- Explainable Artificial Intelligence: A Survey of Needs, Techniques, Applications, and Future Direction [5.417632175667161]
Explainable Artificial Intelligence (XAI) addresses these challenges by explaining how AI models make decisions and predictions.
Existing studies have examined the fundamental concepts of XAI, its general principles, and the scope of XAI techniques.
This paper provides a comprehensive literature review encompassing common terminologies and definitions, the need for XAI, beneficiaries of XAI, a taxonomy of XAI methods, and the application of XAI methods in different application areas.
arXiv Detail & Related papers (2024-08-30T21:42:17Z) - Evolutionary Computation and Explainable AI: A Roadmap to Understandable Intelligent Systems [37.02462866600066]
Evolutionary computation (EC) offers significant potential to contribute to explainable AI (XAI).
This paper provides an introduction to XAI and reviews current techniques for explaining machine learning models.
We then explore how EC can be leveraged in XAI and examine existing XAI approaches that incorporate EC techniques.
arXiv Detail & Related papers (2024-06-12T02:06:24Z) - Gradient based Feature Attribution in Explainable AI: A Technical Review [13.848675695545909]
The surge in black-box AI models has prompted the need to explain their internal mechanisms and justify their reliability.
Gradient-based explanations can be directly adopted for neural network models; a minimal sketch of one such method appears after this list.
We introduce both human and quantitative evaluations to measure algorithm performance.
arXiv Detail & Related papers (2024-03-15T15:49:31Z) - Seamful XAI: Operationalizing Seamful Design in Explainable AI [59.89011292395202]
Mistakes in AI systems are inevitable, arising from both technical limitations and sociotechnical gaps.
We propose that seamful design can foster AI explainability by revealing sociotechnical and infrastructural mismatches.
We explore this process with 43 AI practitioners and real end-users.
arXiv Detail & Related papers (2022-11-12T21:54:05Z) - Towards Human Cognition Level-based Experiment Design for Counterfactual
Explanations (XAI) [68.8204255655161]
The emphasis of XAI research appears to have turned to a more pragmatic explanation approach for better understanding.
An extensive area where cognitive science research may substantially influence XAI advancements is evaluating user knowledge and feedback.
We propose a framework to experiment with generating and evaluating the explanations on the grounds of different cognitive levels of understanding.
arXiv Detail & Related papers (2022-10-31T19:20:22Z) - Connecting Algorithmic Research and Usage Contexts: A Perspective of
Contextualized Evaluation for Explainable AI [65.44737844681256]
A lack of consensus on how to evaluate explainable AI (XAI) hinders the advancement of the field.
We argue that one way to close the gap is to develop evaluation methods that account for different user requirements.
arXiv Detail & Related papers (2022-06-22T05:17:33Z) - Human-Centered Explainable AI (XAI): From Algorithms to User Experiences [29.10123472973571]
Explainable AI (XAI) research has produced a vast collection of algorithms in recent years.
The field is starting to embrace inter-disciplinary perspectives and human-centered approaches.
arXiv Detail & Related papers (2021-10-20T21:33:46Z) - A User-Centred Framework for Explainable Artificial Intelligence in
Human-Robot Interaction [70.11080854486953]
We propose a user-centred framework for XAI that focuses on its social-interactive aspect.
The framework aims to provide a structure for interactive XAI solutions designed for non-expert users.
arXiv Detail & Related papers (2021-09-27T09:56:23Z) - Explainable Artificial Intelligence (XAI): An Engineering Perspective [0.0]
XAI is a set of techniques and methods for converting so-called black-box AI algorithms into white-box algorithms.
We discuss the stakeholders in XAI and describe the mathematical contours of XAI from an engineering perspective.
This work is an exploratory study to identify new avenues of research in the field of XAI.
arXiv Detail & Related papers (2021-01-10T19:49:12Z) - Explainability in Deep Reinforcement Learning [68.8204255655161]
We review recent work aimed at attaining Explainable Reinforcement Learning (XRL).
In critical situations where it is essential to justify and explain the agent's behaviour, better explainability and interpretability of RL models could help gain scientific insight into the inner workings of what is still considered a black box.
arXiv Detail & Related papers (2020-08-15T10:11:42Z) - A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of the structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
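Complementing the gradient-based feature attribution entry above, the sketch below shows one representative gradient-based method, integrated gradients, in plain PyTorch. The all-zeros baseline, the step count, and the function name are illustrative assumptions rather than details taken from any of the surveyed papers.
```python
# Hedged sketch of integrated gradients, a representative gradient-based
# attribution method; the zero baseline and 50 steps are illustrative choices.
import torch

def integrated_gradients(model, x, target_class, baseline=None, steps=50):
    """Approximate IG: (x - baseline) * average gradient along a straight path."""
    if baseline is None:
        baseline = torch.zeros_like(x)                # common but arbitrary baseline
    total_grad = torch.zeros_like(x)
    for alpha in torch.linspace(0.0, 1.0, steps):
        point = (baseline + alpha * (x - baseline)).detach().requires_grad_(True)
        score = model(point)[0, target_class]         # scalar score for the target class
        grad, = torch.autograd.grad(score, point)     # gradient at this path point
        total_grad += grad
    return (x - baseline) * total_grad / steps        # per-feature attribution
```
For image inputs, the resulting attributions are typically summed over colour channels and overlaid on the input image for visual inspection.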