Concept-based Explainable Artificial Intelligence: A Survey
- URL: http://arxiv.org/abs/2312.12936v1
- Date: Wed, 20 Dec 2023 11:27:21 GMT
- Title: Concept-based Explainable Artificial Intelligence: A Survey
- Authors: Eleonora Poeta, Gabriele Ciravegna, Eliana Pastor, Tania Cerquitelli,
Elena Baralis
- Abstract summary: The use of raw features to provide explanations has been disputed in several recent works.
A unified categorization and precise field definition are still missing.
This paper fills the gap by offering a thorough review of C-XAI approaches.
- Score: 16.580100294489508
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The field of explainable artificial intelligence emerged in response to the
growing need for more transparent and reliable models. However, the use of raw
features to provide explanations has been disputed in several recent works,
which advocate for more user-understandable explanations. To address this issue, a
wide range of papers proposing Concept-based eXplainable Artificial
Intelligence (C-XAI) methods have arisen in recent years. Nevertheless, a
unified categorization and precise field definition are still missing. This
paper fills the gap by offering a thorough review of C-XAI approaches. We
define and identify different concepts and explanation types. We provide a
taxonomy identifying nine categories and propose guidelines for selecting a
suitable category based on the development context. Additionally, we report
common evaluation strategies, including metrics, human evaluations, and datasets
employed, aiming to assist the development of future methods. We believe this
survey will serve researchers, practitioners, and domain experts in
comprehending and advancing this innovative field.
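To make the notion of a concept-based explanation concrete, the sketch below illustrates one widely used C-XAI primitive, the Concept Activation Vector (CAV) popularized by TCAV: a linear probe is trained to separate a layer's activations on concept examples from random counterexamples, and the probe's unit normal defines a concept direction along which a prediction's sensitivity can be measured. The layer width, toy activations, and gradient stand-in below are illustrative assumptions, not details taken from the survey.

```python
# Minimal sketch of a Concept Activation Vector (CAV), one C-XAI primitive
# covered by concept-based surveys. The activations are synthetic stand-ins
# for a real network layer; only the procedure is the point here.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical 512-d layer activations: 200 examples showing a concept
# (e.g. "striped") and 200 random counterexamples.
concept_acts = rng.normal(loc=0.5, scale=1.0, size=(200, 512))
random_acts = rng.normal(loc=0.0, scale=1.0, size=(200, 512))

X = np.vstack([concept_acts, random_acts])
y = np.concatenate([np.ones(200), np.zeros(200)])

# A linear probe separates concept from non-concept activations; the unit
# normal of its decision boundary is the CAV.
probe = LogisticRegression(max_iter=1000).fit(X, y)
cav = probe.coef_[0] / np.linalg.norm(probe.coef_[0])

# Concept sensitivity of one prediction: the directional derivative of a
# class logit along the CAV (here a random stand-in for the true gradient).
grad_of_logit = rng.normal(size=512)  # stand-in for d(logit)/d(activations)
print(f"concept sensitivity: {grad_of_logit @ cav:+.3f}")
```

In TCAV-style methods this sensitivity is aggregated over many inputs to score how much a concept matters to a class; other C-XAI families surveyed here instead build concepts into the model itself, as in concept bottleneck architectures.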
Related papers
- User-centric evaluation of explainability of AI with and for humans: a comprehensive empirical study [5.775094401949666]
This study is situated in the field of Human-Centered Artificial Intelligence (HCAI).
It focuses on the results of a user-centered assessment of commonly used eXplainable Artificial Intelligence (XAI) algorithms.
arXiv Detail & Related papers (2024-10-21T12:32:39Z)
- A survey on Concept-based Approaches For Model Improvement [2.1516043775965565]
Concepts are known to be the foundation of human thinking.
We provide a systematic review and taxonomy of various concept representations and their discovery algorithms in Deep Neural Networks (DNNs).
We also provide details on the concept-based model improvement literature, marking the first comprehensive survey of these methods.
arXiv Detail & Related papers (2024-03-21T17:09:20Z)
- Gradient based Feature Attribution in Explainable AI: A Technical Review [13.848675695545909]
The surge in black-box AI models has prompted the need to explain their internal mechanisms and justify their reliability.
Gradient-based explanations can be directly adopted for neural network models (a minimal sketch follows this entry).
We introduce both human and quantitative evaluations to measure algorithm performance.
arXiv Detail & Related papers (2024-03-15T15:49:31Z)
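As a concrete illustration of the gradient-based attribution reviewed above, here is a minimal vanilla-saliency sketch: the gradient of the top class score with respect to the input ranks each pixel by local sensitivity. The tiny linear model and random input are placeholder assumptions, not taken from the review.

```python
# Minimal sketch of vanilla gradient saliency for a neural network.
# The model and input are toy placeholders; real use would load a trained
# network and an actual image.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in classifier: 3x32x32 images -> 10 classes.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
model.eval()

x = torch.rand(1, 3, 32, 32, requires_grad=True)  # toy input image

# Backpropagate the top class score to the input pixels.
score = model(x)[0].max()
score.backward()

# Saliency map: per-pixel gradient magnitude, max over color channels.
saliency = x.grad.abs().max(dim=1).values.squeeze(0)  # shape (32, 32)
print(saliency.shape, float(saliency.max()))
```

Variants such as Gradient x Input and SmoothGrad refine this basic signal while keeping the same gradient-based core.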
- Opening the Black-Box: A Systematic Review on Explainable AI in Remote Sensing [51.524108608250074]
Black-box machine learning approaches have become a dominant modeling paradigm for knowledge extraction in remote sensing.
We perform a systematic review to identify the key trends in the field and shed light on novel explainable AI approaches.
We also give a detailed outlook on the challenges and promising research directions.
arXiv Detail & Related papers (2024-02-21T13:19:58Z)
- Evaluating the Utility of Model Explanations for Model Development [54.23538543168767]
We evaluate whether explanations can improve human decision-making in practical scenarios of machine learning model development.
To our surprise, we did not find evidence of significant improvement on tasks when users were provided with any of the saliency maps.
These findings suggest caution regarding the usefulness of saliency-based explanations and their potential for misunderstanding.
arXiv Detail & Related papers (2023-12-10T23:13:23Z)
- A Taxonomy of Decentralized Identifier Methods for Practitioners [50.76687001060655]
A core part of the new identity management paradigm of Self-Sovereign Identity (SSI) is the W3C Decentralized Identifiers (DIDs) standard.
We propose a taxonomy of DID methods with the goal of empowering practitioners to make informed decisions when selecting DID methods.
arXiv Detail & Related papers (2023-10-18T13:01:40Z)
- Helpful, Misleading or Confusing: How Humans Perceive Fundamental Building Blocks of Artificial Intelligence Explanations [11.667611038005552]
We take a step back from sophisticated predictive algorithms and look into explainability of simple decision-making models.
We aim to assess how people perceive the comprehensibility of their different representations.
This allows us to capture how diverse stakeholders judge intelligibility of fundamental concepts that more elaborate artificial intelligence explanations are built from.
arXiv Detail & Related papers (2023-03-02T03:15:35Z)
- Towards Human Cognition Level-based Experiment Design for Counterfactual Explanations (XAI) [68.8204255655161]
The emphasis of XAI research appears to have turned to a more pragmatic explanation approach for better understanding.
An extensive area where cognitive science research may substantially influence XAI advancements is evaluating user knowledge and feedback.
We propose a framework to experiment with generating and evaluating the explanations on the grounds of different cognitive levels of understanding.
arXiv Detail & Related papers (2022-10-31T19:20:22Z)
- From Attribution Maps to Human-Understandable Explanations through Concept Relevance Propagation [16.783836191022445]
The field of eXplainable Artificial Intelligence (XAI) aims to bring transparency to today's powerful but opaque deep learning models.
While local XAI methods explain individual predictions in the form of attribution maps, global explanation techniques visualize what concepts a model has generally learned to encode.
arXiv Detail & Related papers (2022-06-07T12:05:58Z)
- A Diagnostic Study of Explainability Techniques for Text Classification [52.879658637466605]
We develop a list of diagnostic properties for evaluating existing explainability techniques.
We compare the saliency scores assigned by the explainability techniques with human annotations of salient input regions to find relations between a model's performance and the agreement of its rationales with human ones (a toy agreement computation follows this entry).
arXiv Detail & Related papers (2020-09-25T12:01:53Z)
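To illustrate the saliency-versus-human comparison described above, here is a toy agreement computation: per-token saliency scores are scored against a binary human rationale mask with average precision. The token scores and mask are invented for illustration; the diagnostic paper's actual protocol and metrics may differ.

```python
# Toy sketch: agreement between a technique's token saliency scores and a
# human-annotated rationale, measured with average precision. All values
# are invented for illustration.
import numpy as np
from sklearn.metrics import average_precision_score

# Hypothetical 8-token sentence: binary human rationale mask and the
# saliency score an explainability technique assigned to each token.
human_rationale = np.array([0, 1, 1, 0, 0, 0, 1, 0])
saliency_scores = np.array([0.1, 0.8, 0.6, 0.2, 0.1, 0.3, 0.7, 0.1])

agreement = average_precision_score(human_rationale, saliency_scores)
print(f"saliency/human agreement (AP): {agreement:.3f}")
```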
- Explainability in Deep Reinforcement Learning [68.8204255655161]
We review recent works aiming to attain Explainable Reinforcement Learning (XRL).
In critical situations where it is essential to justify and explain the agent's behaviour, better explainability and interpretability of RL models could help gain scientific insight into the inner workings of what is still considered a black box.
arXiv Detail & Related papers (2020-08-15T10:11:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.