Natural Example-Based Explainability: a Survey
- URL: http://arxiv.org/abs/2309.03234v1
- Date: Tue, 5 Sep 2023 09:46:20 GMT
- Title: Natural Example-Based Explainability: a Survey
- Authors: Antonin Poché, Lucas Hervier, Mohamed-Chafik Bakkay
- Abstract summary: This paper provides an overview of the state-of-the-art in natural example-based XAI.
It explores the following families of methods: similar examples, counterfactuals and semi-factuals, influential instances, prototypes, and concepts.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Explainable Artificial Intelligence (XAI) has become increasingly significant
for improving the interpretability and trustworthiness of machine learning
models. While saliency maps have stolen the show for the last few years in the
XAI field, their ability to reflect models' internal processes has been
questioned. Although less in the spotlight, example-based XAI methods have
continued to improve. This family encompasses methods that use examples as explanations
for a machine learning model's predictions. This aligns with the psychological
mechanisms of human reasoning and makes example-based explanations natural and
intuitive for users to understand. Indeed, humans learn and reason by forming
mental representations of concepts based on examples.
This paper provides an overview of the state-of-the-art in natural
example-based XAI, describing the pros and cons of each approach. A "natural"
example simply means that it is directly drawn from the training data without
involving any generative process. Methods that require generating examples are
excluded because of the need for plausibility, which is to some extent required
to gain a user's trust. Consequently, this paper explores the following
families of methods: similar examples, counterfactuals and semi-factuals,
influential instances, prototypes, and concepts. In particular, it compares
their semantic definitions, cognitive impact, and added value. We hope it will
encourage and facilitate future work on natural
example-based XAI.
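To make two of the surveyed families concrete, here is a minimal sketch of retrieving natural similar examples and a natural counterfactual. It assumes scikit-learn, a fitted classifier `model` exposing `predict`, and an embedding function `embed` (e.g. a network's penultimate layer); these names are illustrative assumptions, not code from the survey.

```python
# Sketch of two "natural" example-based explanations: both are
# retrieved directly from the training set, with no generative step.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def similar_examples(x, X_train, embed, k=3):
    """Indices of the k training points closest to x in embedding space."""
    nn = NearestNeighbors(n_neighbors=k).fit(embed(X_train))
    _, idx = nn.kneighbors(embed(x[None, :]))
    return idx[0]

def natural_counterfactual(x, X_train, model, embed):
    """Index of the nearest training point that the model labels differently."""
    pred = model.predict(x[None, :])[0]
    candidates = np.flatnonzero(model.predict(X_train) != pred)
    dists = np.linalg.norm(embed(X_train[candidates]) - embed(x[None, :]), axis=1)
    return candidates[np.argmin(dists)]
```

Both outputs are "natural" in the paper's sense: the explanation shown to the user is an actual training instance, which sidesteps the plausibility concerns raised above for generated examples.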
Related papers
- Explaining Explainability: Towards Deeper Actionable Insights into Deep
Learning through Second-order Explainability [70.60433013657693]
Second-order explainable AI (SOXAI) was recently proposed to extend explainable AI (XAI) from the instance level to the dataset level.
We demonstrate for the first time, via example classification and segmentation cases, that eliminating irrelevant concepts from the training set based on actionable insights from SOXAI can enhance a model's performance.
arXiv Detail & Related papers (2023-06-14T23:24:01Z) - Characterizing the contribution of dependent features in XAI methods [6.990173577370281]
We propose a proxy that modifies the outcome of any XAI feature-ranking method to account for the dependency among the predictors.
The proposed approach is model-agnostic and makes it simple to calculate each predictor's impact on the model in the presence of collinearity.
arXiv Detail & Related papers (2023-04-04T11:25:57Z) - Beyond Explaining: Opportunities and Challenges of XAI-Based Model
Improvement [75.00655434905417]
Explainable Artificial Intelligence (XAI) is an emerging research field bringing transparency to highly complex machine learning (ML) models.
This paper offers a comprehensive overview of techniques that apply XAI in practice to improve various properties of ML models.
We show empirically, through experiments in toy and realistic settings, how explanations can help improve properties such as a model's generalization ability or reasoning.
arXiv Detail & Related papers (2022-03-15T15:44:28Z) - Diagnosing AI Explanation Methods with Folk Concepts of Behavior [70.10183435379162]
We consider "success" to depend not only on what information the explanation contains, but also on what information the human explainee understands from it.
We use folk concepts of behavior as a framework of social attribution by the human explainee.
arXiv Detail & Related papers (2022-01-27T00:19:41Z) - A Practical Tutorial on Explainable AI Techniques [5.671062637797752]
This tutorial is meant to be the go-to handbook for any audience with a computer science background.
It aims at providing intuitive insights into machine learning models, accompanied by straightforward, fast, and intuitive explanations out of the box.
arXiv Detail & Related papers (2021-11-13T17:47:31Z) - Mitigating belief projection in explainable artificial intelligence via
Bayesian Teaching [4.864819846886143]
Explainable AI (XAI) attempts to improve human understanding but rarely accounts for how people typically reason about unfamiliar agents.
We propose explicitly modeling the human explainee via Bayesian Teaching, which evaluates explanations by how much they shift explainees' inferences toward a desired goal.
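As a rough illustration of the Bayesian Teaching idea (not the authors' implementation), a candidate explanation can be scored by how much posterior mass it shifts onto the target hypothesis in a simulated learner; the hypothesis space and likelihoods below are invented for the example.

```python
# Toy sketch: score candidate explanations by the posterior belief a
# simulated Bayesian learner would assign to the desired hypothesis.
import numpy as np

def explanation_score(likelihoods, prior, target):
    """likelihoods: P(explanation | hypothesis) per hypothesis;
    prior: the learner's prior over hypotheses;
    returns the posterior mass placed on the target hypothesis."""
    posterior = likelihoods * prior
    posterior /= posterior.sum()
    return posterior[target]

prior = np.array([0.5, 0.3, 0.2])        # simulated learner's prior beliefs
candidates = {                            # P(explanation | hypothesis)
    "e1": np.array([0.9, 0.4, 0.1]),
    "e2": np.array([0.6, 0.6, 0.6]),
}
best = max(candidates, key=lambda e: explanation_score(candidates[e], prior, target=0))
print(best)  # -> "e1": it shifts the learner's posterior most toward hypothesis 0
```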
arXiv Detail & Related papers (2021-02-07T21:23:24Z) - This is not the Texture you are looking for! Introducing Novel
Counterfactual Explanations for Non-Experts using Generative Adversarial
Learning [59.17685450892182]
Counterfactual explanation systems try to enable counterfactual reasoning by modifying the input image.
We present a novel approach to generate such counterfactual image explanations based on adversarial image-to-image translation techniques.
Our approach leads to significantly better results regarding mental models, explanation satisfaction, trust, emotions, and self-efficacy than two state-of-the-art systems.
arXiv Detail & Related papers (2020-12-22T10:08:05Z) - Explanations of Black-Box Model Predictions by Contextual Importance and
Utility [1.7188280334580195]
We present the Contextual Importance (CI) and Contextual Utility (CU) concepts to extract explanations easily understandable by experts as well as novice users.
This method explains the prediction results without transforming the model into an interpretable one.
We show the utility of explanations on a car-selection example and Iris flower classification by presenting complete (i.e., the causes of an individual prediction) and contrastive explanations.
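For intuition, here is a minimal sketch of one common formulation of CI and CU, estimated by varying a single feature over a plausible range while holding the others fixed; the function name and signature are assumptions, not the authors' code.

```python
# Contextual Importance (CI): how much of the output range feature j
# controls in this context. Contextual Utility (CU): where the current
# value of feature j sits within that range.
import numpy as np

def ci_cu(model, x, j, feature_range, out_min=0.0, out_max=1.0, n=100):
    """Estimate CI and CU of feature j for instance x.
    feature_range: (low, high) plausible values for feature j;
    out_min/out_max: bounds of the model output (e.g. 0..1 for a probability)."""
    x = np.asarray(x, dtype=float)
    samples = np.tile(x, (n, 1))
    samples[:, j] = np.linspace(*feature_range, n)   # sweep feature j only
    outs = model.predict(samples)
    cmin, cmax = outs.min(), outs.max()
    y = model.predict(x[None, :])[0]                 # output for the actual instance
    ci = (cmax - cmin) / (out_max - out_min)
    cu = (y - cmin) / (cmax - cmin) if cmax > cmin else 0.5
    return ci, cu
```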
arXiv Detail & Related papers (2020-05-30T06:49:50Z) - Explainable Reinforcement Learning: A Survey [0.0]
Explainable Artificial Intelligence (XAI) has gained increased traction over the last few years.
XAI models exhibit one detrimental characteristic: a performance-transparency trade-off.
This survey attempts to address this gap by offering an overview of Explainable Reinforcement Learning (XRL) methods.
arXiv Detail & Related papers (2020-05-13T10:52:49Z) - A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z) - Explainable Active Learning (XAL): An Empirical Study of How Local
Explanations Impact Annotator Experience [76.9910678786031]
We propose a novel paradigm of explainable active learning (XAL), by introducing techniques from the recently surging field of explainable AI (XAI) into an Active Learning setting.
Our study shows the benefits of AI explanations as interfaces for machine teaching, namely supporting trust calibration and enabling rich forms of teaching feedback, as well as potential drawbacks: an anchoring effect on the model's judgment and increased cognitive workload.
arXiv Detail & Related papers (2020-01-24T22:52:18Z)