What's meant by explainable model: A Scoping Review
- URL: http://arxiv.org/abs/2307.09673v3
- Date: Tue, 29 Aug 2023 13:30:12 GMT
- Title: What's meant by explainable model: A Scoping Review
- Authors: Mallika Mainali, Rosina O Weber
- Abstract summary: This paper investigates whether the term explainable model is adopted by authors under the assumption that incorporating a post-hoc XAI method suffices to characterize a model as explainable.
We found that 81% of the application papers that refer to their approaches as an explainable model do not conduct any form of evaluation on the XAI method they used.
- Score: 0.38252451346419336
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We often see the term explainable in the titles of papers that describe
applications based on artificial intelligence (AI). However, the literature in
explainable artificial intelligence (XAI) indicates that explanations in XAI
are application- and domain-specific, hence requiring evaluation whenever they
are employed to explain a model that makes decisions for a specific application
problem. Additionally, the literature reveals that the performance of post-hoc
methods, particularly feature attribution methods, varies substantially, hinting
that they do not represent a solution to AI explainability. Therefore, when
using XAI methods, the quality and suitability of their information outputs
should be evaluated within the specific application. For these reasons, we used
a scoping review methodology to investigate papers that apply AI models and
adopt methods to generate post-hoc explanations while referring to said models
as explainable. This paper investigates whether the term explainable model is
adopted by authors under the assumption that incorporating a post-hoc XAI
method suffices to characterize a model as explainable. To inspect this
problem, our review analyzes whether these papers conducted evaluations. We
found that 81% of the application papers that refer to their approaches as an
explainable model do not conduct any form of evaluation on the XAI method they
used.
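As a hedged illustration of the kind of in-application evaluation the abstract calls for (not taken from the reviewed papers), the sketch below runs a deletion-style faithfulness check: features are removed in the order ranked by a placeholder attribution, and the resulting drop in model confidence is compared against a random-order reference. The dataset, model, and attribution method are all illustrative stand-ins.

```python
# Minimal sketch (an assumption, not taken from the reviewed papers): a
# deletion-style faithfulness check for a feature attribution.  The dataset,
# model, and attribution method are illustrative placeholders.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

x = X[0]                    # instance whose explanation we want to evaluate
baseline = X.mean(axis=0)   # "deleted" features are replaced by the training mean
target = int(model.predict(x.reshape(1, -1))[0])

# Placeholder attribution: global feature_importances_ stands in for whatever
# post-hoc method (SHAP, LIME, saliency, ...) is actually being evaluated.
attribution = model.feature_importances_

def deletion_curve(order):
    """Model confidence in the predicted class as features are deleted in `order`."""
    probs, x_mod = [], x.copy()
    for idx in order:
        x_mod[idx] = baseline[idx]
        probs.append(model.predict_proba(x_mod.reshape(1, -1))[0, target])
    return np.array(probs)

rng = np.random.default_rng(0)
attr_curve = deletion_curve(np.argsort(-attribution))      # most important features first
rand_curve = deletion_curve(rng.permutation(x.shape[0]))   # random-order reference

# A faithful attribution should drive confidence down faster than a random
# ordering, i.e. produce a lower area under the deletion curve.
print("deletion AUC (attribution):", attr_curve.mean().round(3))
print("deletion AUC (random)     :", rand_curve.mean().round(3))
```

If the attribution's curve does not fall faster than the random one, the explanation says little about how the model actually uses its inputs, which is the kind of finding the review argues should be surfaced before a model is called explainable.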
Related papers
- Evaluating the Utility of Model Explanations for Model Development [54.23538543168767]
We evaluate whether explanations can improve human decision-making in practical scenarios of machine learning model development.
To our surprise, we did not find evidence of significant improvement on tasks when users were provided with any of the saliency maps.
These findings suggest caution about the usefulness of saliency-based explanations and their potential for misunderstanding.
arXiv Detail & Related papers (2023-12-10T23:13:23Z)
- Towards Explainable AI for Channel Estimation in Wireless Communications [1.0874597293913013]
The aim of the proposed XAI-CHEST scheme is to identify the relevant model inputs by inducing high noise on the irrelevant ones.
As a result, the behavior of the studied DL-based channel estimators can be further analyzed and evaluated.
arXiv Detail & Related papers (2023-07-03T11:51:00Z)
- Explaining Explainability: Towards Deeper Actionable Insights into Deep Learning through Second-order Explainability [70.60433013657693]
Second-order explainable AI (SOXAI) was recently proposed to extend explainable AI (XAI) from the instance level to the dataset level.
We demonstrate for the first time, via example classification and segmentation cases, that eliminating irrelevant concepts from the training set based on actionable insights from SOXAI can enhance a model's performance.
arXiv Detail & Related papers (2023-06-14T23:24:01Z)
- An Experimental Investigation into the Evaluation of Explainability Methods [60.54170260771932]
This work compares 14 different metrics when applied to nine state-of-the-art XAI methods and three dummy methods (e.g., random saliency maps) used as references.
Experimental results show which of these metrics produces highly correlated results, indicating potential redundancy.
arXiv Detail & Related papers (2023-05-25T08:07:07Z)
- Explainability in Process Outcome Prediction: Guidelines to Obtain Interpretable and Faithful Models [77.34726150561087]
We define explainability through the interpretability of the explanations and the faithfulness of the explainability model in the field of process outcome prediction.
This paper contributes a set of guidelines named X-MOP which allows selecting the appropriate model based on the event log specifications.
arXiv Detail & Related papers (2022-03-30T05:59:50Z)
- Beyond Explaining: Opportunities and Challenges of XAI-Based Model Improvement [75.00655434905417]
Explainable Artificial Intelligence (XAI) is an emerging research field bringing transparency to highly complex machine learning (ML) models.
This paper offers a comprehensive overview of techniques that apply XAI practically for improving various properties of ML models.
We show empirically through experiments on toy and realistic settings how explanations can help improve properties such as model generalization ability or reasoning.
arXiv Detail & Related papers (2022-03-15T15:44:28Z)
- Provably Robust Model-Centric Explanations for Critical Decision-Making [14.367217955827002]
We show that data-centric methods may yield brittle explanations of limited practical utility.
The model-centric framework, however, can offer actionable insights into risks of using AI models in practice.
arXiv Detail & Related papers (2021-10-26T18:05:49Z)
- Explanatory Pluralism in Explainable AI [0.0]
I chart a taxonomy of types of explanation and the associated XAI methods that can address them.
When we look to expose the inner mechanisms of AI models, we produce Diagnostic-explanations.
When we wish to form stable generalizations of our models, we produce Expectation-explanations.
Finally, when we want to justify the usage of a model, we produce Role-explanations.
arXiv Detail & Related papers (2021-06-26T09:02:06Z)
- To trust or not to trust an explanation: using LEAF to evaluate local linear XAI methods [0.0]
There is no consensus on how to quantitatively evaluate explanations in practice.
Explanations are typically used only to inspect black-box models, and the proactive use of explanations as decision support is generally overlooked.
Among the many approaches to XAI, a widely adopted paradigm is Local Linear Explanations - with LIME and SHAP emerging as state-of-the-art methods.
We show that these methods are plagued by many defects including unstable explanations, divergence of actual implementations from the promised theoretical properties, and explanations for the wrong label.
This highlights the need for standard and unbiased evaluation procedures for local linear explanations.
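A minimal sketch of one such procedure, assuming a made-up stochastic explainer (this is not the LEAF benchmark itself): the same instance is explained repeatedly, and the overlap of the top-ranked features across runs is measured, so unstable explanations show up as low overlap.

```python
# Minimal sketch (an assumption, not the LEAF protocol): a top-k stability check
# for a stochastic local explainer.  `noisy_attribution` is a made-up stand-in for
# a sampling-based method such as LIME or KernelSHAP; substitute the real explainer.
import numpy as np

true_effect = np.array([3.0, 2.0, 1.0, 0.5, 0.1, 0.0, 0.0, 0.0])

def noisy_attribution(seed: int) -> np.ndarray:
    """Placeholder explainer: the 'true' attribution plus sampling noise."""
    rng = np.random.default_rng(seed)
    return true_effect + rng.normal(scale=0.8, size=true_effect.size)

def topk_overlap(a: np.ndarray, b: np.ndarray, k: int = 3) -> float:
    """Jaccard overlap of the k most important features across two explanation runs."""
    top_a = set(np.argsort(-np.abs(a))[:k])
    top_b = set(np.argsort(-np.abs(b))[:k])
    return len(top_a & top_b) / len(top_a | top_b)

# Explain the same instance repeatedly with different seeds; unstable explainers
# disagree with themselves about which features matter most.
scores = [topk_overlap(noisy_attribution(2 * i), noisy_attribution(2 * i + 1)) for i in range(100)]
print("mean top-3 stability:", round(float(np.mean(scores)), 3))  # 1.0 = fully stable
```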
arXiv Detail & Related papers (2021-06-01T13:14:12Z)
- Data Representing Ground-Truth Explanations to Evaluate XAI Methods [0.0]
Explainable artificial intelligence (XAI) methods are currently evaluated with approaches mostly originated in interpretable machine learning (IML) research.
We propose to represent explanations with canonical equations that can be used to evaluate the accuracy of XAI methods.
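A minimal sketch of the idea, under the assumption of a simple linear ground truth (the paper's canonical equations may differ): data are generated from known coefficients, and whatever attribution an XAI method produces can then be scored against them.

```python
# Minimal sketch (an assumption; the paper's canonical equations may look different):
# generate data from known coefficients so an attribution can be scored against
# a ground-truth explanation.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
coef_true = np.array([4.0, -2.0, 0.0, 1.0])        # the ground-truth explanation
X = rng.normal(size=(1000, 4))
y = X @ coef_true + rng.normal(scale=0.1, size=1000)

model = LinearRegression().fit(X, y)
attribution = model.coef_                           # stand-in for any XAI method's output

# Score the attribution against the known equation (cosine similarity as the accuracy proxy).
cos = attribution @ coef_true / (np.linalg.norm(attribution) * np.linalg.norm(coef_true))
print("agreement with ground-truth coefficients:", round(float(cos), 4))
```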
arXiv Detail & Related papers (2020-11-18T16:54:53Z)
- A Diagnostic Study of Explainability Techniques for Text Classification [52.879658637466605]
We develop a list of diagnostic properties for evaluating existing explainability techniques.
We compare the saliency scores assigned by the explainability techniques with human annotations of salient input regions to find relations between a model's performance and the agreement of its rationales with human ones.
arXiv Detail & Related papers (2020-09-25T12:01:53Z)
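As a rough illustration of such an agreement measure for the last entry above (an assumption, not the paper's exact protocol), the sketch below ranks tokens by saliency and scores that ranking against binary human rationale annotations using average precision; all values are made up.

```python
# Minimal sketch (an assumption, not the paper's exact protocol): agreement between
# per-token saliency scores and binary human rationale annotations, scored as the
# average precision of ranking tokens by saliency.  All numbers below are made up.
import numpy as np

saliency = np.array([0.9, 0.1, 0.7, 0.05, 0.3, 0.8])  # explainer's per-token scores
human    = np.array([1,   0,   1,   0,    0,   1  ])   # 1 = token marked salient by annotators

def average_precision(scores: np.ndarray, labels: np.ndarray) -> float:
    """Average precision of the saliency ranking measured against human labels."""
    labels = labels[np.argsort(-scores)]
    precision_at_k = np.cumsum(labels) / (np.arange(labels.size) + 1)
    return float((precision_at_k * labels).sum() / labels.sum())

print("saliency vs. human rationale agreement (AP):", average_precision(saliency, human))
```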
This list is automatically generated from the titles and abstracts of the papers on this site.