The Need for Standardized Explainability
- URL: http://arxiv.org/abs/2010.11273v2
- Date: Fri, 23 Oct 2020 02:19:36 GMT
- Title: The Need for Standardized Explainability
- Authors: Othman Benchekroun, Adel Rahimi, Qini Zhang, Tetiana Kodliuk
- Abstract summary: This paper offers a perspective on the current state of explainability research and provides novel definitions for Explainability and Interpretability.
We provide an overview of the literature on explainability and of the methods already implemented.
We offer a tentative taxonomy of the different explainability methods, opening the door to future research.
- Score: 0.4588028371034407
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Explainable AI (XAI) is paramount in industry-grade AI; however, existing
methods fail to address this necessity, in part due to a lack of
standardisation of explainability methods. The purpose of this paper is to
offer a perspective on the current state of the area of explainability, and to
provide novel definitions for Explainability and Interpretability to begin
standardising this area of research. To do so, we provide an overview of the
literature on explainability, and of the existing methods that are already
implemented. Finally, we offer a tentative taxonomy of the different
explainability methods, opening the door to future research.
Related papers
- Statistics and explainability: a fruitful alliance [0.0]
We propose standard statistical tools as a solution to commonly highlighted problems in the explainability literature.
We argue that uncertainty quantification is essential for providing robust and trustworthy explanations.
arXiv Detail & Related papers (2024-04-30T07:04:23Z)
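The uncertainty-quantification point in the entry above lends itself to a compact illustration. The sketch below bootstraps permutation feature importances to put confidence intervals around an explanation; the dataset, model, resample count, and interval width are illustrative assumptions, not choices from the paper.

```python
# Hedged sketch: bootstrap confidence intervals for feature-importance
# explanations. All modelling choices here are illustrative assumptions.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

# Each bootstrap resample of the test set yields one importance estimate per
# feature; the spread across resamples quantifies explanation uncertainty.
rng = np.random.default_rng(0)
scores = []
for _ in range(30):
    idx = rng.integers(0, len(X_te), size=len(X_te))
    r = permutation_importance(model, X_te[idx], y_te[idx],
                               n_repeats=5, random_state=0)
    scores.append(r.importances_mean)
scores = np.array(scores)                      # shape: (n_boot, n_features)

lo, hi = np.percentile(scores, [2.5, 97.5], axis=0)
mean = scores.mean(axis=0)
for i in np.argsort(-mean)[:5]:
    print(f"feature {i}: importance {mean[i]:.4f} "
          f"(95% CI [{lo[i]:.4f}, {hi[i]:.4f}])")
```

A wide interval signals that the explanation itself is unreliable, which is precisely the robustness concern the entry raises.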
- Opening the Black-Box: A Systematic Review on Explainable AI in Remote Sensing [51.524108608250074]
Black-box machine learning approaches have become a dominant modeling paradigm for knowledge extraction in remote sensing.
We perform a systematic review to identify the key trends in the field and shed light on novel explainable AI approaches.
We also give a detailed outlook on the challenges and promising research directions.
arXiv Detail & Related papers (2024-02-21T13:19:58Z)
- Interpretability is not Explainability: New Quantitative XAI Approach with a focus on Recommender Systems in Education [0.0]
We propose a novel taxonomy that provides a clear and unambiguous understanding of the key concepts and relationships in XAI.
Our approach is rooted in a systematic analysis of existing definitions and frameworks.
This comprehensive taxonomy aims to establish a shared vocabulary for future research.
arXiv Detail & Related papers (2023-09-18T11:59:02Z)
- Explainability for Large Language Models: A Survey [59.67574757137078]
Large language models (LLMs) have demonstrated impressive capabilities in natural language processing.
This paper introduces a taxonomy of explainability techniques and provides a structured overview of methods for explaining Transformer-based language models.
arXiv Detail & Related papers (2023-09-02T22:14:26Z)
- Explaining Explainability: Towards Deeper Actionable Insights into Deep Learning through Second-order Explainability [70.60433013657693]
Second-order explainable AI (SOXAI) was recently proposed to extend explainable AI (XAI) from the instance level to the dataset level.
We demonstrate for the first time, via example classification and segmentation cases, that eliminating irrelevant concepts from the training set based on actionable insights from SOXAI can enhance a model's performance.
arXiv Detail & Related papers (2023-06-14T23:24:01Z)
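The dataset-level idea in the SOXAI entry above can be sketched in miniature: aggregate attributions over the whole training set, flag inputs with negligible aggregate relevance, and retrain without them. Note that the paper operates on learned concepts rather than raw features; the attribution method and pruning threshold below are stand-ins.

```python
# Hedged sketch of dataset-level explainability driving data curation.
# Attribution method and threshold are illustrative stand-ins for SOXAI.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic data in which only 5 of 20 features are informative.
X, y = make_classification(n_samples=2000, n_features=20, n_informative=5,
                           n_redundant=0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
base_acc = model.score(X_te, y_te)

# Dataset-level insight: mean attribution per feature over the training set.
imp = permutation_importance(model, X_tr, y_tr, n_repeats=10,
                             random_state=0).importances_mean
keep = imp > 1e-3                   # prune features with negligible relevance

pruned = LogisticRegression(max_iter=1000).fit(X_tr[:, keep], y_tr)
print(f"baseline accuracy: {base_acc:.3f}")
print(f"after pruning {int((~keep).sum())} low-relevance features: "
      f"{pruned.score(X_te[:, keep], y_te):.3f}")
```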
- Towards Faithful Model Explanation in NLP: A Survey [48.690624266879155]
End-to-end neural Natural Language Processing (NLP) models are notoriously difficult to understand.
One desideratum of model explanation is faithfulness, i.e., an explanation should accurately represent the reasoning process behind the model's prediction.
We review over 110 model explanation methods in NLP through the lens of faithfulness.
arXiv Detail & Related papers (2022-09-22T21:40:51Z)
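The faithfulness desideratum in the survey entry above is commonly operationalised with removal-based tests. Below is a minimal sketch of one such test, often called comprehensiveness: delete the tokens an explanation ranks highest and measure the drop in predicted probability. The toy corpus, linear model, and coefficient-based attribution are illustrative assumptions, not the survey's setup.

```python
# Hedged sketch of a removal-based faithfulness check (comprehensiveness).
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

texts = ["great movie, loved it", "terrible plot, awful acting",
         "wonderful and moving film", "boring, dull and bad"]
labels = [1, 0, 1, 0]                    # tiny illustrative sentiment corpus

vec = CountVectorizer()
clf = LogisticRegression().fit(vec.fit_transform(texts), labels)

def comprehensiveness(text, k=2):
    """Drop in P(positive) after removing the k most-attributed tokens."""
    tokens = vec.build_analyzer()(text)
    x = vec.transform([text])
    p_full = clf.predict_proba(x)[0, 1]
    attr = clf.coef_[0] * x.toarray()[0]  # coefficient-times-count saliency
    vocab = vec.get_feature_names_out()
    top = {vocab[i] for i in np.argsort(-np.abs(attr))[:k]}
    reduced = " ".join(t for t in tokens if t not in top)
    p_reduced = clf.predict_proba(vec.transform([reduced]))[0, 1]
    return p_full - p_reduced

print(f"comprehensiveness: {comprehensiveness('great movie, loved it'):.3f}")
```

A large drop means the explanation identified tokens the model genuinely relied on; a near-zero drop suggests the explanation is unfaithful.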
- A general-purpose method for applying Explainable AI for Anomaly Detection [6.09170287691728]
The need for explainable AI (XAI) is well established, but relatively little has been published outside of the supervised learning paradigm.
This paper focuses on a principled approach to applying explainability and interpretability to the task of unsupervised anomaly detection.
arXiv Detail & Related papers (2022-07-23T17:56:01Z)
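As a rough illustration of the preceding entry, the sketch below attributes an IsolationForest anomaly score to individual features by replacing each feature with a typical (median) value and measuring how much the score recovers. This ablation heuristic is a stand-in for the paper's principled method.

```python
# Hedged sketch: feature attribution for an unsupervised anomaly detector.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                 # normal operating data
anomaly = np.array([[0.0, 8.0, 0.0, 0.0]])    # feature 1 is abnormal

det = IsolationForest(random_state=0).fit(X)
base = det.score_samples(anomaly)[0]          # lower = more anomalous
medians = np.median(X, axis=0)

for j in range(X.shape[1]):
    patched = anomaly.copy()
    patched[0, j] = medians[j]   # substitute one feature with a typical value
    delta = det.score_samples(patched)[0] - base
    print(f"feature {j}: score recovery {delta:+.4f}")
# The feature whose substitution most increases the score (toward normal)
# is the one most responsible for the anomaly.
```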
- Do Natural Language Explanations Represent Valid Logical Arguments? Verifying Entailment in Explainable NLI Gold Standards [0.0]
An emerging line of research in Explainable NLP is the creation of datasets enriched with human-annotated explanations and rationales.
While human-annotated explanations are used as ground-truth for the inference, there is a lack of systematic assessment of their consistency and rigour.
We propose a systematic annotation methodology, named Explanation Entailment Verification (EEV), to quantify the logical validity of human-annotated explanations.
arXiv Detail & Related papers (2021-05-05T10:59:26Z)
- On the Faithfulness Measurements for Model Interpretations [100.2730234575114]
Post-hoc interpretations aim to uncover how natural language processing (NLP) models make predictions.
To evaluate such interpretations, we start with three criteria: the removal-based criterion, the sensitivity of interpretations, and the stability of interpretations.
Motivated by the desideratum of these faithfulness notions, we introduce a new class of interpretation methods that adopt techniques from the adversarial domain.
arXiv Detail & Related papers (2021-04-18T09:19:44Z)
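Of the three criteria in the entry above, stability is the easiest to demonstrate compactly: perturb the input slightly, recompute the attribution, and check that the feature ranking survives. The ablation-based attribution and noise scales below are illustrative assumptions, not the paper's adversarial-domain techniques.

```python
# Hedged sketch of the stability criterion for interpretations.
import numpy as np
from scipy.stats import spearmanr
from sklearn.datasets import load_iris
from sklearn.ensemble import GradientBoostingClassifier

X, y = load_iris(return_X_y=True)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

def attribution(x):
    """Ablation attribution: drop in the top-class probability when
    zeroing each feature in turn."""
    p = model.predict_proba(x.reshape(1, -1))[0]
    c = p.argmax()
    out = []
    for j in range(len(x)):
        patched = x.copy()
        patched[j] = 0.0
        out.append(p[c] - model.predict_proba(patched.reshape(1, -1))[0, c])
    return np.array(out)

rng = np.random.default_rng(0)
x = X[0].astype(float)
base = attribution(x)
for eps in (0.01, 0.05, 0.1):
    noisy = x + rng.normal(scale=eps, size=x.shape)
    rho, _ = spearmanr(base, attribution(noisy))
    print(f"noise={eps}: rank correlation with clean attribution = {rho:.3f}")
```

A correlation that collapses under tiny perturbations indicates an unstable, and therefore untrustworthy, interpretation.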
- Individual Explanations in Machine Learning Models: A Case Study on Poverty Estimation [63.18666008322476]
Machine learning methods are being increasingly applied in sensitive societal contexts.
The present case study has two main objectives. The first is to expose these challenges and how they affect the use of relevant and novel explanation methods. The second is to present a set of strategies that mitigate such challenges, as faced when implementing explanation methods in a relevant application domain.
arXiv Detail & Related papers (2021-04-09T01:54:58Z)
- A Survey of the State of Explainable AI for Natural Language Processing [16.660110121500125]
This survey presents an overview of the current state of Explainable AI (XAI) within the Natural Language Processing domain.
We discuss the main categorization of explanations, as well as the various ways explanations can be arrived at and visualized.
We detail the operations and explainability techniques currently available for generating explanations for NLP model predictions, to serve as a resource for model developers in the community.
arXiv Detail & Related papers (2020-10-01T22:33:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.