Towards the Linear Algebra Based Taxonomy of XAI Explanations
- URL: http://arxiv.org/abs/2301.13138v1
- Date: Mon, 30 Jan 2023 18:21:27 GMT
- Title: Towards the Linear Algebra Based Taxonomy of XAI Explanations
- Authors: Sven Nomm
- Abstract summary: Methods of Explainable Artificial Intelligence (XAI) were developed to answer the question of why a certain prediction or estimation was made.
XAI taxonomies proposed in the literature mainly concentrate on distinguishing explanations with respect to the involvement of the human agent.
This paper proposes a simple linear algebra-based taxonomy for local explanations.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper proposes an alternative approach to the basic taxonomy of
explanations produced by explainable artificial intelligence techniques.
Methods of Explainable Artificial Intelligence (XAI) were developed to answer
the question of why a certain prediction or estimation was made, preferably in
terms that are easy for the human agent to understand. XAI taxonomies proposed
in the literature mainly concentrate on distinguishing explanations with
respect to the involvement of the human agent, which makes it complicated to
provide a more mathematical approach to distinguishing and comparing different
explanations. This paper narrows its attention to cases where the data set of
interest belongs to $\mathbb{R}^n$ and proposes a simple linear algebra-based
taxonomy for local explanations.
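The abstract does not spell out the taxonomy itself, so the following is a purely illustrative example (an assumption, not the paper's own formalism) of one common local explanation over data in $\mathbb{R}^n$: the first-order approximation of the model around a query point, which reduces the explanation to a single vector.

```latex
% Illustrative local explanation (assumed example, not the paper's own
% formalism): a model f : \mathbb{R}^n \to \mathbb{R} is explained at a
% query point x_0 by its first-order Taylor expansion, so the explanation
% is the gradient vector \nabla f(x_0) \in \mathbb{R}^n.
\[
  f(x) \approx f(x_0) + \nabla f(x_0)^{\top} (x - x_0),
  \qquad x \text{ near } x_0 .
\]
```

Treating a local explanation as a vector in $\mathbb{R}^n$ is what makes linear-algebraic comparisons between explanations, such as norms, angles, and projections, possible in the first place.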
Related papers
- Selective Explanations [14.312717332216073]
Amortized explainers train a machine learning model to predict feature attribution scores with only one inference.
Despite their efficiency, amortized explainers can produce inaccurate predictions and misleading explanations.
We propose selective explanations, a novel feature attribution method that detects when amortized explainers generate low-quality explanations.
arXiv Detail & Related papers (2024-05-29T23:08:31Z)
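As background for the entry above, here is a minimal sketch of an amortized explainer: a regressor trained once to map inputs to per-feature attribution scores, so that a single inference yields the explanation. The toy data, the linear "ground-truth" attributions, and all names are illustrative assumptions, not the paper's implementation.

```python
# Toy sketch of an amortized explainer (illustrative, not the paper's code):
# a multi-output regressor learns to predict per-feature attribution scores,
# replacing a slow attribution method with a single inference at query time.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = 3 * X[:, 0] - 2 * X[:, 2] + rng.normal(scale=0.1, size=500)

# Model to be explained: a linear model here, so exact additive
# attributions are available as a training target for the explainer.
model = LinearRegression().fit(X, y)
baseline = X.mean(axis=0)
attributions = model.coef_ * (X - baseline)  # shape (500, 4)

# Amortized explainer: one regressor predicting all attribution scores.
explainer = RandomForestRegressor(n_estimators=50, random_state=0)
explainer.fit(X, attributions)

x_new = rng.normal(size=(1, 4))
print(explainer.predict(x_new))  # attributions from a single inference
```

The selective-explanations idea then adds a detector on top of such an explainer to flag queries where the amortized prediction is likely to be of low quality.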
- Incremental XAI: Memorable Understanding of AI with Incremental Explanations [13.460427339680168]
We propose to provide more detailed explanations by leveraging the human cognitive capacity to accumulate knowledge by incrementally receiving more details.
We introduce Incremental XAI to automatically partition explanations for general and atypical instances.
Memorability is improved by reusing base factors and reducing the number of factors shown in atypical cases.
arXiv Detail & Related papers (2024-04-10T04:38:17Z)
- Interpretability is not Explainability: New Quantitative XAI Approach with a focus on Recommender Systems in Education [0.0]
We propose a novel taxonomy that provides a clear and unambiguous understanding of the key concepts and relationships in XAI.
Our approach is rooted in a systematic analysis of existing definitions and frameworks.
This comprehensive taxonomy aims to establish a shared vocabulary for future research.
arXiv Detail & Related papers (2023-09-18T11:59:02Z)
- Explaining Explainability: Towards Deeper Actionable Insights into Deep Learning through Second-order Explainability [70.60433013657693]
Second-order explainable AI (SOXAI) was recently proposed to extend explainable AI (XAI) from the instance level to the dataset level.
We demonstrate for the first time, via example classification and segmentation cases, that eliminating irrelevant concepts from the training set based on actionable insights from SOXAI can enhance a model's performance.
arXiv Detail & Related papers (2023-06-14T23:24:01Z)
- Probing Taxonomic and Thematic Embeddings for Taxonomic Information [2.9874726192215157]
Modelling taxonomic and thematic relatedness is important for building AI with comprehensive natural language understanding.
We design a new hypernym-hyponym probing task and perform a comparative probing study of taxonomic and thematic SGNS and GloVe embeddings.
Experiments indicate that both types of embeddings encode some taxonomic information, but the amount, as well as the geometric properties of the encodings, are independently related to both the encoder architecture and the embedding training data.
arXiv Detail & Related papers (2023-01-25T15:59:26Z)
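To make the probing setup in the entry above concrete, here is an illustrative sketch of a hypernym-hyponym probe: a linear classifier trained on concatenated word-pair embeddings to predict whether the first word is a hypernym of the second. The random stand-in embeddings and labels are assumptions made for the sake of a runnable example; in the study they would come from SGNS or GloVe vectors and a taxonomic resource.

```python
# Illustrative hypernym-hyponym probing task (not the paper's exact setup):
# a linear probe predicts hypernymy from concatenated word-pair embeddings.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
dim, n_pairs = 50, 400

# Stand-in embeddings and labels; real ones would come from SGNS/GloVe
# vectors and hypernym pairs drawn from a taxonomy such as WordNet.
emb_a = rng.normal(size=(n_pairs, dim))
emb_b = rng.normal(size=(n_pairs, dim))
labels = rng.integers(0, 2, size=n_pairs)  # 1 = hypernym pair, 0 = not

pairs = np.hstack([emb_a, emb_b])
X_tr, X_te, y_tr, y_te = train_test_split(pairs, labels, random_state=0)

probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
# Held-out accuracy above chance would indicate that the embeddings
# encode some taxonomic (hypernymy) information.
print(probe.score(X_te, y_te))
```

With the random data here the probe scores near chance; the comparative study asks how far above chance real taxonomic and thematic embeddings get.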
- Towards Human Cognition Level-based Experiment Design for Counterfactual Explanations (XAI) [68.8204255655161]
The emphasis of XAI research appears to have turned to a more pragmatic explanation approach for better understanding.
An extensive area where cognitive science research may substantially influence XAI advancements is evaluating user knowledge and feedback.
We propose a framework to experiment with generating and evaluating the explanations on the grounds of different cognitive levels of understanding.
arXiv Detail & Related papers (2022-10-31T19:20:22Z)
- CX-ToM: Counterfactual Explanations with Theory-of-Mind for Enhancing Human Trust in Image Recognition Models [84.32751938563426]
We propose a new explainable AI (XAI) framework for explaining decisions made by a deep convolutional neural network (CNN).
In contrast to the current methods in XAI that generate explanations as a single-shot response, we pose explanation as an iterative communication process.
Our framework generates a sequence of explanations in a dialog by mediating the differences between the minds of the machine and the human user.
arXiv Detail & Related papers (2021-09-03T09:46:20Z)
- Rational Shapley Values [0.0]
Most popular tools for post-hoc explainable artificial intelligence (XAI) are either insensitive to context or difficult to summarize.
I introduce rational Shapley values, a novel XAI method that synthesizes and extends these seemingly incompatible approaches.
I leverage tools from decision theory and causal modeling to formalize and implement a pragmatic approach that resolves a number of known challenges in XAI.
arXiv Detail & Related papers (2021-06-18T15:45:21Z)
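Since the entry above extends Shapley values, a brute-force computation of the classical Shapley value for a toy cooperative game may help as orientation; this sketches the baseline quantity only, not the paper's rational variant or its decision-theoretic machinery.

```python
# Classical Shapley values by brute-force enumeration of coalitions
# (background sketch; not the paper's "rational" extension).
from itertools import combinations
from math import factorial

def shapley_values(value, n):
    """value(S) maps a frozenset of player indices to a payoff."""
    players = set(range(n))
    phis = []
    for i in range(n):
        others = players - {i}
        phi = 0.0
        for k in range(n):
            for S in combinations(others, k):
                S = frozenset(S)
                # Weight of coalition S in the Shapley average.
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                phi += w * (value(S | {i}) - value(S))
        phis.append(phi)
    return phis

# Toy additive game: a coalition's payoff is the sum of its members'
# weights, so each player's Shapley value recovers its own weight.
weights = [3.0, -2.0, 0.5]
print(shapley_values(lambda S: sum(weights[j] for j in S), 3))
```

In feature attribution, the "players" are features and value(S) is typically the model's expected output when only the features in S are known; exact enumeration is exponential in n, which is why practical methods approximate it.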
- Discrete Reasoning Templates for Natural Language Understanding [79.07883990966077]
We present an approach that reasons about complex questions by decomposing them to simpler subquestions.
We derive the final answer according to instructions in a predefined reasoning template.
We show that our approach is competitive with the state of the art while being interpretable and requiring little supervision.
arXiv Detail & Related papers (2021-04-05T18:56:56Z)
- Explainability in Deep Reinforcement Learning [68.8204255655161]
We review recent works aimed at attaining Explainable Reinforcement Learning (XRL).
In critical situations where it is essential to justify and explain the agent's behaviour, better explainability and interpretability of RL models could help gain scientific insight on the inner workings of what is still considered a black box.
arXiv Detail & Related papers (2020-08-15T10:11:42Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)