A general approach for Explanations in terms of Middle Level Features
- URL: http://arxiv.org/abs/2106.05037v1
- Date: Wed, 9 Jun 2021 12:51:40 GMT
- Title: A general approach for Explanations in terms of Middle Level Features
- Authors: Andrea Apicella, Francesco Isgrò, Roberto Prevete
- Abstract summary: We propose a general XAI approach that constructs explanations in terms of input features.
Middle-Level input Features (MLFs) represent more salient and understandable input properties for a user.
We experimentally tested our approach on two different datasets and using three different types of MLFs.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Nowadays, there is growing interest in making Machine Learning (ML) systems more understandable and trustworthy for general users. Generating explanations of ML system behaviour that are understandable to human beings is thus a central scientific and technological issue addressed by the rapidly growing research area of eXplainable Artificial Intelligence (XAI). It is becoming increasingly evident that new directions for creating better explanations should take into account what a good explanation is to a human user and, consequently, should develop XAI solutions able to provide user-centred explanations. This paper proposes a general XAI approach that produces explanations of an ML system's behaviour in terms of different, user-selected input features, i.e., explanations composed of input properties that the human user can select according to their background knowledge and goals. Specifically, the proposed approach is able: 1) to construct explanations in terms of input features that represent more salient and understandable input properties for a user, which we call here Middle-Level input Features (MLFs); 2) to be applied to different types of MLFs. We experimentally tested our approach on two different datasets and using three different types of MLFs. The results seem encouraging.
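The abstract does not commit to a specific attribution algorithm for scoring MLFs, so the following is only a minimal black-box sketch of the general idea: given user-selected MLFs expressed as region masks over the input, estimate each MLF's relevance to the model's output by a central finite difference along that feature's contribution. All names here (`mlf_relevance`, `masks`, `eps`) and the gradient-times-input style scoring are our own illustrative assumptions, not the paper's method.

```python
import numpy as np

def mlf_relevance(f, x, masks, eps=1e-4):
    """Hypothetical sketch: score each Middle-Level input Feature (MLF) by the
    model's sensitivity to perturbations of that feature's contribution.

    f     : black-box model mapping an input array to a scalar score
    x     : the input to explain (e.g., a flattened image)
    masks : binary arrays, one per MLF (e.g., superpixels), same shape as x
    """
    scores = []
    for m in masks:
        # Scale the input only inside the region covered by this MLF and take
        # a central finite-difference estimate of the model's sensitivity.
        delta = eps * m * x
        scores.append((f(x + delta) - f(x - delta)) / (2 * eps))
    return np.asarray(scores)

# Toy usage: a linear "model" and three hand-made MLF masks on a 6-pixel input.
rng = np.random.default_rng(0)
w = rng.normal(size=6)
f = lambda x: float(w @ x)          # stand-in for any trained classifier score
x = rng.normal(size=6)
masks = [np.array([1, 1, 0, 0, 0, 0]),
         np.array([0, 0, 1, 1, 0, 0]),
         np.array([0, 0, 0, 0, 1, 1])]
print(mlf_relevance(f, x, masks))   # one relevance score per MLF
```

For this linear toy model each score reduces to the sum of weight-times-input over the MLF's region, i.e., an ordinary low-level attribution aggregated to the user's chosen granularity, which is the user-centred shift the abstract argues for.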
Related papers
- Evaluating the Utility of Model Explanations for Model Development [54.23538543168767]
We evaluate whether explanations can improve human decision-making in practical scenarios of machine learning model development.
To our surprise, we did not find evidence of significant improvement on tasks when users were provided with any of the saliency maps.
These findings suggest caution regarding the usefulness and potential for misunderstanding in saliency-based explanations.
arXiv Detail & Related papers (2023-12-10T23:13:23Z)
- Explaining Explainability: Towards Deeper Actionable Insights into Deep Learning through Second-order Explainability [70.60433013657693]
Second-order explainable AI (SOXAI) was recently proposed to extend explainable AI (XAI) from the instance level to the dataset level.
We demonstrate for the first time, via example classification and segmentation cases, that eliminating irrelevant concepts from the training set based on actionable insights from SOXAI can enhance a model's performance.
arXiv Detail & Related papers (2023-06-14T23:24:01Z)
- Selective Explanations: Leveraging Human Input to Align Explainable AI [40.33998268146951]
We propose a general framework for generating selective explanations by leveraging human input on a small sample.
As a showcase, we use a decision-support task to explore selective explanations based on what the decision-maker would consider relevant to the decision task.
Our experiments demonstrate the promise of selective explanations in reducing over-reliance on AI.
arXiv Detail & Related papers (2023-01-23T19:00:02Z)
- Towards Human Cognition Level-based Experiment Design for Counterfactual Explanations (XAI) [68.8204255655161]
The emphasis of XAI research appears to have turned to a more pragmatic explanation approach for better understanding.
An extensive area where cognitive science research may substantially influence XAI advancements is evaluating user knowledge and feedback.
We propose a framework to experiment with generating and evaluating the explanations on the grounds of different cognitive levels of understanding.
arXiv Detail & Related papers (2022-10-31T19:20:22Z)
- "Explanation" is Not a Technical Term: The Problem of Ambiguity in XAI [2.5899040911480173]
We explore the features of explanations and how to use those features in evaluating their utility.
We focus on the requirements for explanations defined by their functional role, the knowledge states of users who are trying to understand them, and the availability of the information needed to generate them.
arXiv Detail & Related papers (2022-06-27T21:42:53Z)
- A general approach to compute the relevance of middle-level input features [0.0]
Middle-level explanations have been introduced to alleviate some deficiencies of low-level explanations.
A general approach to correctly evaluating the elements of middle-level explanations with respect to ML model responses has never been proposed in the literature.
arXiv Detail & Related papers (2020-10-16T21:46:50Z)
- The Role of Individual User Differences in Interpretable and Explainable Machine Learning Systems [0.3169089186688223]
We study how individual skills and personality traits predict interpretability, explainability, and knowledge discovery from machine-learning-generated model output.
Our work relies on Fuzzy Trace Theory, a leading theory of how humans process numerical stimuli.
arXiv Detail & Related papers (2020-09-14T18:15:00Z)
- Explainability in Deep Reinforcement Learning [68.8204255655161]
We review recent works that aim to attain Explainable Reinforcement Learning (XRL).
In critical situations where it is essential to justify and explain the agent's behaviour, better explainability and interpretability of RL models could help gain scientific insight into the inner workings of what is still considered a black box.
arXiv Detail & Related papers (2020-08-15T10:11:42Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
- An Information-Theoretic Approach to Personalized Explainable Machine Learning [92.53970625312665]
We propose a simple probabilistic model for the predictions and user knowledge.
We quantify the effect of an explanation by the conditional mutual information between the explanation and prediction; a toy estimate of this quantity is sketched after this list.
arXiv Detail & Related papers (2020-03-01T13:06:29Z)
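To make the last entry's measure concrete, here is a minimal plug-in estimator of the conditional mutual information I(E; Y | U) between an explanation E and a prediction Y given user knowledge U, computed from paired discrete samples. This is only our toy reading of the quantity named in that abstract; the paper's actual probabilistic model and estimator may differ.

```python
import math
from collections import Counter

def conditional_mutual_information(e, y, u):
    """Plug-in estimate (in bits) of I(E; Y | U) from three paired lists of
    discrete samples: explanations e, predictions y, user-knowledge states u."""
    n = len(e)
    c_eyu = Counter(zip(e, y, u))  # joint counts over (explanation, prediction, knowledge)
    c_eu = Counter(zip(e, u))
    c_yu = Counter(zip(y, u))
    c_u = Counter(u)
    cmi = 0.0
    for (ei, yi, ui), c in c_eyu.items():
        # I(E;Y|U) = sum_{e,y,u} p(e,y,u) * log[ p(e,y,u) p(u) / (p(e,u) p(y,u)) ];
        # with plug-in counts the 1/n factors cancel inside the log.
        cmi += (c / n) * math.log2(c * c_u[ui] / (c_eu[(ei, ui)] * c_yu[(yi, ui)]))
    return cmi

# Toy usage: the prediction is fully determined by (explanation, knowledge),
# so the conditional mutual information is high (about 0.81 bits here).
e = [0, 0, 1, 1, 0, 1, 0, 1]
u = [0, 1, 0, 1, 0, 1, 0, 1]
y = [ei ^ ui for ei, ui in zip(e, u)]
print(conditional_mutual_information(e, y, u))
```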