Towards a Shapley Value Graph Framework for Medical peer-influence
- URL: http://arxiv.org/abs/2112.14624v1
- Date: Wed, 29 Dec 2021 16:24:50 GMT
- Title: Towards a Shapley Value Graph Framework for Medical peer-influence
- Authors: Jamie Duell, Monika Seisenberger, Gert Aarts, Shangming Zhou and Xiuyi Fan
- Abstract summary: This paper introduces a new framework to look deeper into explanations using graph representation for feature-to-feature interactions.
It aims to improve the interpretability of black-box Machine Learning (ML) models and inform intervention.
- Score: 0.9449650062296824
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: eXplainable Artificial Intelligence (XAI) is a sub-field of Artificial
Intelligence (AI) at the forefront of AI research. In XAI, feature attribution
methods produce explanations in the form of feature importance. A limitation of
existing feature attribution methods is that they say little about the consequences
of intervention: although each feature's contribution to a prediction is highlighted,
the influence between features and the effect of intervening on a feature are not
addressed. The aim of this paper is to introduce a new framework that looks deeper
into explanations by using a graph representation of feature-to-feature interactions,
in order to improve the interpretability of black-box Machine Learning (ML) models
and inform intervention.
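
As a rough illustration of the kind of framework the abstract describes, the sketch below builds a feature-to-feature interaction graph from SHAP interaction values of a tree model. The dataset, libraries (shap, networkx, scikit-learn) and the edge threshold are illustrative assumptions, not the paper's actual implementation.

```python
# A minimal sketch, not the paper's framework: derive a feature-interaction
# graph from SHAP interaction values, so edges indicate feature pairs whose
# joint effect matters before intervening on either feature alone.
import numpy as np
import networkx as nx
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import GradientBoostingRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

# shape (n_samples, n_features, n_features); off-diagonal = pairwise interactions
inter = shap.TreeExplainer(model).shap_interaction_values(X)
mean_abs = np.abs(inter).mean(axis=0)

G = nx.Graph()
features = list(X.columns)
for i, fi in enumerate(features):
    G.add_node(fi, main_effect=mean_abs[i, i])
    for j in range(i + 1, len(features)):
        if mean_abs[i, j] > 0.05 * mean_abs.diagonal().max():  # threshold is an assumption
            G.add_edge(fi, features[j], weight=mean_abs[i, j])

# strongest feature-to-feature interactions
print(sorted(G.edges(data="weight"), key=lambda e: -e[2])[:5])
```

In the paper's medical setting, edge weights of this kind are what would let an explanation speak to peer influence between features rather than only per-feature importance.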
Related papers
- Explainable Artificial Intelligence for Dependent Features: Additive Effects of Collinearity [0.0]
We propose Additive Effects of Collinearity (AEC), a novel XAI method that accounts for the collinearity issue.
The proposed method is applied to simulated and real data to validate its efficiency in comparison with a state-of-the-art XAI method.
arXiv Detail & Related papers (2024-10-30T07:00:30Z)
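
A small sketch of the collinearity problem this entry targets (an illustration of the issue, not of the AEC method itself, whose details are not given here): with two nearly identical features, a regularised linear model and its exact linear attributions split the credit for a single underlying signal.

```python
# Illustration of the collinearity problem, not the AEC method: x1 and x2 are
# near-duplicates, so ridge splits the weight of one real signal between them
# and each feature looks far less important than the true effect of 3.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
x1 = rng.normal(size=2000)
x2 = x1 + rng.normal(scale=0.05, size=2000)     # almost a copy of x1
x3 = rng.normal(size=2000)
X = np.column_stack([x1, x2, x3])
y = 3.0 * x1 + 1.0 * x3 + rng.normal(scale=0.1, size=2000)

model = Ridge(alpha=50.0).fit(X, y)
# exact additive attributions for a linear model: coef * centred feature value
attr = model.coef_ * (X - X.mean(axis=0))
print("coefficients       :", np.round(model.coef_, 2))
print("mean |attribution| :", np.round(np.abs(attr).mean(axis=0), 2))
```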
- How Well Do Feature-Additive Explainers Explain Feature-Additive Predictors? [12.993027779814478]
We ask the question: can popular feature-additive explainers (e.g., LIME, SHAP, SHAPR, MAPLE, and PDP) explain feature-additive predictors?
Herein, we evaluate such explainers on ground truth that is analytically derived from the additive structure of a model.
Our results suggest that all explainers eventually fail to correctly attribute the importance of features, especially when a decision-making process involves feature interactions.
arXiv Detail & Related papers (2023-10-27T21:16:28Z)
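
A hedged sketch of the evaluation idea in the entry above (a toy setup of mine, not the paper's benchmark): for a purely additive model, the ground-truth attribution of each feature can be written down analytically and compared with an explainer's estimate.

```python
# Analytic ground truth for a purely additive predictor: each feature's
# attribution is its own term, centred on the background sample.
import numpy as np
import shap  # assumes the shap package for the KernelSHAP estimate

rng = np.random.default_rng(0)
background = rng.normal(size=(100, 3))

def f(X):
    # additive predictor: f(x) = x0^2 + 3*sin(x1) + 0.5*x2
    return X[:, 0] ** 2 + 3.0 * np.sin(X[:, 1]) + 0.5 * X[:, 2]

x = np.array([[1.0, 0.5, -2.0]])
terms = np.column_stack([background[:, 0] ** 2,
                         3.0 * np.sin(background[:, 1]),
                         0.5 * background[:, 2]])
ground_truth = np.array([1.0 ** 2, 3.0 * np.sin(0.5), 0.5 * -2.0]) - terms.mean(axis=0)

estimate = shap.KernelExplainer(f, background).shap_values(x)[0]
print("ground truth:", np.round(ground_truth, 3))
print("KernelSHAP  :", np.round(estimate, 3))
# agreement is close for this additive case; the paper's point is that it
# degrades once feature interactions are added to the predictor.
```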
- Explaining Explainability: Towards Deeper Actionable Insights into Deep Learning through Second-order Explainability [70.60433013657693]
Second-order explainable AI (SOXAI) was recently proposed to extend explainable AI (XAI) from the instance level to the dataset level.
We demonstrate for the first time, via example classification and segmentation cases, that eliminating irrelevant concepts from the training set based on actionable insights from SOXAI can enhance a model's performance.
arXiv Detail & Related papers (2023-06-14T23:24:01Z)
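
A loose tabular analogue of the dataset-level idea (SOXAI itself works on concepts in deep networks; the dataset, explainer and pruning threshold below are assumptions): aggregate instance-level attributions over the whole training set and drop inputs that contribute nothing.

```python
# Tabular analogue, not SOXAI itself: dataset-level attribution is the
# instance-level attribution aggregated over the training set; inputs with
# near-zero aggregate attribution are candidates for removal.
import numpy as np
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)
rng = np.random.default_rng(0)
X = np.column_stack([X, rng.normal(size=(X.shape[0], 5))])   # add 5 irrelevant columns

model = GradientBoostingClassifier(random_state=0).fit(X, y)
sv = shap.TreeExplainer(model).shap_values(X)
sv = sv[1] if isinstance(sv, list) else sv                   # handle per-class list output
dataset_level = np.abs(sv).mean(axis=0)                      # one score per feature

keep = dataset_level > 0.01 * dataset_level.max()            # threshold is an assumption
print("kept", keep.sum(), "of", X.shape[1], "features")
print("before:", cross_val_score(GradientBoostingClassifier(random_state=0), X, y).mean())
print("after :", cross_val_score(GradientBoostingClassifier(random_state=0), X[:, keep], y).mean())
```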
- Counterfactual Explanations as Interventions in Latent Space [62.997667081978825]
Counterfactual explanations aim to provide end users with a set of features that need to be changed in order to achieve a desired outcome.
Current approaches rarely take into account the feasibility of actions needed to achieve the proposed explanations.
We present Counterfactual Explanations as Interventions in Latent Space (CEILS), a methodology to generate counterfactual explanations.
arXiv Detail & Related papers (2021-06-14T20:48:48Z)
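
A rough sketch of searching for a counterfactual in a latent space rather than in raw feature space (CEILS itself works with a causal generative model; the PCA latent space, classifier and step size here are stand-in assumptions):

```python
# Generic latent-space counterfactual sketch, not the CEILS algorithm: move in a
# PCA latent space until the classifier's prediction flips, then decode, so the
# proposed change respects correlations in the data.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X = StandardScaler().fit_transform(X)
clf = LogisticRegression(max_iter=5000).fit(X, y)
pca = PCA(n_components=5).fit(X)

x0 = X[0:1]                                     # instance to explain
z = pca.transform(x0)[0]
grad_z = pca.components_ @ clf.coef_[0]         # gradient of the logit w.r.t. z
target = 1 - clf.predict(x0)[0]                 # flip the predicted class
step = 0.05 if target == 1 else -0.05

for _ in range(400):
    x_cf = pca.inverse_transform(z)[None, :]
    if clf.predict(x_cf)[0] == target:
        break
    z = z + step * grad_z / np.linalg.norm(grad_z)

print("most changed features:", np.argsort(-np.abs(x_cf[0] - x0[0]))[:5])
```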
- A Novel Interaction-based Methodology Towards Explainable AI with Better Understanding of Pneumonia Chest X-ray Images [0.0]
This paper proposes an interaction-based methodology -- Influence Score (I-score) -- to screen out the noisy and non-informative variables in the images.
We apply the proposed method to a real-world Pneumonia Chest X-ray image data set and produce state-of-the-art results.
arXiv Detail & Related papers (2021-04-19T23:02:43Z)
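
A hedged sketch of an interaction-based influence score in this spirit (the normalisation and the binary toy data are assumptions; the paper applies the idea to image patches): partition the samples by the joint values of a variable subset and measure how far the cell means of the response drift from the global mean.

```python
# Interaction-based influence score sketch: large scores mean the subset's joint
# values carry information about y, even when each variable alone does not.
import numpy as np

def influence_score(X_disc, y, subset):
    """X_disc: (n, p) integer-coded features; subset: tuple of column indices."""
    y = np.asarray(y, dtype=float)
    n, y_bar = len(y), y.mean()
    cells = {}
    for row, target in zip(X_disc[:, subset], y):
        cells.setdefault(tuple(row), []).append(target)
    score = sum(len(v) ** 2 * (np.mean(v) - y_bar) ** 2 for v in cells.values())
    return score / (n * y.var())               # normalisation is an assumption

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(2000, 6))
y = np.logical_xor(X[:, 0], X[:, 1]).astype(float)   # pure interaction, no marginal effect
print(influence_score(X, y, (0,)), influence_score(X, y, (0, 1)))
# the pair (0, 1) scores far higher than variable 0 alone, which is what makes
# the score useful for screening interacting, informative variables.
```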
- Transforming Feature Space to Interpret Machine Learning Models [91.62936410696409]
This contribution proposes a novel approach that interprets machine-learning models through the lens of feature space transformations.
It can be used to enhance unconditional as well as conditional post-hoc diagnostic tools.
A case study on remote-sensing landcover classification with 46 features is used to demonstrate the potential of the proposed approach.
arXiv Detail & Related papers (2021-04-09T10:48:11Z)
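
A generic illustration of the transformation idea (not the paper's remote-sensing workflow; the PCA rotation, model and diagnostic are assumptions): run a standard post-hoc diagnostic, here permutation importance, on features that have first been rotated into uncorrelated components.

```python
# Diagnose the model in a transformed feature space: permuting an uncorrelated
# component avoids the unrealistic feature combinations that permuting
# correlated raw columns would create.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

pca = PCA(n_components=10).fit(X_tr)
model = RandomForestClassifier(random_state=0).fit(pca.transform(X_tr), y_tr)

r = permutation_importance(model, pca.transform(X_te), y_te,
                           n_repeats=10, random_state=0)
print("importance of transformed features:", np.round(r.importances_mean, 3))
```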
- A Diagnostic Study of Explainability Techniques for Text Classification [52.879658637466605]
We develop a list of diagnostic properties for evaluating existing explainability techniques.
We compare the saliency scores assigned by the explainability techniques with human annotations of salient input regions to find relations between a model's performance and the agreement of its rationales with human ones.
arXiv Detail & Related papers (2020-09-25T12:01:53Z)
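
One of the diagnostic properties, agreement between saliency scores and human rationales, can be computed along the lines of the sketch below (the token scores and annotations are made-up values, not taken from the paper's benchmarks):

```python
# Agreement between an explainer's token saliency and human-annotated rationales,
# scored as ranking quality (average precision).
import numpy as np
from sklearn.metrics import average_precision_score

def rationale_agreement(saliency, human_mask):
    """saliency: per-token scores from an explainer; human_mask: 0/1 annotations."""
    return average_precision_score(human_mask, np.abs(saliency))

saliency   = np.array([0.02, 0.61, 0.05, 0.40, 0.01, 0.33])   # one sentence
human_mask = np.array([0,    1,    0,    1,    0,    0])       # annotator highlights
print(round(rationale_agreement(saliency, human_mask), 3))
# averaging this over a dataset and relating it to model accuracy is the kind of
# comparison the diagnostic study carries out.
```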
- How does this interaction affect me? Interpretable attribution for feature interactions [19.979889568380464]
We propose an interaction attribution and detection framework called Archipelago.
Our experiments on standard annotation labels indicate our approach provides significantly more interpretable explanations than comparable methods.
We also provide accompanying visualizations of our approach that give new insights into deep neural networks.
arXiv Detail & Related papers (2020-06-19T05:14:24Z)
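
The interaction-detection step in frameworks of this kind can be sketched as a discrete mixed-difference test (a simplification, not the full Archipelago pipeline; the toy model and baseline are assumptions):

```python
# Pairwise interaction detection by discrete mixed differences: if the effect of
# setting feature i to its baseline depends on whether j is at its baseline,
# the pair interacts.
import numpy as np

def pairwise_interaction(f, x, baseline, i, j):
    """|f(x_i, x_j) - f(x_i, b_j) - f(b_i, x_j) + f(b_i, b_j)| at fixed context x."""
    def patch(idx_vals):
        z = x.copy()
        for k, v in idx_vals:
            z[k] = v
        return z
    return abs(f(patch([]))
               - f(patch([(j, baseline[j])]))
               - f(patch([(i, baseline[i])]))
               + f(patch([(i, baseline[i]), (j, baseline[j])])))

# toy model with a genuine x0*x1 interaction and an additive x2 term
f = lambda z: 2.0 * z[0] * z[1] + 0.5 * z[2]
x, baseline = np.array([1.0, 1.0, 1.0]), np.zeros(3)
print(pairwise_interaction(f, x, baseline, 0, 1))   # nonzero: interaction
print(pairwise_interaction(f, x, baseline, 0, 2))   # ~0: purely additive pair
```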
- Explaining Black Box Predictions and Unveiling Data Artifacts through Influence Functions [55.660255727031725]
Influence functions explain the decisions of a model by identifying influential training examples.
We conduct a comparison between influence functions and common word-saliency methods on representative tasks.
We develop a new measure based on influence functions that can reveal artifacts in training data.
arXiv Detail & Related papers (2020-05-14T00:45:23Z)
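
A textbook-style sketch of the influence-function computation for a small logistic-regression model, where the Hessian can be formed explicitly (the dataset, damping term and omission of the intercept are assumptions; the paper works with large NLP models and approximate inverse-Hessian products):

```python
# Influence of each training point on one test prediction:
#   I(z, z_test) = -grad L(z_test)^T  H^{-1}  grad L(z)
# computed exactly for logistic regression (no intercept, small damping added).
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X = StandardScaler().fit_transform(X)
clf = LogisticRegression(C=1e4, fit_intercept=False, max_iter=5000).fit(X, y)

p = clf.predict_proba(X)[:, 1]
grads = (p - y)[:, None] * X                       # per-example log-loss gradients
n, d = X.shape
H = (X * (p * (1 - p))[:, None]).T @ X / n + 1e-3 * np.eye(d)   # damped Hessian

x_t, y_t = X[0], y[0]                              # test point (reused from train for brevity)
g_t = (clf.predict_proba(x_t[None])[0, 1] - y_t) * x_t

influence = -grads @ np.linalg.solve(H, g_t)       # one value per training example
# most negative influence = upweighting that example reduces the test loss most
print("most helpful training points:", np.argsort(influence)[:5])
```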
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.