Explanation of Machine Learning Models Using Shapley Additive
Explanation and Application for Real Data in Hospital
- URL: http://arxiv.org/abs/2112.11071v1
- Date: Tue, 21 Dec 2021 10:08:31 GMT
- Title: Explanation of Machine Learning Models Using Shapley Additive
Explanation and Application for Real Data in Hospital
- Authors: Yasunobu Nohara, Koutarou Matsumoto, Hidehisa Soejima and Naoki
Nakashima
- Abstract summary: We propose two novel techniques for better interpretability of machine learning models.
We show how the A/G ratio works as an important prognostic factor for cerebral infarction using our hospital data and the proposed techniques.
- Score: 0.11470070927586014
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: When using machine learning techniques in decision-making processes, the
interpretability of the models is important. In the present paper, we adopted
the Shapley additive explanation (SHAP), which is based on fair profit
allocation among many stakeholders depending on their contribution, for
interpreting a gradient-boosting decision tree model using hospital data. For
better interpretability, we propose two novel techniques: (1) a new metric of
feature importance using SHAP and (2) a technique termed feature packing, which
packs multiple similar features into one grouped feature to allow easier
understanding of the model without reconstructing it.
We then compared the explanation results of the SHAP framework with those of
existing methods. In addition, we showed how the A/G ratio works as an
important prognostic factor for cerebral infarction using our hospital data and
the proposed techniques.
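The abstract names the two techniques but not their exact formulas, so the sketch below rests on common assumptions rather than the authors' code: feature importance is taken as the mean absolute SHAP value, and feature packing is approximated post hoc by summing the SHAP columns of a hand-picked group, which the additivity of SHAP values permits without retraining. The scikit-learn breast-cancer dataset and the "size" group are stand-ins for the hospital data.

```python
# A minimal sketch (not the authors' code): mean-|SHAP| feature importance and
# post-hoc "feature packing" by summing SHAP columns of similar features.
# Assumes the `shap` and `scikit-learn` packages; the dataset and group are stand-ins.
import numpy as np
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = GradientBoostingClassifier().fit(X, y)

# TreeExplainer gives one additive attribution per feature and sample (log-odds scale here).
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)            # shape: (n_samples, n_features)

# (1) A simple SHAP-based importance metric: mean absolute attribution per feature.
importance = np.abs(shap_values).mean(axis=0)
for name, imp in sorted(zip(X.columns, importance), key=lambda t: -t[1])[:5]:
    print(f"{name:25s} {imp:.4f}")

# (2) "Feature packing": because SHAP values are additive, similar features can be
# grouped after the fact by summing their columns -- no model retraining needed.
group = ["mean radius", "mean perimeter", "mean area"]    # hypothetical "size" group
idx = [X.columns.get_loc(c) for c in group]
packed_shap = shap_values[:, idx].sum(axis=1)             # one attribution for the group
print("packed 'size' importance:", np.abs(packed_shap).mean())
```

Because each row of shap_values sums (together with the base value) to the model output, the packed column can stand in for its member features in any downstream ranking or plot.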
Related papers
- Selecting Interpretability Techniques for Healthcare Machine Learning models [69.65384453064829]
In healthcare, there is a push to employ interpretable algorithms that assist healthcare professionals in several decision scenarios.
We overview a selection of eight algorithms, both post-hoc and model-based, that can be used for such purposes.
arXiv Detail & Related papers (2024-06-14T17:49:04Z)
- Evaluating Explanatory Capabilities of Machine Learning Models in Medical Diagnostics: A Human-in-the-Loop Approach [0.0]
We use Human-in-the-Loop techniques and medical guidelines as a source of domain knowledge to establish the importance of the features relevant to selecting a pancreatic cancer treatment.
We propose the use of similarity measures such as the weighted Jaccard similarity coefficient to facilitate interpretation of the explanatory results (a small sketch of this coefficient follows this entry).
arXiv Detail & Related papers (2024-03-28T20:11:34Z)
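The weighted Jaccard similarity coefficient is cited above without a formula; the sketch below assumes the standard definition, the sum of element-wise minima divided by the sum of element-wise maxima over two non-negative weight vectors. The feature names and weights are invented for illustration and are not from the paper.

```python
# A minimal sketch of the weighted Jaccard similarity between two feature-importance
# profiles (e.g., model attributions vs. guideline-derived weights).
def weighted_jaccard(a: dict, b: dict) -> float:
    """sum_i min(a_i, b_i) / sum_i max(a_i, b_i) over the union of features."""
    keys = set(a) | set(b)
    mins = sum(min(a.get(k, 0.0), b.get(k, 0.0)) for k in keys)
    maxs = sum(max(a.get(k, 0.0), b.get(k, 0.0)) for k in keys)
    return mins / maxs if maxs > 0 else 1.0

# Hypothetical weights, not taken from the paper.
model_importance = {"CA19-9": 0.40, "bilirubin": 0.25, "age": 0.20, "BMI": 0.15}
guideline_weight = {"CA19-9": 0.50, "bilirubin": 0.30, "age": 0.10, "weight loss": 0.10}
print(f"weighted Jaccard = {weighted_jaccard(model_importance, guideline_weight):.3f}")
```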
- A Perspective on Explainable Artificial Intelligence Methods: SHAP and LIME [4.328967621024592]
We propose a framework for the interpretation of two widely used XAI methods.
We discuss their outcomes in terms of model dependency and behavior in the presence of collinearity.
The results indicate that SHAP and LIME are highly affected by the adopted ML model and by feature collinearity, raising a note of caution on their usage and interpretation (a minimal side-by-side sketch follows this entry).
arXiv Detail & Related papers (2023-05-03T10:04:46Z)
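As a rough, hedged illustration of how SHAP and LIME are applied to the same model so their attributions can be compared, here is a sketch using the shap and lime packages on a stand-in dataset; it is not the paper's experimental setup. With strongly correlated features (e.g., radius/perimeter/area below), the two rankings can diverge, which is the caution raised above.

```python
# Not the paper's setup: fit one tree model, then compare SHAP and LIME
# attributions for the same instance. Dataset and instance are illustrative.
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier

data = load_breast_cancer()
X, y = data.data, data.target
model = GradientBoostingClassifier().fit(X, y)

# SHAP attributions for the first instance (log-odds scale for tree models).
shap_vals = shap.TreeExplainer(model).shap_values(X)[0]

# LIME attribution for the same instance (local linear surrogate on perturbed data).
lime_explainer = LimeTabularExplainer(X, feature_names=list(data.feature_names),
                                      mode="classification")
lime_exp = lime_explainer.explain_instance(X[0], model.predict_proba, num_features=5)

print("SHAP top features:", sorted(zip(data.feature_names, shap_vals),
                                   key=lambda t: -abs(t[1]))[:5])
print("LIME top features:", lime_exp.as_list())
```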
- MACE: An Efficient Model-Agnostic Framework for Counterfactual Explanation [132.77005365032468]
We propose a novel framework for Model-Agnostic Counterfactual Explanation (MACE).
In our MACE approach, we propose a novel RL-based method for finding good counterfactual examples and a gradient-less descent method for improving proximity.
Experiments on public datasets validate its effectiveness, with better validity, sparsity, and proximity (a naive counterfactual-search sketch, not the MACE method, follows this entry).
arXiv Detail & Related papers (2022-05-31T04:57:06Z)
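MACE's RL-based search is not reproduced here; the sketch below is only a naive, gradient-free counterfactual search that makes the three evaluation criteria concrete (validity: the prediction flips; sparsity: few features change; proximity: the change is small). The dataset, step size, and greedy heuristic are all illustrative assumptions.

```python
# A naive gradient-free counterfactual search (NOT MACE), for illustration only.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier

X, y = load_breast_cancer(return_X_y=True)
model = GradientBoostingClassifier().fit(X, y)
feature_means = X.mean(axis=0)

def naive_counterfactual(x, target=1, step=0.05, max_iter=100):
    """Greedily nudge the single feature whose move most increases the
    target-class probability, until the predicted class flips to `target`."""
    cf = x.copy()
    for _ in range(max_iter):
        if model.predict(cf.reshape(1, -1))[0] == target:
            break
        best_j, best_p = None, model.predict_proba(cf.reshape(1, -1))[0, target]
        for j in range(cf.size):
            trial = cf.copy()
            trial[j] += step * (feature_means[j] - x[j])   # small move toward the data mean
            p = model.predict_proba(trial.reshape(1, -1))[0, target]
            if p > best_p:
                best_j, best_p = j, p
        if best_j is None:          # no single-feature move helps any more
            break
        cf[best_j] += step * (feature_means[best_j] - x[best_j])
    return cf

x0 = X[y == 0][0]                   # an instance from the non-target class
cf = naive_counterfactual(x0)
print("validity :", bool(model.predict(cf.reshape(1, -1))[0] == 1))
print("sparsity :", int((~np.isclose(cf, x0)).sum()), "features changed")
print("proximity:", float(np.abs(cf - x0).sum()), "(L1 distance)")
```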
- Multivariate Data Explanation by Jumping Emerging Patterns Visualization [78.6363825307044]
We present VAX (multiVariate dAta eXplanation), a new VA method to support the identification and visual interpretation of patterns in multivariate data sets.
Unlike existing similar approaches, VAX uses the concept of Jumping Emerging Patterns to identify and aggregate several diversified patterns, producing explanations through logic combinations of data variables.
arXiv Detail & Related papers (2021-06-21T13:49:44Z)
- Explaining a Series of Models by Propagating Local Feature Attributions [9.66840768820136]
Pipelines involving several machine learning models improve performance in many domains but are difficult to understand.
We introduce a framework to propagate local feature attributions through complex pipelines of models based on a connection to the Shapley value.
Our framework enables us to draw higher-level conclusions based on groups of gene expression features for Alzheimer's and breast cancer histologic grade prediction.
arXiv Detail & Related papers (2021-04-30T22:20:58Z)
- Transforming Feature Space to Interpret Machine Learning Models [91.62936410696409]
This contribution proposes a novel approach that interprets machine-learning models through the lens of feature space transformations.
It can be used to enhance unconditional as well as conditional post-hoc diagnostic tools.
A case study on remote-sensing landcover classification with 46 features is used to demonstrate the potential of the proposed approach.
arXiv Detail & Related papers (2021-04-09T10:48:11Z)
- Beyond Trivial Counterfactual Explanations with Diverse Valuable Explanations [64.85696493596821]
In computer vision applications, generative counterfactual methods indicate how to perturb a model's input to change its prediction.
We propose a counterfactual method that learns a perturbation in a disentangled latent space that is constrained using a diversity-enforcing loss.
Our model improves the success rate of producing high-quality valuable explanations when compared to previous state-of-the-art methods.
arXiv Detail & Related papers (2021-03-18T12:57:34Z)
- A Diagnostic Study of Explainability Techniques for Text Classification [52.879658637466605]
We develop a list of diagnostic properties for evaluating existing explainability techniques.
We compare the saliency scores assigned by the explainability techniques with human annotations of salient input regions to find relations between a model's performance and the agreement of its rationales with human ones (a small agreement sketch follows this entry).
arXiv Detail & Related papers (2020-09-25T12:01:53Z)
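One simple way to quantify that agreement is to score the saliency ranking against the binary human rationale, for example with average precision; the sketch below is purely illustrative, with invented tokens, scores, and annotations rather than the paper's benchmark.

```python
# Illustrative only: agreement between a saliency explanation and a human
# rationale annotation, measured as average precision over tokens.
from sklearn.metrics import average_precision_score

tokens          = ["the", "film", "was", "utterly", "boring", "and", "overlong"]
saliency_scores = [0.02,  0.40,   0.03,  0.35,      0.80,     0.05,  0.55]  # from some explainer (made up)
human_rationale = [0,     0,      0,     1,         1,        0,     1]     # human-marked salient tokens (made up)

# How well does the saliency ranking recover the human-annotated tokens?
agreement = average_precision_score(human_rationale, saliency_scores)
print(f"saliency/human agreement (AP) = {agreement:.2f}")
```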
- A Semiparametric Approach to Interpretable Machine Learning [9.87381939016363]
Black box models in machine learning have demonstrated excellent predictive performance in complex problems and high-dimensional settings.
However, their lack of transparency and interpretability restricts the applicability of such models in critical decision-making processes.
We propose a novel approach to trading off interpretability and performance in prediction models using ideas from semiparametric statistics.
arXiv Detail & Related papers (2020-06-08T16:38:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.