An Empirical Comparison of Explainable Artificial Intelligence Methods
for Clinical Data: A Case Study on Traumatic Brain Injury
- URL: http://arxiv.org/abs/2208.06717v1
- Date: Sat, 13 Aug 2022 19:44:00 GMT
- Title: An Empirical Comparison of Explainable Artificial Intelligence Methods
for Clinical Data: A Case Study on Traumatic Brain Injury
- Authors: Amin Nayebi, Sindhu Tipirneni, Brandon Foreman, Chandan K. Reddy,
Vignesh Subbian
- Abstract summary: We implement two prediction models for short- and long-term outcomes of traumatic brain injury.
Six different interpretation techniques were used to describe both prediction models at the local and global levels.
The implemented methods were compared to one another in terms of several XAI characteristics such as understandability, fidelity, and stability.
- Score: 8.913544654492696
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: A longstanding challenge surrounding deep learning algorithms is unpacking
and understanding how they make their decisions. Explainable Artificial
Intelligence (XAI) offers methods to provide explanations of internal functions
of algorithms and reasons behind their decisions in ways that are interpretable
and understandable to human users. Numerous XAI approaches have been
developed thus far, and a comparative analysis of these strategies seems
necessary to discern their relevance to clinical prediction models. To this
end, we first implemented two prediction models for short- and long-term
outcomes of traumatic brain injury (TBI) utilizing structured tabular as well
as time-series physiologic data, respectively. Six different interpretation
techniques were used to describe both prediction models at the local and global
levels. We then performed a critical analysis of merits and drawbacks of each
strategy, highlighting the implications for researchers who are interested in
applying these methodologies. The implemented methods were compared to one
another in terms of several XAI characteristics such as understandability,
fidelity, and stability. Our findings show that SHAP is the most stable with
the highest fidelity but falls short of understandability. Anchors, on the
other hand, is the most understandable approach, but it is only applicable to
tabular data and not time series data.
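As a concrete illustration of the two methods highlighted in the abstract, the sketch below shows how SHAP and Anchors explanations are typically produced for a tabular outcome classifier. It is a minimal sketch, not the authors' pipeline: the feature names, synthetic data, and scikit-learn model are hypothetical stand-ins for the paper's structured TBI data and short-term outcome model, and it assumes the `shap` and `anchor-exp` packages.

```python
# Minimal sketch (assumed setup, not the authors' code): local explanations for a
# tabular outcome classifier with SHAP and Anchors. Data and feature names are synthetic.
import numpy as np
import shap                               # pip install shap
from anchor import anchor_tabular         # pip install anchor-exp
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
feature_names = ["age", "gcs_total", "pupil_reactivity", "hypoxia", "hypotension"]
X = rng.normal(size=(500, len(feature_names)))                         # stand-in for structured TBI data
y = (X[:, 1] + 0.5 * X[:, 3] + rng.normal(size=500) > 0).astype(int)   # synthetic binary outcome

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# SHAP: additive per-feature attributions for one patient (local explanation);
# averaging their magnitudes over many patients gives a global ranking.
shap_explainer = shap.TreeExplainer(model)
shap_values = shap_explainer.shap_values(X[:1])
print("SHAP attributions:", shap_values)

# Anchors: a human-readable if-then rule that locks in the same prediction.
anchor_explainer = anchor_tabular.AnchorTabularExplainer(
    ["unfavourable", "favourable"],   # class names
    feature_names,
    X,                                # training data used to build the rule search space
)
rule = anchor_explainer.explain_instance(X[0], model.predict, threshold=0.95)
print("Anchor rule:", " AND ".join(rule.names()))
print("precision=%.2f coverage=%.2f" % (rule.precision(), rule.coverage()))
```

In the paper's terms, the SHAP attribution vector lends itself to fidelity and stability measurements, while the anchor's rule, precision, and coverage are what make it the most understandable option for tabular data; as the abstract notes, Anchors has no counterpart for the time-series physiologic model.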
Related papers
- Gradient based Feature Attribution in Explainable AI: A Technical Review [13.848675695545909]
The surge in black-box AI models has prompted the need to explain their internal mechanisms and justify their reliability.
Gradient-based explanations can be directly adopted for neural network models; a minimal input-gradient sketch is included after this list.
We introduce both human and quantitative evaluations to measure algorithm performance.
arXiv Detail & Related papers (2024-03-15T15:49:31Z)
- Explaining Explainability: Towards Deeper Actionable Insights into Deep Learning through Second-order Explainability [70.60433013657693]
Second-order explainable AI (SOXAI) was recently proposed to extend explainable AI (XAI) from the instance level to the dataset level.
We demonstrate for the first time, via example classification and segmentation cases, that eliminating irrelevant concepts from the training set based on actionable insights from SOXAI can enhance a model's performance.
arXiv Detail & Related papers (2023-06-14T23:24:01Z)
- Topological Interpretability for Deep-Learning [0.30806551485143496]
Deep learning (DL) models cannot quantify the certainty of their predictions.
This work presents a method to infer prominent features in two DL classification models trained on clinical and non-clinical text.
arXiv Detail & Related papers (2023-05-15T13:38:13Z)
- A Brief Review of Explainable Artificial Intelligence in Healthcare [7.844015105790313]
XAI refers to the techniques and methods for building AI applications whose decisions can be understood and interpreted by end users.
Model explainability and interpretability are vital to the successful deployment of AI models in healthcare practices.
arXiv Detail & Related papers (2023-04-04T05:41:57Z)
- A method for comparing multiple imputation techniques: a case study on the U.S. National COVID Cohort Collaborative [1.259457977936316]
We numerically evaluate strategies for handling missing data in the context of statistical analysis.
Our approach could effectively highlight the most valid and performant missing-data handling strategy.
arXiv Detail & Related papers (2022-06-13T19:49:54Z)
- MERIt: Meta-Path Guided Contrastive Learning for Logical Reasoning [63.50909998372667]
We propose MERIt, a MEta-path guided contrastive learning method for logical ReasonIng of text.
Two novel strategies serve as indispensable components of our method.
arXiv Detail & Related papers (2022-03-01T11:13:00Z)
- A survey of Bayesian Network structure learning [8.411014222942168]
This paper provides a review of 61 algorithms proposed for learning BN structure from data.
The basic approach of each algorithm is described in consistent terms, and the similarities and differences between them highlighted.
Approaches for dealing with data noise in real-world datasets and incorporating expert knowledge into the learning process are also covered.
arXiv Detail & Related papers (2021-09-23T14:54:00Z)
- Counterfactual Explanations as Interventions in Latent Space [62.997667081978825]
Counterfactual explanations aim to provide end users with a set of features that need to be changed in order to achieve a desired outcome.
Current approaches rarely take into account the feasibility of actions needed to achieve the proposed explanations.
We present Counterfactual Explanations as Interventions in Latent Space (CEILS), a methodology to generate counterfactual explanations.
arXiv Detail & Related papers (2021-06-14T20:48:48Z)
- Deep Co-Attention Network for Multi-View Subspace Learning [73.3450258002607]
We propose a deep co-attention network for multi-view subspace learning.
It aims to extract both the common information and the complementary information in an adversarial setting.
In particular, it uses a novel cross reconstruction loss and leverages the label information to guide the construction of the latent representation.
arXiv Detail & Related papers (2021-02-15T18:46:44Z)
- Nonparametric Estimation of Heterogeneous Treatment Effects: From Theory to Learning Algorithms [91.3755431537592]
We analyze four broad meta-learning strategies which rely on plug-in estimation and pseudo-outcome regression.
We highlight how this theoretical reasoning can be used to guide principled algorithm design and translate our analyses into practice.
arXiv Detail & Related papers (2021-01-26T17:11:40Z)
- Double Robust Representation Learning for Counterfactual Prediction [68.78210173955001]
We propose a novel scalable method to learn double-robust representations for counterfactual predictions.
We make robust and efficient counterfactual predictions for both individual and average treatment effects.
The algorithm shows competitive performance with the state-of-the-art on real-world and synthetic data.
arXiv Detail & Related papers (2020-10-15T16:39:26Z)
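The gradient-based attribution review listed above centres on explanations obtained directly from a network's gradients. The following sketch illustrates the simplest such attribution, a vanilla input-gradient (saliency) map, on a toy PyTorch classifier; the model and data are made up, and this is a generic illustration rather than any specific method from the cited review.

```python
# Minimal sketch of vanilla gradient (saliency) attribution for a neural network.
# The toy model and random input are hypothetical; not code from the cited review.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))  # toy classifier
x = torch.randn(1, 8, requires_grad=True)                             # one input example

logits = model(x)
target = int(logits.argmax(dim=1))     # class whose score we attribute
logits[0, target].backward()           # gradient of the class score w.r.t. the input

saliency = x.grad.abs().squeeze()      # per-feature attribution magnitude
print("Input-gradient attributions:", saliency.tolist())
```

More refined variants (e.g., integrated gradients or SmoothGrad) reduce the noise of this raw gradient but follow the same basic pattern.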