Interpretability of machine learning based prediction models in
healthcare
- URL: http://arxiv.org/abs/2002.08596v2
- Date: Fri, 14 Aug 2020 06:36:06 GMT
- Title: Interpretability of machine learning based prediction models in
healthcare
- Authors: Gregor Stiglic, Primoz Kocbek, Nino Fijacko, Marinka Zitnik, Katrien
Verbert, Leona Cilar
- Abstract summary: We give an overview of interpretability approaches and provide examples of practical interpretability of machine learning in different areas of healthcare.
We highlight the importance of developing algorithmic solutions that can enable machine-learning driven decision making in high-stakes healthcare problems.
- Score: 8.799886951659627
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: There is a need to ensure that machine learning models are interpretable.
Higher interpretability of a model means easier comprehension and explanation of its
future predictions for end-users. Further, interpretable machine learning models allow
healthcare experts to make reasonable, data-driven decisions and to provide
personalized care, which can ultimately lead to a higher quality of service in
healthcare. Generally, we can classify interpretability approaches into two groups:
the first focuses on personalized interpretation (local interpretability), while the
second summarizes prediction models at the population level (global interpretability).
Alternatively, we can group interpretability
methods into model-specific techniques, which are designed to interpret
predictions generated by a specific model, such as a neural network, and
model-agnostic approaches, which provide easy-to-understand explanations of
predictions made by any machine learning model. Here, we give an overview of
interpretability approaches and provide examples of practical interpretability
of machine learning in different areas of healthcare, including predicting
health-related outcomes, optimizing treatments, and improving the efficiency of
screening for specific conditions. Further, we outline future directions for
interpretable machine learning and highlight the importance of developing
algorithmic solutions that can enable machine-learning driven decision making
in high-stakes healthcare problems.
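To make the local/global and model-agnostic distinctions concrete, the following is a
minimal sketch (not taken from the paper): it trains a random forest on synthetic data
with hypothetical clinical feature names, produces a global, model-agnostic summary via
permutation importance, and a local, per-patient explanation via SHAP values. The
feature names, the data-generating process, and the choice of scikit-learn and the
optional `shap` package are all illustrative assumptions.

```python
# Minimal sketch (not from the paper) contrasting global (population-level) and
# local (per-patient) interpretability on synthetic data. Feature names and the
# data-generating process are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
feature_names = ["age", "bmi", "systolic_bp", "hba1c"]  # hypothetical clinical features
X = rng.normal(size=(500, len(feature_names)))
# In this synthetic cohort, the outcome depends mostly on "hba1c" and "age".
y = (0.8 * X[:, 3] + 0.4 * X[:, 0] + rng.normal(scale=0.5, size=500) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Global interpretability: model-agnostic permutation importance summarizes the
# model's behavior over the whole test population.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, imp in zip(feature_names, result.importances_mean):
    print(f"global importance of {name}: {imp:.3f}")

# Local interpretability: per-feature contributions to a single patient's
# prediction, here via the optional `shap` package (one commonly used library).
try:
    import shap
    explainer = shap.TreeExplainer(model)
    print("local contributions for one patient:", explainer.shap_values(X_test[:1]))
except ImportError:
    print("install the `shap` package to compute local explanations")
```

In practice, the global view supports population-level auditing of a model, while the
local view can accompany the individual prediction shown to a clinician.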
Related papers
- Selecting Interpretability Techniques for Healthcare Machine Learning models [69.65384453064829]
In healthcare there is an ongoing effort to employ interpretable algorithms that assist healthcare professionals in several decision scenarios.
We give an overview of a selection of eight algorithms, both post-hoc and model-based, that can be used for such purposes.
arXiv Detail & Related papers (2024-06-14T17:49:04Z)
- Robust and Interpretable Medical Image Classifiers via Concept Bottleneck Models [49.95603725998561]
We propose a new paradigm to build robust and interpretable medical image classifiers with natural language concepts.
Specifically, we first query clinical concepts from GPT-4, then transform latent image features into explicit concepts with a vision-language model.
arXiv Detail & Related papers (2023-10-04T21:57:09Z)
- Predictability and Comprehensibility in Post-Hoc XAI Methods: A User-Centered Analysis [6.606409729669314]
Post-hoc explainability methods aim to clarify predictions of black-box machine learning models.
We conduct a user study to evaluate comprehensibility and predictability in two widely used tools: LIME and SHAP.
We find that the comprehensibility of SHAP is significantly reduced when explanations are provided for samples near a model's decision boundary.
arXiv Detail & Related papers (2023-09-21T11:54:20Z)
- Ecosystem-level Analysis of Deployed Machine Learning Reveals Homogeneous Outcomes [72.13373216644021]
We study the societal impact of machine learning by considering the collection of models that are deployed in a given context.
We find that deployed machine learning is prone to systemic failure, meaning some users are exclusively misclassified by all available models.
These examples demonstrate that ecosystem-level analysis has unique strengths for characterizing the societal impact of machine learning.
arXiv Detail & Related papers (2023-07-12T01:11:52Z)
- Explaining Explainability: Towards Deeper Actionable Insights into Deep Learning through Second-order Explainability [70.60433013657693]
Second-order explainable AI (SOXAI) was recently proposed to extend explainable AI (XAI) from the instance level to the dataset level.
We demonstrate for the first time, via example classification and segmentation cases, that eliminating irrelevant concepts from the training set based on actionable insights from SOXAI can enhance a model's performance.
arXiv Detail & Related papers (2023-06-14T23:24:01Z)
- Safe AI for health and beyond -- Monitoring to transform a health service [51.8524501805308]
We will assess the infrastructure required to monitor the outputs of a machine learning algorithm.
We will present two scenarios with examples of model monitoring and updating.
arXiv Detail & Related papers (2023-03-02T17:27:45Z)
- Clinical outcome prediction under hypothetical interventions -- a representation learning framework for counterfactual reasoning [31.97813934144506]
We introduce a new representation learning framework, which considers the provision of counterfactual explanations as an embedded property of the risk model.
Our results suggest that our proposed framework has the potential to help researchers and clinicians improve personalised care.
arXiv Detail & Related papers (2022-05-15T09:41:16Z)
- Explainable AI Enabled Inspection of Business Process Prediction Models [2.5229940062544496]
We present an approach that uses model explanations to investigate the reasoning behind machine-learned predictions.
A novel contribution of our approach is a model inspection procedure that leverages both the explanations generated by interpretable machine learning mechanisms and the contextual or domain knowledge extracted from event logs that record historical process executions.
arXiv Detail & Related papers (2021-07-16T06:51:18Z)
- Faithful and Plausible Explanations of Medical Code Predictions [12.156363504753244]
Explanations must balance faithfulness to the model's decision-making with their plausibility to a domain expert.
We train a proxy model that mimics the behavior of the trained model and provides fine-grained control over these trade-offs.
We evaluate our approach on the task of assigning ICD codes to clinical notes to demonstrate that explanations from the proxy model are faithful and replicate the trained model behavior.
arXiv Detail & Related papers (2021-04-16T05:13:36Z)
- Deep Co-Attention Network for Multi-View Subspace Learning [73.3450258002607]
We propose a deep co-attention network for multi-view subspace learning.
It aims to extract both the common information and the complementary information in an adversarial setting.
In particular, it uses a novel cross reconstruction loss and leverages the label information to guide the construction of the latent representation.
arXiv Detail & Related papers (2021-02-15T18:46:44Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.