Evaluating Explanatory Capabilities of Machine Learning Models in Medical Diagnostics: A Human-in-the-Loop Approach
- URL: http://arxiv.org/abs/2403.19820v1
- Date: Thu, 28 Mar 2024 20:11:34 GMT
- Title: Evaluating Explanatory Capabilities of Machine Learning Models in Medical Diagnostics: A Human-in-the-Loop Approach
- Authors: José Bobes-Bascarán, Eduardo Mosqueira-Rey, Ángel Fernández-Leal, Elena Hernández-Pereira, David Alonso-Ríos, Vicente Moret-Bonillo, Israel Figueirido-Arnoso, Yolanda Vidal-Ínsua,
- Abstract summary: We use Human-in-the-Loop techniques and medical guidelines as a source of domain knowledge to establish the importance of the features relevant to selecting a pancreatic cancer treatment.
We propose the use of similarity measures such as the weighted Jaccard Similarity coefficient to facilitate interpretation of explanatory results.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper presents a comprehensive study on the evaluation of the explanatory capabilities of machine learning models, with a focus on Decision Tree, Random Forest and XGBoost models on a pancreatic cancer dataset. We use Human-in-the-Loop related techniques and medical guidelines as a source of domain knowledge to establish the importance of the different features that are relevant to establishing a pancreatic cancer treatment. These features are used not only as a dimensionality reduction approach for the machine learning models, but also as a way to evaluate the explainability capabilities of the different models using agnostic and non-agnostic explainability techniques. To facilitate the interpretation of explanatory results, we propose the use of similarity measures such as the Weighted Jaccard Similarity coefficient. The goal is to select not only the best-performing model, but also the one that can best explain its conclusions and aligns most closely with human domain knowledge.
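To make the comparison concrete, here is a minimal sketch (not the paper's code) that trains a Random Forest on a synthetic stand-in dataset, extracts a non-agnostic explanation (the model's built-in impurity importances) and an agnostic one (permutation importance), and scores each against hypothetical guideline-derived weights with the Weighted Jaccard Similarity coefficient. The feature names, guideline weights and data are assumptions made for illustration only.

```python
# Illustrative sketch (not the paper's code): compare model feature importances
# against guideline-derived weights using the Weighted Jaccard Similarity coefficient.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Hypothetical clinical features and guideline weights (assumed for illustration only).
features = ["tumor_size", "ca19_9", "resectability", "age", "bmi"]
guideline_weights = np.array([0.30, 0.25, 0.25, 0.15, 0.05])

# Synthetic stand-in for the pancreatic cancer dataset.
X, y = make_classification(n_samples=500, n_features=len(features), random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Non-agnostic explanation: the model's own impurity-based importances.
builtin = model.feature_importances_

# Agnostic explanation: permutation importance (a SHAP analysis could be swapped in here).
perm = permutation_importance(model, X, y, n_repeats=10, random_state=0).importances_mean
perm = np.clip(perm, 0, None)  # negative importances carry no weight in the comparison

def weighted_jaccard(u, v):
    """Weighted Jaccard similarity: sum(min(u_i, v_i)) / sum(max(u_i, v_i))."""
    u = u / u.sum() if u.sum() > 0 else u
    v = v / v.sum() if v.sum() > 0 else v
    return float(np.minimum(u, v).sum() / np.maximum(u, v).sum())

for name, importances in [("built-in", builtin), ("permutation", perm)]:
    score = weighted_jaccard(importances, guideline_weights)
    print(f"{name:>11} vs guidelines: weighted Jaccard = {score:.3f}")
```

Normalising both vectors before taking the element-wise minimum and maximum means the coefficient compares relative rather than absolute importance, so importances computed on different scales remain comparable.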
Related papers
- Selecting Interpretability Techniques for Healthcare Machine Learning models [69.65384453064829]
In healthcare there is a push to employ interpretable algorithms that assist healthcare professionals in several decision scenarios.
We overview a selection of eight algorithms, both post-hoc and model-based, that can be used for such purposes.
arXiv Detail & Related papers (2024-06-14T17:49:04Z)
- Concept-based Explainable Malignancy Scoring on Pulmonary Nodules in CT Images [2.2120851074630177]
An interpretable model based on generalized additive models and concept-based learning is proposed.
The model detects a set of clinically significant attributes in addition to the final regression score and learns the association between the lung nodule attributes and a final diagnosis decision.
arXiv Detail & Related papers (2024-05-24T13:36:44Z)
- Decoding Decision Reasoning: A Counterfactual-Powered Model for Knowledge Discovery [6.1521675665532545]
In medical imaging, discerning the rationale behind an AI model's predictions is crucial for evaluating its reliability.
We propose an explainable model that is equipped with both decision reasoning and feature identification capabilities.
By implementing our method, we can efficiently identify and visualise class-specific features leveraged by the data-driven model.
arXiv Detail & Related papers (2024-05-23T19:00:38Z)
- Unified Uncertainty Estimation for Cognitive Diagnosis Models [70.46998436898205]
We propose a unified uncertainty estimation approach for a wide range of cognitive diagnosis models.
We decompose the uncertainty of diagnostic parameters into data aspect and model aspect.
Our method is effective and can provide useful insights into the uncertainty of cognitive diagnosis.
arXiv Detail & Related papers (2024-03-09T13:48:20Z)
- Interpretable Medical Image Classification using Prototype Learning and Privileged Information [0.0]
Interpretability is often an essential requirement in medical imaging.
In this work, we investigate whether additional information available during the training process can be used to create an understandable and powerful model.
We propose an innovative solution called Proto-Caps that leverages the benefits of capsule networks, prototype learning and the use of privileged information.
arXiv Detail & Related papers (2023-10-24T11:28:59Z)
- Pixel-Level Explanation of Multiple Instance Learning Models in Biomedical Single Cell Images [52.527733226555206]
We investigate the use of four attribution methods to explain multiple instance learning models.
We study two datasets of acute myeloid leukemia with over 100 000 single cell images.
We compare attribution maps with the annotations of a medical expert to see how the model's decision-making differs from the human standard (one generic way to score this kind of agreement is sketched after this list).
arXiv Detail & Related papers (2023-03-15T14:00:11Z)
- Benchmarking Heterogeneous Treatment Effect Models through the Lens of Interpretability [82.29775890542967]
Estimating personalized effects of treatments is a complex, yet pervasive problem.
Recent developments in the machine learning literature on heterogeneous treatment effect estimation gave rise to many sophisticated, but opaque, tools.
We use post-hoc feature importance methods to identify features that influence the model's predictions.
arXiv Detail & Related papers (2022-06-16T17:59:05Z)
- Explanation of Machine Learning Models Using Shapley Additive Explanation and Application for Real Data in Hospital [0.11470070927586014]
We propose two novel techniques for better interpretability of machine learning models.
We show how the A/G ratio works as an important prognostic factor for cerebral infarction using our hospital data and proposed techniques.
arXiv Detail & Related papers (2021-12-21T10:08:31Z)
- A multi-stage machine learning model on diagnosis of esophageal manometry [50.591267188664666]
The framework includes deep-learning models at the swallow-level stage and feature-based machine learning models at the study-level stage.
This is the first artificial-intelligence-style model to automatically predict the CC diagnosis of an HRM study from raw multi-swallow data.
arXiv Detail & Related papers (2021-06-25T20:09:23Z)
- A Diagnostic Study of Explainability Techniques for Text Classification [52.879658637466605]
We develop a list of diagnostic properties for evaluating existing explainability techniques.
We compare the saliency scores assigned by the explainability techniques with human annotations of salient input regions to find relations between a model's performance and the agreement of its rationales with human ones.
arXiv Detail & Related papers (2020-09-25T12:01:53Z)
- Statistical Exploration of Relationships Between Routine and Agnostic Features Towards Interpretable Risk Characterization [0.0]
How do we interpret the prognostic model for clinical implementation?
How can we identify potential information structures within sets of radiomic features?
And how can we recombine or exploit potential relationships between features towards improved interpretability?
arXiv Detail & Related papers (2020-01-28T14:27:09Z)
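Several of the papers above compare model attributions or saliency scores with expert annotations. As a rough, assumed illustration (not necessarily the method those papers use), the sketch below thresholds an attribution map to the size of the expert-marked region and reports intersection-over-union, together with a Spearman rank correlation between the raw attribution scores and the expert mask; the image size, attribution values and mask are invented for the example.

```python
# Illustrative sketch (assumed approach): score agreement between a model's
# attribution map and an expert annotation mask for a single image.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Hypothetical 64x64 attribution map and binary expert mask (invented for the example).
attribution = rng.random((64, 64))
expert_mask = np.zeros((64, 64), dtype=bool)
expert_mask[20:40, 20:40] = True  # region the expert marked as relevant

# Keep the top-k most attributed pixels, with k equal to the size of the expert region.
k = int(expert_mask.sum())
threshold = np.sort(attribution.ravel())[-k]
model_mask = attribution >= threshold

# Overlap between model and expert regions (intersection over union).
iou = np.logical_and(model_mask, expert_mask).sum() / np.logical_or(model_mask, expert_mask).sum()

# Rank agreement between raw attribution scores and the expert labels.
rho, _ = spearmanr(attribution.ravel(), expert_mask.ravel().astype(float))

print(f"IoU = {iou:.3f}, Spearman rho = {rho:.3f}")
```

With random attributions, as here, both scores stay near chance level; attributions that track the expert-marked region push the IoU and the correlation upward.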