Assisting clinical practice with fuzzy probabilistic decision trees
- URL: http://arxiv.org/abs/2304.07788v2
- Date: Wed, 26 Apr 2023 23:28:15 GMT
- Title: Assisting clinical practice with fuzzy probabilistic decision trees
- Authors: Emma L. Ambags, Giulia Capitoli, Vincenzo L'Imperio, Michele Provenzano, Marco S. Nobile, Pietro Liò
- Abstract summary: We propose FPT, a novel method that combines probabilistic trees and fuzzy logic to assist clinical practice.
We show that FPT and its predictions can assist clinical practice in an intuitive manner, with the use of a user-friendly interface specifically designed for this purpose.
- Score: 2.0999441362198907
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The need for fully human-understandable models is increasingly being
recognised as a central theme in AI research. The acceptance of AI models to
assist in decision making in sensitive domains will grow when these models are
interpretable, and this trend towards interpretable models will be amplified by
upcoming regulations. One of the killer applications of interpretable AI is
medical practice, which can benefit from accurate decision support
methodologies that inherently generate trust. In this work, we propose FPT
(MedFP), a novel method that combines probabilistic trees and fuzzy logic to
assist clinical practice. This approach is fully interpretable, as it allows
clinicians to generate, control, and verify the entire diagnostic procedure; one
of the methodology's strengths is its capability to reduce the frequency of
misdiagnoses by providing estimates of uncertainty and counterfactuals. Our
approach is applied as a proof-of-concept to two real medical scenarios:
classifying malignant thyroid nodules and predicting the risk of progression in
chronic kidney disease patients. Our results show that probabilistic fuzzy
decision trees can provide interpretable support to clinicians; furthermore,
introducing fuzzy variables into the probabilistic model brings significant
nuances that are lost when using the crisp thresholds set by traditional
probabilistic decision trees. We show that FPT and its predictions can assist
clinical practice in an intuitive manner, with the use of a user-friendly
interface specifically designed for this purpose. Moreover, we discuss the
interpretability of the FPT model.
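Although the abstract includes no code, the core idea can be illustrated with a minimal Python sketch. This is not the authors' MedFP implementation: the sigmoid membership function, the threshold/width values, the "nodule diameter" feature, and the leaf probabilities below are all hypothetical. The point is the contrast the abstract draws: a crisp tree routes a sample entirely down one branch, whereas a fuzzy split assigns partial membership to both branches and mixes the leaf class distributions accordingly.

```python
import numpy as np

def crisp_split(x, threshold):
    """Traditional crisp split: full membership to exactly one branch."""
    return (0.0, 1.0) if x >= threshold else (1.0, 0.0)

def fuzzy_split(x, threshold, width):
    """Sigmoid membership: samples near the threshold partially belong
    to both branches instead of being forced across a hard boundary."""
    right = 1.0 / (1.0 + np.exp(-(x - threshold) / width))
    return 1.0 - right, right

def fuzzy_predict(x, threshold, width, left_probs, right_probs):
    """Class distribution = membership-weighted mixture of the two leaves."""
    w_left, w_right = fuzzy_split(x, threshold, width)
    return w_left * np.asarray(left_probs) + w_right * np.asarray(right_probs)

# Hypothetical feature: nodule diameter in mm; leaves hold estimated
# [P(benign), P(malignant)] from training counts (values invented here).
print(crisp_split(9.8, threshold=10.0))        # (1.0, 0.0): all mass to the left leaf
print(fuzzy_predict(9.8, threshold=10.0, width=1.5,
                    left_probs=[0.85, 0.15],
                    right_probs=[0.30, 0.70])) # ~[0.59, 0.41]
```

For a value just below the threshold, the crisp rule commits fully to the "benign" leaf, while the fuzzy rule keeps part of the probability mass on the other branch; this is the kind of nuance the abstract says is lost with crisp thresholds.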
Related papers
- Analyzing the Effect of $k$-Space Features in MRI Classification Models [0.0]
We have developed an explainable AI methodology tailored for medical imaging.
We employ a Convolutional Neural Network (CNN) that analyzes MRI scans across both image and frequency domains.
This approach not only enhances early training efficiency but also deepens our understanding of how additional features impact the model predictions.
arXiv Detail & Related papers (2024-09-20T15:43:26Z)
- Bayesian Kolmogorov Arnold Networks (Bayesian_KANs): A Probabilistic Approach to Enhance Accuracy and Interpretability [1.90365714903665]
This study presents a novel framework called Bayesian Kolmogorov Arnold Networks (BKANs)
BKANs combines the expressive capacity of Kolmogorov Arnold Networks with Bayesian inference.
Our method provides useful insights into prediction confidence and decision boundaries and outperforms traditional deep learning models in terms of prediction accuracy.
arXiv Detail & Related papers (2024-08-05T10:38:34Z)
- Decoding Decision Reasoning: A Counterfactual-Powered Model for Knowledge Discovery [6.1521675665532545]
In medical imaging, discerning the rationale behind an AI model's predictions is crucial for evaluating its reliability.
We propose an explainable model that is equipped with both decision reasoning and feature identification capabilities.
By implementing our method, we can efficiently identify and visualise class-specific features leveraged by the data-driven model.
arXiv Detail & Related papers (2024-05-23T19:00:38Z)
- Unified Uncertainty Estimation for Cognitive Diagnosis Models [70.46998436898205]
We propose a unified uncertainty estimation approach for a wide range of cognitive diagnosis models.
We decompose the uncertainty of diagnostic parameters into a data aspect and a model aspect.
Our method is effective and can provide useful insights into the uncertainty of cognitive diagnosis.
arXiv Detail & Related papers (2024-03-09T13:48:20Z)
- Robust and Interpretable Medical Image Classifiers via Concept Bottleneck Models [49.95603725998561]
We propose a new paradigm to build robust and interpretable medical image classifiers with natural language concepts.
Specifically, we first query clinical concepts from GPT-4, then transform latent image features into explicit concepts with a vision-language model.
arXiv Detail & Related papers (2023-10-04T21:57:09Z)
- Informing clinical assessment by contextualizing post-hoc explanations of risk prediction models in type-2 diabetes [50.8044927215346]
We consider a comorbidity risk prediction scenario and focus on contexts regarding the patients' clinical state.
We employ several state-of-the-art LLMs to present contexts around risk prediction model inferences and evaluate their acceptability.
Our paper is one of the first end-to-end analyses identifying the feasibility and benefits of contextual explanations in a real-world clinical use case.
arXiv Detail & Related papers (2023-02-11T18:07:11Z)
- Context-dependent Explainability and Contestability for Trustworthy Medical Artificial Intelligence: Misclassification Identification of Morbidity Recognition Models in Preterm Infants [0.0]
Explainable AI (XAI) aims to address this requirement by clarifying AI reasoning to support the end users.
We built our methodology on three main pillars: decomposing the feature set by leveraging clinical context latent space, assessing the clinical association of global explanations, and Latent Space Similarity (LSS) based local explanations.
arXiv Detail & Related papers (2022-12-17T07:59:09Z)
- What Do You See in this Patient? Behavioral Testing of Clinical NLP Models [69.09570726777817]
We introduce an extendable testing framework that evaluates the behavior of clinical outcome models regarding changes of the input.
We show that model behavior varies drastically even when fine-tuned on the same data and that allegedly best-performing models have not always learned the most medically plausible patterns.
arXiv Detail & Related papers (2021-11-30T15:52:04Z)
- Deep Co-Attention Network for Multi-View Subspace Learning [73.3450258002607]
We propose a deep co-attention network for multi-view subspace learning.
It aims to extract both the common information and the complementary information in an adversarial setting.
In particular, it uses a novel cross reconstruction loss and leverages the label information to guide the construction of the latent representation.
arXiv Detail & Related papers (2021-02-15T18:46:44Z)
- Clinical Outcome Prediction from Admission Notes using Self-Supervised Knowledge Integration [55.88616573143478]
Outcome prediction from clinical text can prevent doctors from overlooking possible risks.
Diagnoses at discharge, procedures performed, in-hospital mortality and length-of-stay prediction are four common outcome prediction targets.
We propose clinical outcome pre-training to integrate knowledge about patient outcomes from multiple public sources.
arXiv Detail & Related papers (2021-02-08T10:26:44Z)
- (Un)fairness in Post-operative Complication Prediction Models [20.16366948502659]
We consider a real-life example of risk estimation before surgery and investigate the potential for bias or unfairness of a variety of algorithms.
Our approach creates transparent documentation of potential bias so that the users can apply the model carefully.
arXiv Detail & Related papers (2020-11-03T22:11:19Z)
This list is automatically generated from the titles and abstracts of the papers on this site.