Personalized and Reliable Decision Sets: Enhancing Interpretability in
Clinical Decision Support Systems
- URL: http://arxiv.org/abs/2107.07483v1
- Date: Thu, 15 Jul 2021 17:36:24 GMT
- Title: Personalized and Reliable Decision Sets: Enhancing Interpretability in
Clinical Decision Support Systems
- Authors: Francisco Valente, Simão Paredes, Jorge Henriques
- Abstract summary: The system combines a decision set of rules with a machine learning scheme to offer global and local interpretability.
The reliability analysis of individual predictions is also addressed, contributing to further personalized interpretability.
- Score: 0.08594140167290096
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this study, we present a novel clinical decision support system and
discuss its interpretability-related properties. It combines a decision set of
rules with a machine learning scheme to offer global and local
interpretability. More specifically, machine learning is used to predict the
likelihood that each of those rules is correct for a particular patient, which
may also contribute to better predictive performance. Moreover, the
reliability analysis of individual predictions is also addressed, contributing
to further personalized interpretability. The combination of these several
elements may be crucial to obtain the clinical stakeholders' trust, leading to
a better assessment of patients' conditions and improvement of the physicians'
decision-making.
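The abstract's central idea, a decision set of rules whose individual conclusions are weighted by an ML estimate of how likely each rule is to be correct for the patient at hand, can be sketched as a reliability-weighted vote. Everything below (the toy rules, thresholds, and fixed reliability values) is an illustrative assumption, not the authors' implementation; a real system would train a per-rule reliability model on labeled clinical data.

```python
# Sketch: a decision set combined with per-rule, per-patient reliability
# estimates. Rules and reliability values are hypothetical placeholders.

def rule_high_age(patient):
    # Toy rule: older patients are flagged as high risk.
    return patient["age"] > 70

def rule_low_bp(patient):
    # Toy rule: low systolic blood pressure suggests high risk.
    return patient["systolic_bp"] < 90

def rule_young(patient):
    # Toy rule: younger patients are flagged as low risk.
    return patient["age"] < 50

# Each rule pairs a firing condition with the outcome it predicts
# (1 = high risk, 0 = low risk).
RULES = [
    (rule_high_age, 1),
    (rule_low_bp, 1),
    (rule_young, 0),
]

def rule_reliability(rule_idx, patient):
    # Stand-in for the ML scheme that estimates, for this patient, the
    # probability that a given rule's conclusion is correct. Here it is a
    # fixed per-rule value; in the paper's setting it would be learned.
    return [0.8, 0.6, 0.7][rule_idx]

def predict(patient):
    # Weighted vote: each firing rule contributes its predicted outcome,
    # weighted by its estimated reliability for this patient.
    score, weight = 0.0, 0.0
    for idx, (rule, outcome) in enumerate(RULES):
        if rule(patient):
            p = rule_reliability(idx, patient)
            score += p * outcome
            weight += p
    if weight == 0.0:
        return 0, 0.0  # no rule fired; default to low risk
    confidence = score / weight
    return (1 if confidence >= 0.5 else 0), confidence
```

The per-patient reliability term is what personalizes the otherwise global decision set: two patients satisfying the same rules can receive different predictions, and the aggregated weight doubles as a reliability indicator for the individual prediction.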
Related papers
- A machine learning framework for interpretable predictions in patient pathways: The case of predicting ICU admission for patients with symptoms of sepsis [3.5280004326441365]
PatWay-Net is an ML framework designed for interpretable predictions of admission to the intensive care unit for patients with sepsis.
We propose a novel type of recurrent neural network and combine it with multi-layer perceptrons to process the patient pathways.
We demonstrate its utility through a comprehensive dashboard that visualizes patient health trajectories, predictive outcomes, and associated risks.
arXiv Detail & Related papers (2024-05-21T20:31:42Z)
- Explainable AI for clinical risk prediction: a survey of concepts, methods, and modalities [2.9404725327650767]
This survey reviews progress in developing explainable models for clinical risk prediction.
It emphasizes the need for external validation and the combination of diverse interpretability methods.
An end-to-end approach to explainability in clinical risk prediction is essential for success.
arXiv Detail & Related papers (2023-08-16T14:51:51Z)
- Assisting clinical practice with fuzzy probabilistic decision trees [2.0999441362198907]
We propose FPT, a novel method that combines probabilistic trees and fuzzy logic to assist clinical practice.
We show that FPT and its predictions can assist clinical practice in an intuitive manner, with the use of a user-friendly interface specifically designed for this purpose.
arXiv Detail & Related papers (2023-04-16T14:05:16Z)
- Informing clinical assessment by contextualizing post-hoc explanations of risk prediction models in type-2 diabetes [50.8044927215346]
We consider a comorbidity risk prediction scenario and focus on contexts regarding the patient's clinical state.
We employ several state-of-the-art LLMs to present contexts around risk prediction model inferences and evaluate their acceptability.
Our paper is one of the first end-to-end analyses identifying the feasibility and benefits of contextual explanations in a real-world clinical use case.
arXiv Detail & Related papers (2023-02-11T18:07:11Z)
- This Patient Looks Like That Patient: Prototypical Networks for Interpretable Diagnosis Prediction from Clinical Text [56.32427751440426]
In clinical practice, such models must not only be accurate but also provide doctors with interpretable and helpful results.
We introduce ProtoPatient, a novel method based on prototypical networks and label-wise attention.
We evaluate the model on two publicly available clinical datasets and show that it outperforms existing baselines.
arXiv Detail & Related papers (2022-10-16T10:12:07Z)
- POETREE: Interpretable Policy Learning with Adaptive Decision Trees [78.6363825307044]
POETREE is a novel framework for interpretable policy learning.
It builds probabilistic tree policies determining physician actions based on patients' observations and medical history.
It outperforms the state-of-the-art on real and synthetic medical datasets.
arXiv Detail & Related papers (2022-03-15T16:50:52Z)
- COVID-Net Clinical ICU: Enhanced Prediction of ICU Admission for COVID-19 Patients via Explainability and Trust Quantification [71.80459780697956]
We introduce COVID-Net Clinical ICU, a neural network for ICU admission prediction based on patient clinical data.
The proposed COVID-Net Clinical ICU was built using a clinical dataset from Hospital Sirio-Libanes comprising 1,925 COVID-19 patients.
We conducted system-level insight discovery using a quantitative explainability strategy to study the decision-making impact of different clinical features.
arXiv Detail & Related papers (2021-09-14T14:16:32Z)
- Improving the compromise between accuracy, interpretability and personalization of rule-based machine learning in medical problems [0.08594140167290096]
We introduce a new component to predict if a given rule will be correct or not for a particular patient, which introduces personalization into the procedure.
Validation results on three public clinical datasets show that it also increases the predictive performance of the selected set of rules.
arXiv Detail & Related papers (2021-06-15T01:19:04Z)
- Clinical Outcome Prediction from Admission Notes using Self-Supervised Knowledge Integration [55.88616573143478]
Outcome prediction from clinical text can prevent doctors from overlooking possible risks.
Diagnoses at discharge, procedures performed, in-hospital mortality, and length of stay are four common outcome prediction targets.
We propose clinical outcome pre-training to integrate knowledge about patient outcomes from multiple public sources.
arXiv Detail & Related papers (2021-02-08T10:26:44Z)
- Opportunities of a Machine Learning-based Decision Support System for Stroke Rehabilitation Assessment [64.52563354823711]
Rehabilitation assessment is critical to determine an adequate intervention for a patient.
Current assessment practice relies mainly on the therapist's experience, and assessments are performed infrequently due to limited therapist availability.
We developed an intelligent decision support system that can identify salient features of assessment using reinforcement learning.
arXiv Detail & Related papers (2020-02-27T17:04:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.