Rationale production to support clinical decision-making
- URL: http://arxiv.org/abs/2111.07611v1
- Date: Mon, 15 Nov 2021 09:02:10 GMT
- Title: Rationale production to support clinical decision-making
- Authors: Niall Taylor, Lei Sha, Dan W Joyce, Thomas Lukasiewicz, Alejo Nevado-Holgado, Andrey Kormilitzin
- Abstract summary: We apply InfoCal to the task of predicting hospital readmission using hospital discharge notes.
We find that each presented model, paired with selected interpretability or feature importance methods, yields varying results.
- Score: 31.66739991129112
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The development of neural networks for clinical artificial intelligence (AI)
is reliant on interpretability, transparency, and performance. The need to
delve into the black-box neural network and derive interpretable explanations
of model output is paramount. A task of high clinical importance is predicting
the likelihood of a patient being readmitted to hospital in the near future to
enable efficient triage. With the increasing adoption of electronic health
records (EHRs), there is great interest in applications of natural language
processing (NLP) to clinical free-text contained within EHRs. In this work, we
apply InfoCal, the current state-of-the-art model that produces extractive
rationales for its predictions, to the task of predicting hospital readmission
using hospital discharge notes. We compare extractive rationales produced by
InfoCal to competitive transformer-based models pretrained on clinical text
data and for which the attention mechanism can be used for interpretation. We
find that each presented model, paired with selected interpretability or feature
importance methods, yields varying results, with clinical language domain
expertise and pretraining critical to both performance and subsequent
interpretability.
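The abstract compares extractive rationales with attention-based interpretation, where a transformer's attention weights over input tokens are read as importance scores. A minimal, self-contained sketch of that idea follows; the token embeddings and query vector here are made up for illustration, and a real system would take them from a pretrained clinical transformer rather than hand-written lists.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention_importance(query, keys):
    """Scaled dot-product attention weights over tokens.

    A toy illustration of attention-as-interpretation: the weight
    assigned to each token is treated as its importance score.
    """
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    return softmax(scores)

# Hypothetical 2-d embeddings for three tokens of a discharge note.
tokens = ["patient", "readmitted", "stable"]
keys = [[0.1, 0.2], [0.9, 0.8], [0.2, 0.1]]
query = [1.0, 1.0]

weights = attention_importance(query, keys)
ranked = sorted(zip(tokens, weights), key=lambda tw: -tw[1])
```

With these toy vectors the token "readmitted" receives the largest weight, which is the sense in which attention is used as a (contested) feature-importance signal in the works listed below.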
Related papers
- Leveraging Large Language Models through Natural Language Processing to provide interpretable Machine Learning predictions of mental deterioration in real time [5.635300481123079]
Based on official estimates, 50 million people worldwide are affected by dementia, and this number increases by 10 million new patients every year.
To this end, Artificial Intelligence and computational linguistics can be exploited for natural language analysis, personalized assessment, monitoring, and treatment.
In this work, we contribute an affordable, flexible, non-invasive, personalized diagnostic system.
arXiv Detail & Related papers (2024-09-05T09:27:05Z)
- Dr-LLaVA: Visual Instruction Tuning with Symbolic Clinical Grounding [53.629132242389716]
Vision-Language Models (VLM) can support clinicians by analyzing medical images and engaging in natural language interactions.
VLMs often exhibit "hallucinogenic" behavior, generating textual outputs not grounded in contextual multimodal information.
We propose a new alignment algorithm that uses symbolic representations of clinical reasoning to ground VLMs in medical knowledge.
arXiv Detail & Related papers (2024-05-29T23:19:28Z)
- An Interpretable Deep-Learning Framework for Predicting Hospital Readmissions From Electronic Health Records [2.156208381257605]
We propose a novel, interpretable deep-learning framework for predicting unplanned hospital readmissions.
We validate our system on the two predictive tasks of hospital readmission within 30 and 180 days, using real-world data.
arXiv Detail & Related papers (2023-10-16T08:48:52Z)
- QXAI: Explainable AI Framework for Quantitative Analysis in Patient Monitoring Systems [9.29069202652354]
An Explainable AI for Quantitative analysis (QXAI) framework is proposed with post-hoc model explainability and intrinsic explainability for regression and classification tasks.
We adopted the artificial neural networks (ANN) and attention-based Bidirectional LSTM (BiLSTM) models for the prediction of heart rate and classification of physical activities based on sensor data.
arXiv Detail & Related papers (2023-09-19T03:50:30Z)
- Self-Verification Improves Few-Shot Clinical Information Extraction [73.6905567014859]
Large language models (LLMs) have shown the potential to accelerate clinical curation via few-shot in-context learning.
However, they still struggle with accuracy and interpretability, especially in mission-critical domains such as health.
Here, we explore a general mitigation framework using self-verification, which leverages the LLM to provide provenance for its own extraction and check its own outputs.
arXiv Detail & Related papers (2023-05-30T22:05:11Z)
- SPeC: A Soft Prompt-Based Calibration on Performance Variability of Large Language Model in Clinical Notes Summarization [50.01382938451978]
We introduce a model-agnostic pipeline that employs soft prompts to diminish variance while preserving the advantages of prompt-based summarization.
Experimental findings indicate that our method not only bolsters performance but also effectively curbs variance for various language models.
arXiv Detail & Related papers (2023-03-23T04:47:46Z)
- This Patient Looks Like That Patient: Prototypical Networks for Interpretable Diagnosis Prediction from Clinical Text [56.32427751440426]
In clinical practice, such models must not only be accurate, but also provide doctors with interpretable and helpful results.
We introduce ProtoPatient, a novel method based on prototypical networks and label-wise attention.
We evaluate the model on two publicly available clinical datasets and show that it outperforms existing baselines.
arXiv Detail & Related papers (2022-10-16T10:12:07Z)
- DR.BENCH: Diagnostic Reasoning Benchmark for Clinical Natural Language Processing [5.022185333260402]
DR.BENCH (Diagnostic Reasoning Benchmark) is a new benchmark for developing and evaluating cNLP models with clinical diagnostic reasoning ability.
DR.BENCH is the first clinical suite of tasks designed to be a natural language generation framework to evaluate pre-trained language models.
arXiv Detail & Related papers (2022-09-29T16:05:53Z)
- A Multimodal Transformer: Fusing Clinical Notes with Structured EHR Data for Interpretable In-Hospital Mortality Prediction [8.625186194860696]
We provide a novel multimodal transformer to fuse clinical notes and structured EHR data for better prediction of in-hospital mortality.
To improve interpretability, we propose an integrated gradients (IG) method to select important words in clinical notes.
We also investigate the significance of domain adaptive pretraining and task adaptive fine-tuning on the Clinical BERT.
arXiv Detail & Related papers (2022-08-09T03:49:52Z)
- VBridge: Connecting the Dots Between Features, Explanations, and Data for Healthcare Models [85.4333256782337]
VBridge is a visual analytics tool that seamlessly incorporates machine learning explanations into clinicians' decision-making workflow.
We identified three key challenges, including clinicians' unfamiliarity with ML features, lack of contextual information, and the need for cohort-level evidence.
We demonstrated the effectiveness of VBridge through two case studies and expert interviews with four clinicians.
arXiv Detail & Related papers (2021-08-04T17:34:13Z)
- Clinical Outcome Prediction from Admission Notes using Self-Supervised Knowledge Integration [55.88616573143478]
Outcome prediction from clinical text can prevent doctors from overlooking possible risks.
Diagnoses at discharge, procedures performed, in-hospital mortality and length-of-stay prediction are four common outcome prediction targets.
We propose clinical outcome pre-training to integrate knowledge about patient outcomes from multiple public sources.
arXiv Detail & Related papers (2021-02-08T10:26:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.