VBridge: Connecting the Dots Between Features, Explanations, and Data
for Healthcare Models
- URL: http://arxiv.org/abs/2108.02550v1
- Date: Wed, 4 Aug 2021 17:34:13 GMT
- Title: VBridge: Connecting the Dots Between Features, Explanations, and Data
for Healthcare Models
- Authors: Furui Cheng, Dongyu Liu, Fan Du, Yanna Lin, Alexandra Zytek, Haomin
Li, Huamin Qu, Kalyan Veeramachaneni
- Abstract summary: VBridge is a visual analytics tool that seamlessly incorporates machine learning explanations into clinicians' decision-making workflow.
We identified three key challenges: clinicians' unfamiliarity with ML features, lack of contextual information, and the need for cohort-level evidence.
We demonstrated the effectiveness of VBridge through two case studies and expert interviews with four clinicians.
- Score: 85.4333256782337
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine learning (ML) is increasingly applied to Electronic Health Records
(EHRs) to solve clinical prediction tasks. Although many ML models perform
promisingly, issues with model transparency and interpretability limit their
adoption in clinical practice. Directly using existing explainable ML
techniques in clinical settings can be challenging. Through literature surveys
and collaborations with six clinicians with an average of 17 years of clinical
experience, we identified three key challenges: clinicians'
unfamiliarity with ML features, lack of contextual information, and the need
for cohort-level evidence. Following an iterative design process, we further
designed and developed VBridge, a visual analytics tool that seamlessly
incorporates ML explanations into clinicians' decision-making workflow. The
system includes a novel hierarchical display of contribution-based feature
explanations and enriched interactions that connect the dots between ML
features, explanations, and data. We demonstrated the effectiveness of VBridge
through two case studies and expert interviews with four clinicians, showing
that visually associating model explanations with patients' situational records
can help clinicians better interpret and use model predictions when making
clinical decisions. We further derived a list of design implications for
developing future explainable ML tools to support clinical decision-making.
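VBridge's implementation is not reproduced here; as a minimal sketch of the idea behind a hierarchical display of contribution-based feature explanations, the snippet below rolls per-feature contribution scores (e.g., SHAP values) up to clinician-familiar groups. The feature names and grouping are invented for illustration.

```python
# Sketch: roll per-feature contribution scores (e.g., SHAP values) up a
# clinician-oriented hierarchy, so explanations can be shown at the level
# of familiar concepts ("Vital Signs") rather than raw engineered features.
# The hierarchy and feature names below are invented for illustration.

feature_contributions = {
    "heart_rate_mean_24h": 0.12,
    "heart_rate_trend_24h": 0.05,
    "lactate_max_24h": 0.21,
    "creatinine_last": -0.04,
}

hierarchy = {
    "Vital Signs": ["heart_rate_mean_24h", "heart_rate_trend_24h"],
    "Lab Tests": ["lactate_max_24h", "creatinine_last"],
}

def aggregate(hierarchy, contributions):
    """Sum signed contributions within each clinical group."""
    return {
        group: sum(contributions.get(f, 0.0) for f in features)
        for group, features in hierarchy.items()
    }

print(aggregate(hierarchy, feature_contributions))
# {'Vital Signs': 0.17, 'Lab Tests': 0.17}
```

Showing signed group totals first, with drill-down to the underlying features, is the kind of hierarchy-then-detail presentation the abstract describes.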
Related papers
- Dr-LLaVA: Visual Instruction Tuning with Symbolic Clinical Grounding [53.629132242389716]
Vision-Language Models (VLM) can support clinicians by analyzing medical images and engaging in natural language interactions.
VLMs often exhibit "hallucinogenic" behavior, generating textual outputs not grounded in contextual multimodal information.
We propose a new alignment algorithm that uses symbolic representations of clinical reasoning to ground VLMs in medical knowledge.
arXiv Detail & Related papers (2024-05-29T23:19:28Z)
- Large Language Models in the Clinic: A Comprehensive Benchmark [63.21278434331952]
We build a benchmark ClinicBench to better understand large language models (LLMs) in the clinic.
We first collect eleven existing datasets covering diverse clinical language generation, understanding, and reasoning tasks.
We then construct six novel datasets and clinical tasks that are complex but common in real-world practice.
We conduct an extensive evaluation of twenty-two LLMs under both zero-shot and few-shot settings.
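ClinicBench's actual prompts are not shown here; the helper below is a generic sketch of the difference between the zero-shot and few-shot settings it evaluates, with placeholder questions and exemplars.

```python
# Generic zero-shot vs. few-shot prompt construction, as typically used in
# LLM benchmarks; the exemplars and questions are placeholders, not
# ClinicBench's actual prompts.

def build_prompt(question, exemplars=()):
    parts = []
    for q, a in exemplars:  # an empty sequence yields a zero-shot prompt
        parts.append(f"Q: {q}\nA: {a}")
    parts.append(f"Q: {question}\nA:")
    return "\n\n".join(parts)

zero_shot = build_prompt("Which lab test confirms diabetic ketoacidosis?")
few_shot = build_prompt(
    "Which lab test confirms diabetic ketoacidosis?",
    exemplars=[("What does an elevated troponin suggest?", "Myocardial injury.")],
)
```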
arXiv Detail & Related papers (2024-04-25T15:51:06Z)
- Dynamic Q&A of Clinical Documents with Large Language Models [3.021316686584699]
This work introduces a natural language interface using large language models (LLMs) for dynamic question-answering on clinical notes.
Experiments, utilizing various embedding models and advanced LLMs, show Wizard Vicuna's superior accuracy, albeit with high compute demands.
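As a minimal sketch of the retrieval step behind such a Q&A interface, the snippet below selects the note chunk most similar to a question by cosine similarity; TF-IDF stands in for the paper's embedding models, and the notes are invented.

```python
# Minimal retrieval step for dynamic Q&A over clinical notes: find the
# chunk most similar to the question, then hand it to an LLM as context.
# TF-IDF stands in for the paper's embedding models; notes are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

chunks = [
    "Patient admitted with chest pain; troponin elevated at 0.8 ng/mL.",
    "Discharge medications include metoprolol and aspirin.",
    "Echocardiogram shows ejection fraction of 45 percent.",
]
question = "What was the troponin level?"

vectorizer = TfidfVectorizer()
chunk_vecs = vectorizer.fit_transform(chunks)
question_vec = vectorizer.transform([question])

best = cosine_similarity(question_vec, chunk_vecs).argmax()
context = chunks[best]  # passed to an LLM together with the question
print(context)
```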
arXiv Detail & Related papers (2024-01-19T14:50:22Z)
- Clairvoyance: A Pipeline Toolkit for Medical Time Series [95.22483029602921]
Time-series learning is the bread and butter of data-driven clinical decision support.
Clairvoyance proposes a unified, end-to-end, autoML-friendly pipeline that serves as a software toolkit.
Clairvoyance is the first to demonstrate the viability of a comprehensive and automatable pipeline for clinical time-series ML.
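Clairvoyance's own API is not reproduced here; the sketch below only shows the general shape of such an end-to-end pipeline (imputation, scaling, prediction), using scikit-learn as a stand-in on toy per-patient features.

```python
# General shape of an end-to-end clinical prediction pipeline
# (imputation -> scaling -> prediction), with scikit-learn as a stand-in;
# this is not Clairvoyance's actual API, and the toy data here flattens
# time series into per-patient summary features.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

X = np.array([[88.0, np.nan], [110.0, 2.4], [72.0, 1.1], [95.0, np.nan]])
y = np.array([0, 1, 0, 1])

pipeline = Pipeline([
    ("impute", SimpleImputer(strategy="mean")),
    ("scale", StandardScaler()),
    ("model", LogisticRegression()),
])
pipeline.fit(X, y)
print(pipeline.predict_proba(X)[:, 1])
```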
arXiv Detail & Related papers (2023-10-28T12:08:03Z)
- Adapted Large Language Models Can Outperform Medical Experts in Clinical Text Summarization [8.456700096020601]
Large language models (LLMs) have shown promise in natural language processing (NLP), but their effectiveness on a diverse range of clinical summarization tasks remains unproven.
In this study, we apply adaptation methods to eight LLMs, spanning four distinct clinical summarization tasks.
A clinical reader study with ten physicians evaluates summary completeness, correctness, and conciseness; in a majority of cases, summaries from our best adapted LLMs are either equivalent (45%) or superior (36%) to summaries from medical experts.
arXiv Detail & Related papers (2023-09-14T05:15:01Z)
- A Transformer-based representation-learning model with unified processing of multimodal input for clinical diagnostics [63.106382317917344]
We report a Transformer-based representation-learning model as a clinical diagnostic aid that processes multimodal input in a unified manner.
The unified model outperformed an image-only model and non-unified multimodal diagnosis models in the identification of pulmonary diseases.
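The paper's architecture is not reproduced here; the sketch below shows the unified idea in minimal form: project image patches and text tokens into one shared embedding space and run a single Transformer encoder over the concatenation. All dimensions are invented.

```python
# Minimal sketch of unified multimodal processing: project image patch
# features and text tokens into one shared embedding space, then run a
# single Transformer encoder over the concatenated sequence. Dimensions
# are invented; this is not the paper's actual architecture.
import torch
import torch.nn as nn

d_model = 64
image_proj = nn.Linear(128, d_model)      # 128-dim image patch features
text_proj = nn.Embedding(1000, d_model)   # vocabulary of 1000 text tokens
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True),
    num_layers=2,
)
classifier = nn.Linear(d_model, 2)        # e.g., disease present / absent

image_feats = torch.randn(1, 16, 128)       # 16 image patches
text_ids = torch.randint(0, 1000, (1, 32))  # 32 clinical-text tokens

tokens = torch.cat([image_proj(image_feats), text_proj(text_ids)], dim=1)
logits = classifier(encoder(tokens).mean(dim=1))  # pooled prediction
```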
arXiv Detail & Related papers (2023-06-01T16:23:47Z)
- Informing clinical assessment by contextualizing post-hoc explanations of risk prediction models in type-2 diabetes [50.8044927215346]
We consider a comorbidity risk prediction scenario and focus on contexts regarding the patients' clinical state.
We employ several state-of-the-art LLMs to present contexts around risk prediction model inferences and evaluate their acceptability.
Our paper is one of the first end-to-end analyses identifying the feasibility and benefits of contextual explanations in a real-world clinical use case.
arXiv Detail & Related papers (2023-02-11T18:07:11Z)
- Context-dependent Explainability and Contestability for Trustworthy Medical Artificial Intelligence: Misclassification Identification of Morbidity Recognition Models in Preterm Infants [0.0]
Explainable AI (XAI) aims to clarify AI reasoning to support end users.
We built our methodology on three main pillars: decomposing the feature set by leveraging a clinical-context latent space, assessing the clinical association of global explanations, and Latent Space Similarity (LSS)-based local explanations.
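As a rough sketch of the latent-space-similarity idea (not the paper's actual method), a prediction for one case can be supported by retrieving its nearest neighbors in a learned latent space; the vectors and labels below are made up.

```python
# Rough sketch of LSS-style local explanation: support a model's
# prediction for one infant by retrieving the most similar cases in a
# learned latent space. Latent vectors and labels are made up; the
# paper's actual method is more involved.
import numpy as np

latent = np.array([[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8]])
labels = ["morbidity", "morbidity", "healthy", "healthy"]
query = np.array([0.85, 0.15])  # latent vector of the case to explain

dists = np.linalg.norm(latent - query, axis=1)
neighbors = dists.argsort()[:2]
print([(labels[i], round(float(dists[i]), 3)) for i in neighbors])
# Both nearest cases are "morbidity", supporting the model's prediction.
```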
arXiv Detail & Related papers (2022-12-17T07:59:09Z)
- Modelling Patient Trajectories Using Multimodal Information [0.0]
We propose a solution to model patient trajectories that combines different types of information and considers the temporal aspect of clinical data.
The developed solution was evaluated on two different clinical outcomes, unexpected patient readmission and disease progression.
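As a hedged sketch of one common way to realize this (not the paper's exact model), a recurrent encoder can capture the temporal sequence of visits while static patient information is concatenated before the prediction head; all dimensions and data are invented.

```python
# Sketch of one common way to model patient trajectories: encode the
# visit sequence with a GRU and combine it with static patient features.
# Dimensions and data are invented; this is not the paper's exact model.
import torch
import torch.nn as nn

visit_dim, static_dim, hidden = 10, 4, 32
gru = nn.GRU(visit_dim, hidden, batch_first=True)
head = nn.Linear(hidden + static_dim, 1)  # e.g., readmission risk

visits = torch.randn(1, 6, visit_dim)   # 6 visits for one patient
static = torch.randn(1, static_dim)     # demographics, etc.

_, last_state = gru(visits)             # final hidden state of the sequence
risk_logit = head(torch.cat([last_state[-1], static], dim=1))
```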
arXiv Detail & Related papers (2022-09-09T10:20:54Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.