Interpretable and intervenable ultrasonography-based machine learning
models for pediatric appendicitis
- URL: http://arxiv.org/abs/2302.14460v3
- Date: Fri, 24 Nov 2023 15:25:26 GMT
- Title: Interpretable and intervenable ultrasonography-based machine learning
models for pediatric appendicitis
- Authors: Ričards Marcinkevičs, Patricia Reis Wolfertstetter, Ugne
Klimiene, Kieran Chin-Cheong, Alyssia Paschke, Julia Zerres, Markus
Denzinger, David Niederberger, Sven Wellmann, Ece Ozkan, Christian Knorr,
Julia E. Vogt
- Abstract summary: Appendicitis is among the most frequent reasons for pediatric abdominal surgeries.
Previous decision support systems for appendicitis have focused on clinical, laboratory, scoring, and computed tomography data.
We present interpretable machine learning models for predicting the diagnosis, management and severity of suspected appendicitis using ultrasound images.
- Score: 8.083060080133842
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Appendicitis is among the most frequent reasons for pediatric abdominal
surgeries. Previous decision support systems for appendicitis have focused on
clinical, laboratory, scoring, and computed tomography data and have ignored
abdominal ultrasound, despite its noninvasive nature and widespread
availability. In this work, we present interpretable machine learning models
for predicting the diagnosis, management and severity of suspected appendicitis
using ultrasound images. Our approach utilizes concept bottleneck models (CBM)
that facilitate interpretation and interaction with high-level concepts
understandable to clinicians. Furthermore, we extend CBMs to prediction
problems with multiple views and incomplete concept sets. Our models were
trained on a dataset comprising 579 pediatric patients with 1709 ultrasound
images accompanied by clinical and laboratory data. Results show that our
proposed method enables clinicians to utilize a human-understandable and
intervenable predictive model without compromising performance or requiring
time-consuming image annotation when deployed. For predicting the diagnosis,
the extended multiview CBM attained an AUROC of 0.80 and an AUPR of 0.92,
performing comparably to similar black-box neural networks trained and tested
on the same dataset.
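As a concrete illustration of the concept-bottleneck idea, here is a minimal, hedged sketch (not the authors' code; all layer sizes, view counts, and concept indices are placeholders): per-view encoders feed a shared concept layer, the target head sees only the concepts, and a clinician can overwrite predicted concepts at test time.

```python
# A minimal sketch of a multiview concept bottleneck model (CBM) with
# test-time intervention. Shapes and modules are illustrative only.
import torch
import torch.nn as nn

class MultiviewCBM(nn.Module):
    def __init__(self, feat_dim=64, n_concepts=10, n_classes=2):
        super().__init__()
        # Toy per-view encoder; the paper uses CNNs over ultrasound images.
        self.encoder = nn.Sequential(nn.LazyLinear(feat_dim), nn.ReLU())
        self.concept_head = nn.Linear(feat_dim, n_concepts)
        self.target_head = nn.Linear(n_concepts, n_classes)

    def forward(self, views, interventions=None):
        # views: list of tensors, one per ultrasound view, each (batch, features)
        feats = torch.stack([self.encoder(v) for v in views]).mean(dim=0)
        concepts = torch.sigmoid(self.concept_head(feats))
        if interventions is not None:
            # interventions: {concept_index: value in [0, 1]} set by a clinician
            concepts = concepts.clone()
            for idx, value in interventions.items():
                concepts[:, idx] = value
        return concepts, self.target_head(concepts)

model = MultiviewCBM()
views = [torch.randn(4, 128) for _ in range(3)]           # 3 views per patient
concepts, logits = model(views, interventions={2: 1.0})   # pin concept 2 "present"
```

Because the target head depends on the concepts alone, pinning a concept propagates directly to the prediction, which is what makes the model intervenable.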
Related papers
- Bridging the Diagnostic Divide: Classical Computer Vision and Advanced AI methods for distinguishing ITB and CD through CTE Scans [2.900410045439515]
Radiologists have reached a consensus that the visceral-to-subcutaneous fat ratio is a surrogate biomarker for differentiating between intestinal tuberculosis (ITB) and Crohn's disease (CD).
We propose a novel computer vision algorithm that auto-segments subcutaneous fat on 2D CTE images to automate this ratio calculation.
We trained a ResNet10 model on a dataset of CTE scans with samples from ITB, CD, and normal patients, achieving an accuracy of 75%.
arXiv Detail & Related papers (2024-10-23T17:05:27Z)
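A hedged sketch of the surrogate biomarker from the ITB/CD entry above: once visceral and subcutaneous fat masks are available (random placeholders below; the paper auto-segments them from CTE slices), the ratio is a simple area quotient. The function name and shapes are illustrative.

```python
import numpy as np

def fat_ratio(visceral_mask: np.ndarray, subcutaneous_mask: np.ndarray) -> float:
    """Ratio of visceral to subcutaneous fat area on one axial slice."""
    subcut_area = subcutaneous_mask.sum()
    if subcut_area == 0:
        raise ValueError("empty subcutaneous fat mask")
    return visceral_mask.sum() / subcut_area

rng = np.random.default_rng(0)
visceral = rng.random((256, 256)) > 0.8        # placeholder binary masks,
subcutaneous = rng.random((256, 256)) > 0.7    # not real segmentations
print(f"V/S fat ratio: {fat_ratio(visceral, subcutaneous):.2f}")
```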
- Multi-task Learning Approach for Intracranial Hemorrhage Prognosis [0.0]
We propose a 3D multi-task image model to predict prognosis, Glasgow Coma Scale and age, improving accuracy and interpretability.
Our method outperforms current state-of-the-art baseline image models, and demonstrates superior performance in ICH prognosis compared to four board-certified neuroradiologists using only CT scans as input.
arXiv Detail & Related papers (2024-08-16T14:56:17Z)
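A minimal sketch of the multi-task setup in the entry above, assuming a shared 3D backbone with one head per target (prognosis, Glasgow Coma Scale, age); the paper's actual architecture and losses are not reproduced here.

```python
import torch
import torch.nn as nn

class MultiTaskICH(nn.Module):
    def __init__(self):
        super().__init__()
        # Tiny stand-in for a real 3D CNN backbone over CT volumes.
        self.backbone = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
        )
        self.prognosis = nn.Linear(8, 2)   # e.g. favorable vs unfavorable
        self.gcs = nn.Linear(8, 1)         # Glasgow Coma Scale (regression)
        self.age = nn.Linear(8, 1)         # age (regression)

    def forward(self, ct):
        h = self.backbone(ct)
        return self.prognosis(h), self.gcs(h), self.age(h)

ct = torch.randn(2, 1, 32, 64, 64)  # (batch, channel, depth, height, width)
prog, gcs, age = MultiTaskICH()(ct)
# Training would sum a classification loss with two regression losses.
```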
- Goal-conditioned reinforcement learning for ultrasound navigation guidance [4.648318344224063]
We propose a novel ultrasound navigation assistance method based on contrastive learning as goal-conditioned reinforcement learning (GCRL).
We augment the previous framework with a novel contrastive patient batching method (CPB) and a data-augmented contrastive loss.
Our method was developed with a large dataset of 789 patients and obtained an average error of 6.56 mm in position and 9.36 degrees in angle.
arXiv Detail & Related papers (2024-05-02T16:01:58Z)
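A hedged sketch of the contrastive ingredient of the entry above, assuming "patient batching" means each batch row comes from a distinct patient: an InfoNCE-style loss pulls a frame embedding toward the goal-view embedding of the same patient and away from other patients' goals. All names and shapes are illustrative.

```python
import torch
import torch.nn.functional as F

def contrastive_goal_loss(frame_emb, goal_emb, temperature=0.1):
    # frame_emb, goal_emb: (batch, dim); row i of each comes from patient i.
    frame_emb = F.normalize(frame_emb, dim=1)
    goal_emb = F.normalize(goal_emb, dim=1)
    logits = frame_emb @ goal_emb.t() / temperature   # (batch, batch) similarities
    labels = torch.arange(len(frame_emb))             # positives on the diagonal
    return F.cross_entropy(logits, labels)

loss = contrastive_goal_loss(torch.randn(8, 32), torch.randn(8, 32))
```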
- Radiology Report Generation Using Transformers Conditioned with Non-imaging Data [55.17268696112258]
This paper proposes a novel multi-modal transformer network that integrates chest x-ray (CXR) images and associated patient demographic information.
The proposed network uses a convolutional neural network to extract visual features from CXRs and a transformer-based encoder-decoder network that combines the visual features with semantic text embeddings of patient demographic information.
arXiv Detail & Related papers (2023-11-18T14:52:26Z)
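A minimal sketch of the architecture described in the entry above, with placeholder dimensions: CNN patch features and embedded demographic tokens are concatenated into the memory of a transformer decoder that generates report tokens. This is an illustration, not the paper's exact network.

```python
import torch
import torch.nn as nn

d_model, vocab = 64, 1000
cnn = nn.Sequential(nn.Conv2d(1, d_model, 16, stride=16), nn.Flatten(2))
token_emb = nn.Embedding(vocab, d_model)
decoder = nn.TransformerDecoder(
    nn.TransformerDecoderLayer(d_model, nhead=4, batch_first=True), num_layers=2)
lm_head = nn.Linear(d_model, vocab)

cxr = torch.randn(2, 1, 224, 224)                  # chest x-ray batch
visual = cnn(cxr).transpose(1, 2)                  # (batch, patches, d_model)
demo = token_emb(torch.randint(0, vocab, (2, 8)))  # embedded demographic text
memory = torch.cat([demo, visual], dim=1)          # fuse the two modalities
report_so_far = token_emb(torch.randint(0, vocab, (2, 12)))
next_token_logits = lm_head(decoder(report_so_far, memory))
```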
- Robust and Interpretable Medical Image Classifiers via Concept Bottleneck Models [49.95603725998561]
We propose a new paradigm to build robust and interpretable medical image classifiers with natural language concepts.
Specifically, we first query clinical concepts from GPT-4, then transform latent image features into explicit concepts with a vision-language model.
arXiv Detail & Related papers (2023-10-04T21:57:09Z)
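A hedged sketch of the concept-scoring step from the entry above: image features are compared against text embeddings of clinical concepts (the paper queries concepts from GPT-4 and embeds them with a vision-language model; random vectors stand in for both here), and an interpretable linear classifier runs on the similarity scores. The concept names are invented examples.

```python
import torch
import torch.nn.functional as F

concepts = ["irregular margin", "posterior shadowing", "calcification"]
concept_text_emb = F.normalize(torch.randn(len(concepts), 512), dim=1)
image_features = F.normalize(torch.randn(4, 512), dim=1)   # from a VLM encoder

concept_scores = image_features @ concept_text_emb.t()     # (4, 3) similarities
classifier = torch.nn.Linear(len(concepts), 2)             # interpretable final layer
logits = classifier(concept_scores)
```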
- LVM-Med: Learning Large-Scale Self-Supervised Vision Models for Medical Imaging via Second-order Graph Matching [59.01894976615714]
We introduce LVM-Med, the first family of deep networks trained on large-scale medical datasets.
We have collected approximately 1.3 million medical images from 55 publicly available datasets.
LVM-Med empirically outperforms a number of state-of-the-art supervised, self-supervised, and foundation models.
arXiv Detail & Related papers (2023-06-20T22:21:34Z)
- This Patient Looks Like That Patient: Prototypical Networks for Interpretable Diagnosis Prediction from Clinical Text [56.32427751440426]
In clinical practice, such models must not only be accurate but also provide doctors with interpretable and helpful results.
We introduce ProtoPatient, a novel method based on prototypical networks and label-wise attention.
We evaluate the model on two publicly available clinical datasets and show that it outperforms existing baselines.
arXiv Detail & Related papers (2022-10-16T10:12:07Z)
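A minimal sketch of the prototype idea behind ProtoPatient, with invented sizes and label-wise attention omitted: each label owns a learned prototype, documents are scored by negative distance to each prototype, and the closest prototype can be surfaced to the doctor as evidence.

```python
import torch

n_labels, dim = 5, 64
prototypes = torch.nn.Parameter(torch.randn(n_labels, dim))  # one per diagnosis
doc_emb = torch.randn(3, dim)                        # encoded clinical notes

dists = torch.cdist(doc_emb, prototypes)             # (3, 5) distances
logits = -dists                                      # closer prototype = higher score
pred = logits.argmax(dim=1)
```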
- An Ensemble Approach for Patient Prognosis of Head and Neck Tumor Using Multimodal Data [0.0]
We propose a multimodal network that ensembles deep multi-task logistic regression (MTLR), Cox proportional hazard (CoxPH), and CNN models to predict prognostic outcomes for patients with head and neck tumors.
Our proposed ensemble achieves a C-index of 0.72 on the HECKTOR test set, which earned first place in the prognosis task of the HECKTOR challenge.
arXiv Detail & Related papers (2022-02-25T07:50:59Z)
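A hedged sketch of the reported metric: Harrell's concordance index (C-index) counts, over all comparable patient pairs, how often the model assigns the higher risk to the patient whose event occurs first. The data below are toy values, not HECKTOR results.

```python
import numpy as np

def concordance_index(times, events, risks):
    concordant, comparable = 0.0, 0
    for i in range(len(times)):
        for j in range(len(times)):
            if times[i] < times[j] and events[i] == 1:  # i's event observed first
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1
                elif risks[i] == risks[j]:
                    concordant += 0.5   # ties count half, per Harrell
    return concordant / comparable

times = np.array([5.0, 8.0, 3.0, 10.0])
events = np.array([1, 0, 1, 1])         # 1 = event observed, 0 = censored
risks = np.array([0.9, 0.3, 0.8, 0.1])  # model risk scores
print(f"C-index: {concordance_index(times, events, risks):.2f}")
```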
- Multi-task fusion for improving mammography screening data classification [3.7683182861690843]
We propose a pipeline approach in which we first train a set of individual, task-specific models.
We then investigate fusing these models, in contrast to the standard model-ensembling strategy.
Our fusion approaches improve AUC scores significantly by up to 0.04 compared to standard model ensembling.
arXiv Detail & Related papers (2021-12-01T13:56:27Z)
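A minimal sketch of the contrast drawn in the entry above, with made-up models: standard ensembling averages per-model probabilities, while the fusion variant concatenates task-specific features and trains a new head on top. The paper's exact fusion scheme may differ.

```python
import torch
import torch.nn as nn

feats_a = torch.randn(16, 32)   # features from task-specific model A
feats_b = torch.randn(16, 32)   # features from task-specific model B
prob_a, prob_b = torch.rand(16), torch.rand(16)

ensemble_prob = (prob_a + prob_b) / 2            # standard model ensembling

fusion_head = nn.Sequential(nn.Linear(64, 16), nn.ReLU(), nn.Linear(16, 1))
fusion_logit = fusion_head(torch.cat([feats_a, feats_b], dim=1))  # model fusion
```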
- BI-RADS-Net: An Explainable Multitask Learning Approach for Cancer Diagnosis in Breast Ultrasound Images [69.41441138140895]
This paper introduces BI-RADS-Net, a novel explainable deep learning approach for cancer detection in breast ultrasound images.
The proposed approach incorporates tasks for explaining and classifying breast tumors, by learning feature representations relevant to clinical diagnosis.
Explanations of the predictions (benign or malignant) are provided in terms of morphological features that are used by clinicians for diagnosis and reporting in medical practice.
arXiv Detail & Related papers (2021-10-05T19:14:46Z)
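A hedged sketch of the explain-and-classify idea from the BI-RADS-Net entry above: a shared encoder feeds one head per morphological descriptor plus a benign/malignant head, so every prediction is accompanied by clinician-readable feature estimates. Descriptor names and sizes are illustrative, not the paper's exact outputs.

```python
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                        nn.AdaptiveAvgPool2d(1), nn.Flatten())
heads = nn.ModuleDict({
    "shape": nn.Linear(16, 3),        # e.g. oval / round / irregular
    "margin": nn.Linear(16, 5),       # e.g. circumscribed ... spiculated
    "malignancy": nn.Linear(16, 2),   # benign vs malignant
})
us = torch.randn(2, 1, 128, 128)      # breast ultrasound batch
h = encoder(us)
outputs = {name: head(h) for name, head in heads.items()}
```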
- Deep Co-Attention Network for Multi-View Subspace Learning [73.3450258002607]
We propose a deep co-attention network for multi-view subspace learning.
It aims to extract both the common information and the complementary information in an adversarial setting.
In particular, it uses a novel cross reconstruction loss and leverages the label information to guide the construction of the latent representation.
arXiv Detail & Related papers (2021-02-15T18:46:44Z)
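A minimal sketch of the cross-reconstruction idea in the entry above: each view is encoded, and a decoder must rebuild one view from the other view's latent code, which pushes the latents toward information shared across views. The adversarial setting and label guidance are omitted; all dimensions are placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

enc_a, enc_b = nn.Linear(100, 20), nn.Linear(80, 20)   # per-view encoders
dec_a, dec_b = nn.Linear(20, 100), nn.Linear(20, 80)   # per-view decoders

view_a, view_b = torch.randn(8, 100), torch.randn(8, 80)
z_a, z_b = enc_a(view_a), enc_b(view_b)
# Reconstruct each view from the *other* view's latent representation.
cross_recon_loss = (F.mse_loss(dec_a(z_b), view_a) +
                    F.mse_loss(dec_b(z_a), view_b))
```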
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.