A Machine Learning Case Study for AI-empowered echocardiography of
Intensive Care Unit Patients in low- and middle-income countries
- URL: http://arxiv.org/abs/2212.14510v1
- Date: Fri, 30 Dec 2022 01:41:48 GMT
- Title: A Machine Learning Case Study for AI-empowered echocardiography of
Intensive Care Unit Patients in low- and middle-income countries
- Authors: Miguel Xochicale, Louise Thwaites, Sophie Yacoub, Luigi Pisani,
Phung Tran Huy Nhat, Hamideh Kerdegari, Andrew King, Alberto Gomez
- Abstract summary: We present a Machine Learning (ML) case study to illustrate the challenges of clinical translation for a real-time AI-empowered echocardiography system with data of ICU patients in LMICs.
The case study covers data preparation, curation and labelling of 2D ultrasound videos from 31 ICU patients in LMICs, and the selection, validation and deployment of three thinner neural networks to classify the apical four-chamber view.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present a Machine Learning (ML) case study that illustrates the
challenges of clinical translation for a real-time AI-empowered echocardiography
system using data from ICU patients in LMICs. The case study covers data
preparation, curation and labelling of 2D ultrasound videos from 31 ICU patients
in LMICs, together with the selection, validation and deployment of three thinner
neural networks to classify the apical four-chamber view (4CV). The ML heuristics
showed promising implementation, validation and application of thinner networks
for classifying 4CV with limited datasets. We conclude by noting the need for
(a) datasets with greater diversity of demographics and diseases, and (b) further
investigation of thinner models that can run on low-cost hardware for clinical
translation in ICUs in LMICs. The code and other resources to reproduce this work
are available at
https://github.com/vital-ultrasound/ai-assisted-echocardiography-for-low-resource-countries.
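As a rough illustration of the kind of frame-level classifier the abstract describes, the sketch below defines a deliberately thin convolutional network that labels a single greyscale ultrasound frame as apical four-chamber view or not. It assumes PyTorch and 128x128 inputs; the layer sizes are illustrative assumptions and are not the three architectures evaluated in the paper.

```python
# Minimal sketch of a "thin" frame-level 4CV classifier (assumed PyTorch,
# greyscale 128x128 frames); layer widths are illustrative, not the paper's.
import torch
import torch.nn as nn

class Thin4CVClassifier(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),          # global average pooling keeps the head small
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

if __name__ == "__main__":
    model = Thin4CVClassifier()
    frames = torch.randn(4, 1, 128, 128)       # a batch of 4 greyscale frames
    print(model(frames).shape)                  # torch.Size([4, 2])
    print(sum(p.numel() for p in model.parameters()), "parameters")
```

Parameter count is the relevant budget here: a network this small can plausibly run in real time on the low-cost hardware the abstract targets.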
Related papers
- When Raw Data Prevails: Are Large Language Model Embeddings Effective in Numerical Data Representation for Medical Machine Learning Applications? [8.89829757177796]
We examine the effectiveness of vector representations from last hidden states of Large Language Models for medical diagnostics and prognostics.
We focus on instruction-tuned LLMs in a zero-shot setting to represent abnormal physiological data and evaluate their utilities as feature extractors.
Although the findings suggest that raw data features still prevail in medical ML tasks, zero-shot LLM embeddings demonstrate competitive results.
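As a hedged sketch of the comparison this entry describes, the snippet below serialises numerical vitals as text, mean-pools the last hidden state of a stand-in language model into a fixed-length embedding, and fits the same simple classifier on both the raw numbers and the embeddings. The model name (gpt2), feature names and toy data are assumptions, not the paper's setup.

```python
# Raw numerical features vs. LLM last-hidden-state embeddings as inputs to a
# simple classifier; model name and toy data are illustrative assumptions.
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.linear_model import LogisticRegression

def embed(texts, model_name="gpt2"):
    tok = AutoTokenizer.from_pretrained(model_name)
    tok.pad_token = tok.pad_token or tok.eos_token            # gpt2 has no pad token
    model = AutoModel.from_pretrained(model_name).eval()
    batch = tok(texts, return_tensors="pt", padding=True)
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state             # (B, T, D)
    mask = batch["attention_mask"].unsqueeze(-1)              # ignore padding positions
    return ((hidden * mask).sum(1) / mask.sum(1)).numpy()     # mean-pooled (B, D)

# Toy example: raw vitals and their textual serialisation for four patients.
raw = [[38.9, 120], [36.8, 72], [39.4, 132], [37.0, 68]]      # [temperature, heart rate]
texts = [f"temperature {t} C, heart rate {hr} bpm" for t, hr in raw]
labels = [1, 0, 1, 0]

LogisticRegression().fit(raw, labels)                         # raw-feature baseline
LogisticRegression().fit(embed(texts), labels)                # LLM-embedding variant
```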
arXiv Detail & Related papers (2024-08-15T03:56:40Z)
- Unsupervised Training of Neural Cellular Automata on Edge Devices [2.5462695047893025]
We implement Cellular Automata training directly on smartphones for accessible X-ray lung segmentation.
We confirm the practicality and feasibility of deploying and training these advanced models on five Android devices.
In extreme cases where no digital copy is available and images must be captured by a phone from an X-ray lightbox or monitor, VWSL enhances Dice accuracy by 5-20%.
arXiv Detail & Related papers (2024-07-25T15:21:54Z)
- Dr-LLaVA: Visual Instruction Tuning with Symbolic Clinical Grounding [53.629132242389716]
Vision-Language Models (VLMs) can support clinicians by analyzing medical images and engaging in natural language interactions.
VLMs often exhibit "hallucinogenic" behavior, generating textual outputs not grounded in contextual multimodal information.
We propose a new alignment algorithm that uses symbolic representations of clinical reasoning to ground VLMs in medical knowledge.
arXiv Detail & Related papers (2024-05-29T23:19:28Z)
- Self-Supervised Multiple Instance Learning for Acute Myeloid Leukemia Classification [1.1874560263468232]
Diseases like Acute Myeloid Leukemia (AML) pose challenges due to scarce and costly annotations on a single-cell level.
Multiple Instance Learning (MIL) addresses weakly labeled scenarios but necessitates powerful encoders typically trained with labeled data.
In this study, we explore Self-Supervised Learning (SSL) as a pre-training approach for MIL-based subtype AML classification from blood smears.
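A minimal sketch of the MIL setting this entry refers to, assuming per-cell embeddings come from a (hypothetical) self-supervised encoder: an attention module weights the cells in a patient-level bag and a linear head predicts the subtype from the pooled embedding. Dimensions and the class count are illustrative.

```python
# Attention-based MIL pooling over per-cell embeddings; only the bag (patient)
# carries a label. Embedding size and number of classes are assumptions.
import torch
import torch.nn as nn

class AttentionMIL(nn.Module):
    def __init__(self, embed_dim: int = 256, num_classes: int = 3):
        super().__init__()
        self.attention = nn.Sequential(nn.Linear(embed_dim, 128), nn.Tanh(), nn.Linear(128, 1))
        self.classifier = nn.Linear(embed_dim, num_classes)

    def forward(self, instances: torch.Tensor) -> torch.Tensor:
        # instances: (num_cells, embed_dim) for one patient "bag"
        weights = torch.softmax(self.attention(instances), dim=0)    # (num_cells, 1)
        bag_embedding = (weights * instances).sum(dim=0)             # weighted average
        return self.classifier(bag_embedding)                        # bag-level logits

# One patient with 500 cells, embedded by a (hypothetical) SSL-pretrained encoder.
cells = torch.randn(500, 256)
print(AttentionMIL()(cells).shape)   # torch.Size([3])
```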
arXiv Detail & Related papers (2024-03-08T15:16:15Z)
- Evaluating the Fairness of the MIMIC-IV Dataset and a Baseline Algorithm: Application to the ICU Length of Stay Prediction [65.268245109828]
This paper uses the MIMIC-IV dataset to examine the fairness and bias in an XGBoost binary classification model predicting the ICU length of stay.
The research reveals class imbalances in the dataset across demographic attributes and employs data preprocessing and feature extraction.
The paper concludes with recommendations for fairness-aware machine learning techniques for mitigating biases and the need for collaborative efforts among healthcare professionals and data scientists.
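The snippet below is a hedged sketch of this kind of fairness audit: an XGBoost binary classifier for prolonged ICU stay is trained and its recall is compared across a binary demographic attribute. The synthetic data, feature count and labels are assumptions standing in for MIMIC-IV.

```python
# XGBoost length-of-stay classifier with a per-group recall comparison;
# the synthetic data below is a stand-in for MIMIC-IV, not the real dataset.
import numpy as np
from xgboost import XGBClassifier
from sklearn.metrics import recall_score

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 10))                     # stand-in clinical features
group = rng.integers(0, 2, size=n)               # stand-in binary demographic attribute
y = (X[:, 0] + 0.5 * group + rng.normal(scale=0.5, size=n) > 0.5).astype(int)

model = XGBClassifier(n_estimators=100, max_depth=3, eval_metric="logloss")
model.fit(X, y)
pred = model.predict(X)

# Per-group recall: large gaps suggest the model under-serves one subgroup.
for g in (0, 1):
    mask = group == g
    print(f"group {g}: recall = {recall_score(y[mask], pred[mask]):.3f}")
```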
arXiv Detail & Related papers (2023-12-31T16:01:48Z)
- Radiology Report Generation Using Transformers Conditioned with Non-imaging Data [55.17268696112258]
This paper proposes a novel multi-modal transformer network that integrates chest x-ray (CXR) images and associated patient demographic information.
The proposed network uses a convolutional neural network to extract visual features from CXRs and a transformer-based encoder-decoder network that combines the visual features with semantic text embeddings of patient demographic information.
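As a rough sketch of the fusion idea, assuming PyTorch: a tiny stand-in CNN turns a chest X-ray into a grid of visual tokens, demographic text tokens are embedded to the same width, and a transformer encoder attends over the concatenated sequence; the paper's full model would pass this on to a decoder that generates the report. All sizes below are illustrative assumptions.

```python
# Fusing CXR visual tokens with demographic text embeddings in one transformer
# encoder; backbone, vocabulary and dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class FusionEncoder(nn.Module):
    def __init__(self, d_model: int = 256, vocab_size: int = 1000):
        super().__init__()
        self.cnn = nn.Sequential(                       # tiny stand-in visual backbone
            nn.Conv2d(1, d_model, kernel_size=7, stride=4, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool2d(7),                    # 7x7 = 49 visual tokens
        )
        self.text_embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, image: torch.Tensor, demo_tokens: torch.Tensor) -> torch.Tensor:
        visual = self.cnn(image).flatten(2).transpose(1, 2)    # (B, 49, d_model)
        text = self.text_embed(demo_tokens)                    # (B, T, d_model)
        return self.encoder(torch.cat([visual, text], dim=1))  # fused memory for a decoder

model = FusionEncoder()
out = model(torch.randn(2, 1, 224, 224), torch.randint(0, 1000, (2, 6)))
print(out.shape)   # torch.Size([2, 55, 256])
```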
arXiv Detail & Related papers (2023-11-18T14:52:26Z)
- Predicting Cardiovascular Complications in Post-COVID-19 Patients Using Data-Driven Machine Learning Models [0.0]
The COVID-19 pandemic has globally posed numerous health challenges, notably the emergence of post-COVID-19 cardiovascular complications.
This study addresses this by utilizing data-driven machine learning models to predict such complications in 352 post-COVID-19 patients from Iraq.
arXiv Detail & Related papers (2023-09-27T22:52:08Z)
- LVM-Med: Learning Large-Scale Self-Supervised Vision Models for Medical Imaging via Second-order Graph Matching [59.01894976615714]
We introduce LVM-Med, the first family of deep networks trained on large-scale medical datasets.
We have collected approximately 1.3 million medical images from 55 publicly available datasets.
LVM-Med empirically outperforms a number of state-of-the-art supervised, self-supervised, and foundation models.
arXiv Detail & Related papers (2023-06-20T22:21:34Z)
- Vision-Language Modelling For Radiological Imaging and Reports In The Low Data Regime [70.04389979779195]
This paper explores training medical vision-language models (VLMs) where the visual and language inputs are embedded into a common space.
We explore several candidate methods to improve low-data performance, including adapting generic pre-trained models to novel image and text domains.
Using text-to-image retrieval as a benchmark, we evaluate the performance of these methods with variable sized training datasets of paired chest X-rays and radiological reports.
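A small sketch of the text-to-image retrieval benchmark mentioned above, under the assumption that paired report and image embeddings live in a common space: rank images by cosine similarity for each report and compute Recall@K. The random embeddings are placeholders, not the paper's models.

```python
# Recall@K for text-to-image retrieval over paired embeddings; the random
# vectors are placeholders for real report and X-ray embeddings.
import numpy as np

def recall_at_k(text_emb: np.ndarray, image_emb: np.ndarray, k: int = 5) -> float:
    # Row i of each matrix is assumed to be a matched report/X-ray pair.
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    v = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    sims = t @ v.T                                        # (N, N) cosine similarities
    top_k = np.argsort(-sims, axis=1)[:, :k]              # best-matching images per report
    hits = (top_k == np.arange(len(t))[:, None]).any(axis=1)
    return float(hits.mean())

rng = np.random.default_rng(0)
print(recall_at_k(rng.normal(size=(100, 64)), rng.normal(size=(100, 64)), k=5))
```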
arXiv Detail & Related papers (2023-03-30T18:20:00Z)
- Successive Subspace Learning for Cardiac Disease Classification with Two-phase Deformation Fields from Cine MRI [36.044984400761535]
This work proposes a lightweight successive subspace learning framework for CVD classification.
It is based on an interpretable feedforward design, in conjunction with a cardiac atlas.
Compared with 3D CNN-based approaches, our framework achieves superior classification performance with 140× fewer parameters.
arXiv Detail & Related papers (2023-01-21T15:00:59Z)
- VBridge: Connecting the Dots Between Features, Explanations, and Data for Healthcare Models [85.4333256782337]
VBridge is a visual analytics tool that seamlessly incorporates machine learning explanations into clinicians' decision-making workflow.
We identified three key challenges, including clinicians' unfamiliarity with ML features, lack of contextual information, and the need for cohort-level evidence.
We demonstrated the effectiveness of VBridge through two case studies and expert interviews with four clinicians.
arXiv Detail & Related papers (2021-08-04T17:34:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.