Democratizing Artificial Intelligence in Healthcare: A Study of Model
Development Across Two Institutions Incorporating Transfer Learning
- URL: http://arxiv.org/abs/2009.12437v1
- Date: Fri, 25 Sep 2020 21:12:50 GMT
- Authors: Vikash Gupta and Holger Roth and Varun Buch and Marcio A.B.C.
Rockenbach and Richard D White and Dong Yang and Olga Laur and Brian
Ghoshhajra and Ittai Dayan and Daguang Xu and Mona G. Flores and Barbaros
Selnur Erdal
- Abstract summary: Transfer learning (TL) allows a fully trained model from one institution to be fine-tuned by another institution using a much smaller local dataset.
This report describes the challenges, methodology, and benefits of TL within the context of developing an AI model for a basic use-case.
- Score: 8.043077408518826
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The training of deep learning models typically requires extensive data, which
are not readily available as large well-curated medical-image datasets for
development of artificial intelligence (AI) models applied in Radiology.
Recognizing the potential for transfer learning (TL) to allow a fully trained
model from one institution to be fine-tuned by another institution using a much
smaller local dataset, this report describes the challenges, methodology, and
benefits of TL within the context of developing an AI model for a basic
use-case, segmentation of Left Ventricular Myocardium (LVM) on images from
4-dimensional coronary computed tomography angiography. Ultimately, our results
from comparisons of LVM segmentation predicted by a model locally trained using
random initialization, versus one training-enhanced by TL, showed that a
use-case model initiated by TL can be developed with sparse labels with
acceptable performance. This process reduces the time required to build a new
model in the clinical environment at a different institution.
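The TL workflow the abstract describes, warm-starting a local model from another institution's fully trained weights and then fine-tuning on a much smaller local dataset, can be sketched in miniature. The logistic-regression model and synthetic data below are illustrative stand-ins only, not the paper's segmentation network or CT data:

```python
import numpy as np

def train_logreg(X, y, w_init, lr=0.1, steps=200):
    """Gradient-descent logistic regression from a given weight initialization."""
    w = w_init.astype(float).copy()
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))   # sigmoid predictions
        w -= lr * X.T @ (p - y) / len(y)   # average logistic-loss gradient step
    return w

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0, 0.5])        # hypothetical "true" decision rule

# "Institution A": large dataset -> fully trained (pretrained) weights
X_a = rng.normal(size=(2000, 3))
y_a = (X_a @ true_w + rng.normal(scale=0.5, size=2000) > 0).astype(float)
w_pretrained = train_logreg(X_a, y_a, np.zeros(3))

# "Institution B": much smaller local dataset with the same task
X_b = rng.normal(size=(40, 3))
y_b = (X_b @ true_w + rng.normal(scale=0.5, size=40) > 0).astype(float)

# Compare random initialization vs. TL (warm-start) initialization,
# each fine-tuned only briefly on the small local dataset
w_rand = train_logreg(X_b, y_b, rng.normal(size=3), steps=20)
w_tl   = train_logreg(X_b, y_b, w_pretrained, steps=20)

# Evaluate on held-out data
X_test = rng.normal(size=(1000, 3))
y_test = (X_test @ true_w > 0).astype(float)
acc = lambda w: ((X_test @ w > 0) == y_test.astype(bool)).mean()
print(f"random init: {acc(w_rand):.3f}  TL init: {acc(w_tl):.3f}")
```

The point of the sketch is the initialization, not the model class: the TL-initialized run starts near a good solution, so a few fine-tuning steps on sparse local labels suffice, mirroring the paper's comparison of randomly initialized versus TL-enhanced training.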
Related papers
- Leveraging Computational Pathology AI for Noninvasive Optical Imaging Analysis Without Retraining [3.6835809728620634]
Noninvasive optical imaging modalities can probe a patient's tissue in 3D and over time, generating gigabytes of clinically relevant data per sample.
There is a need for AI models to analyze this data and assist clinical workflow.
In this paper we introduce FoundationShift, a method to apply any AI model from computational pathology without retraining.
arXiv Detail & Related papers (2024-11-18T14:35:01Z)
- Automated Generation of High-Quality Medical Simulation Scenarios Through Integration of Semi-Structured Data and Large Language Models [0.0]
This study introduces a transformative framework for medical education by integrating semi-structured data with Large Language Models (LLMs).
The proposed approach utilizes AI to efficiently generate detailed, clinically relevant scenarios that are tailored to specific educational objectives.
arXiv Detail & Related papers (2024-04-30T17:06:11Z)
- Towards a clinically accessible radiology foundation model: open-access and lightweight, with automated evaluation [113.5002649181103]
We train open-source small multimodal models (SMMs) to bridge competency gaps for unmet clinical needs in radiology.
For training, we assemble a large dataset of over 697 thousand radiology image-text pairs.
For evaluation, we propose CheXprompt, a GPT-4-based metric for factuality evaluation, and demonstrate its parity with expert evaluation.
The inference of LLaVA-Rad is fast and can be performed on a single V100 GPU in private settings, offering a promising state-of-the-art tool for real-world clinical applications.
arXiv Detail & Related papers (2024-03-12T18:12:02Z)
- LVM-Med: Learning Large-Scale Self-Supervised Vision Models for Medical Imaging via Second-order Graph Matching [59.01894976615714]
We introduce LVM-Med, the first family of deep networks trained on large-scale medical datasets.
We have collected approximately 1.3 million medical images from 55 publicly available datasets.
LVM-Med empirically outperforms a number of state-of-the-art supervised, self-supervised, and foundation models.
arXiv Detail & Related papers (2023-06-20T22:21:34Z)
- Vision-Language Modelling For Radiological Imaging and Reports In The Low Data Regime [70.04389979779195]
This paper explores training medical vision-language models (VLMs) where the visual and language inputs are embedded into a common space.
We explore several candidate methods to improve low-data performance, including adapting generic pre-trained models to novel image and text domains.
Using text-to-image retrieval as a benchmark, we evaluate the performance of these methods with variable sized training datasets of paired chest X-rays and radiological reports.
arXiv Detail & Related papers (2023-03-30T18:20:00Z)
- Self-supervised Multi-modal Training from Uncurated Image and Reports Enables Zero-shot Oversight Artificial Intelligence in Radiology [31.045221580446963]
We present a model dubbed Medical Cross-attention Vision-Language model (Medical X-VL).
Our model enables various zero-shot tasks for oversight AI, ranging from zero-shot classification to zero-shot error correction.
Our method was especially successful in the data-limited setting, suggesting potential widespread applicability in the medical domain.
arXiv Detail & Related papers (2022-08-10T04:35:58Z)
- A multi-stage machine learning model on diagnosis of esophageal manometry [50.591267188664666]
The framework includes deep-learning models at the swallow-level stage and feature-based machine learning models at the study-level stage.
This is the first artificial-intelligence-style model to automatically predict the CC diagnosis of an HRM study from raw multi-swallow data.
arXiv Detail & Related papers (2021-06-25T20:09:23Z)
- About Explicit Variance Minimization: Training Neural Networks for Medical Imaging With Limited Data Annotations [2.3204178451683264]
The Variance Aware Training (VAT) method exploits this property by introducing the variance error into the model loss function.
We validate VAT on three medical imaging datasets from diverse domains and various learning objectives.
arXiv Detail & Related papers (2021-05-28T21:34:04Z)
- Many-to-One Distribution Learning and K-Nearest Neighbor Smoothing for Thoracic Disease Identification [83.6017225363714]
Deep learning has become the most powerful computer-aided diagnosis technology for improving disease identification performance.
For chest X-ray imaging, annotating large-scale data requires professional domain knowledge and is time-consuming.
In this paper, we propose many-to-one distribution learning (MODL) and K-nearest neighbor smoothing (KNNS) methods to improve a single model's disease identification performance.
arXiv Detail & Related papers (2021-02-26T02:29:30Z)
- Interpretable and synergistic deep learning for visual explanation and statistical estimations of segmentation of disease features from medical images [0.0]
Deep learning (DL) models for disease classification or segmentation from medical images are increasingly trained using transfer learning (TL) from unrelated natural world images.
We report detailed comparisons and rigorous statistical analysis of widely used DL architectures for binary segmentation after TL.
A free GitHub repository of TII and LMI models, code and more than 10,000 medical images and their Grad-CAM output from this study can be used as starting points for advanced computational medicine.
arXiv Detail & Related papers (2020-11-11T14:08:17Z)
- Self-Training with Improved Regularization for Sample-Efficient Chest X-Ray Classification [80.00316465793702]
We present a deep learning framework that enables robust modeling in challenging scenarios.
Our results show that using 85% less labeled data, we can build predictive models that match the performance of classifiers trained in a large-scale data setting.
arXiv Detail & Related papers (2020-05-03T02:36:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.