Artificial Intelligence-Based Methods for Fusion of Electronic Health
Records and Imaging Data
- URL: http://arxiv.org/abs/2210.13462v1
- Date: Sun, 23 Oct 2022 07:13:37 GMT
- Title: Artificial Intelligence-Based Methods for Fusion of Electronic Health
Records and Imaging Data
- Authors: Farida Mohsen, Hazrat Ali, Nady El Hajj, Zubair Shah
- Abstract summary: We focus on synthesizing and analyzing the literature that uses AI techniques to fuse multimodal medical data for different clinical applications.
We present a comprehensive analysis of the various fusion strategies, the diseases and clinical outcomes for which multimodal fusion was used, and the available multimodal medical datasets.
- Score: 0.9749560288448113
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Healthcare data are inherently multimodal, including electronic health
records (EHR), medical images, and multi-omics data. Combining these multimodal
data sources contributes to a better understanding of human health and provides
optimal personalized healthcare. Advances in artificial intelligence (AI)
technologies, particularly machine learning (ML), enable the fusion of these
different data modalities to provide multimodal insights. To this end, in this
scoping review, we focus on synthesizing and analyzing the literature that uses
AI techniques to fuse multimodal medical data for different clinical
applications. More specifically, we focus on studies that fused only EHR and
medical imaging data to develop various AI methods for clinical applications.
We present a comprehensive analysis of the various fusion strategies, the
diseases and clinical outcomes for which multimodal fusion was used, the ML
algorithms used to perform multimodal fusion for each clinical application, and
the available multimodal medical datasets. We followed the PRISMA-ScR
guidelines. We searched Embase, PubMed, Scopus, and Google Scholar to retrieve
relevant studies. We extracted data from 34 studies that fulfilled the
inclusion criteria. In our analysis, a typical workflow was observed: feeding
raw data, fusing the different data modalities by applying conventional ML or
deep learning (DL) algorithms, and finally, evaluating the multimodal fusion
through clinical outcome predictions. Early fusion was the most commonly used
technique for multimodal learning (22 of 34 studies). Across studies,
multimodal fusion models outperformed single-modality models on the same tasks.
In terms of clinical outcomes, disease diagnosis and prediction were the most
common applications (reported in 20 and 10 studies, respectively).
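As a concrete illustration of the dominant pattern above, here is a minimal early-fusion sketch: tabular EHR features are concatenated with image-derived features before a single classifier is trained. All data, dimensions, and feature names are synthetic and illustrative, not drawn from any reviewed study.

```python
# Early fusion: concatenate EHR features with image-derived features
# *before* training a single model. Synthetic data throughout.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 500
ehr = rng.normal(size=(n, 12))   # e.g., age, labs, vitals (tabular EHR)
img = rng.normal(size=(n, 64))   # e.g., embeddings from a pretrained CNN
y = (ehr[:, 0] + img[:, :3].sum(axis=1) + rng.normal(size=n) > 0).astype(int)

# Early fusion = simple feature-level concatenation.
X = np.concatenate([ehr, img], axis=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

scaler = StandardScaler().fit(X_tr)
clf = LogisticRegression(max_iter=1000).fit(scaler.transform(X_tr), y_tr)
print("held-out accuracy:", clf.score(scaler.transform(X_te), y_te))
```

Joint and late fusion differ only in where the combination happens: inside a model's intermediate layers, or over per-modality predictions.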
Related papers
- Automated Ensemble Multimodal Machine Learning for Healthcare [52.500923923797835]
We introduce a multimodal framework, AutoPrognosis-M, that enables the integration of structured clinical (tabular) data and medical imaging using automated machine learning.
AutoPrognosis-M incorporates 17 imaging models, including convolutional neural networks and vision transformers, and three distinct multimodal fusion strategies.
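For contrast with the early-fusion sketch above, a minimal late-fusion baseline (one of the strategy families AutoPrognosis-M covers) trains a separate model per modality and averages their predicted probabilities. Synthetic data; this is not AutoPrognosis-M's actual pipeline.

```python
# Late fusion: one model per modality, then combine predicted
# probabilities. Data and dimensions are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 500
tab = rng.normal(size=(n, 12))   # structured clinical (tabular) features
img = rng.normal(size=(n, 64))   # imaging-model embeddings
y = ((tab[:, 0] + img[:, 0]) > 0).astype(int)

idx_tr, idx_te = train_test_split(np.arange(n), random_state=1)
tab_model = LogisticRegression(max_iter=1000).fit(tab[idx_tr], y[idx_tr])
img_model = RandomForestClassifier(random_state=1).fit(img[idx_tr], y[idx_tr])

# Combine per-modality probabilities (here: a simple average).
p = (tab_model.predict_proba(tab[idx_te])[:, 1]
     + img_model.predict_proba(img[idx_te])[:, 1]) / 2
print("late-fusion accuracy:", ((p > 0.5).astype(int) == y[idx_te]).mean())
```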
arXiv Detail & Related papers (2024-07-25T17:46:38Z)
- HyperFusion: A Hypernetwork Approach to Multimodal Integration of Tabular and Medical Imaging Data for Predictive Modeling [4.44283662576491]
We present a novel framework based on hypernetworks to fuse clinical imaging and tabular data by conditioning the image processing on the EHR's values and measurements.
We show that our framework outperforms both single-modality models and state-of-the-art MRI-tabular data fusion methods.
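A toy PyTorch sketch of the hypernetwork idea, assuming the mechanism described above: a small network maps the tabular (EHR) input to the weights of the image head, so image processing is conditioned per patient. The dimensions and the single-linear image head are illustrative assumptions, not HyperFusion's actual architecture.

```python
# Hypernetwork-style conditioning: a small MLP generates the weights of
# the image head from the tabular (EHR) input. Toy dimensions only.
import torch
import torch.nn as nn

class HyperFusionToy(nn.Module):
    def __init__(self, tab_dim=12, img_dim=64, hidden=32):
        super().__init__()
        # Hypernetwork: tabular features -> weights + bias of the image head.
        self.hyper = nn.Sequential(
            nn.Linear(tab_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, img_dim + 1),  # img_dim weights + 1 bias
        )

    def forward(self, tab, img):
        params = self.hyper(tab)              # (B, img_dim + 1)
        w, b = params[:, :-1], params[:, -1]  # per-sample weights and bias
        logits = (w * img).sum(dim=1) + b     # EHR-conditioned image head
        return logits

model = HyperFusionToy()
tab = torch.randn(8, 12)   # batch of EHR feature vectors
img = torch.randn(8, 64)   # batch of image embeddings
print(model(tab, img).shape)  # torch.Size([8])
```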
arXiv Detail & Related papers (2024-03-20T05:50:04Z)
- OpenMEDLab: An Open-source Platform for Multi-modality Foundation Models in Medicine [55.29668193415034]
We present OpenMEDLab, an open-source platform for multi-modality foundation models.
It brings together pioneering approaches to prompting and fine-tuning large language and vision models for frontline clinical and bioinformatic applications.
It provides access to a group of pre-trained foundation models covering various medical imaging modalities, clinical text, protein engineering, and more.
arXiv Detail & Related papers (2024-02-28T03:51:02Z)
- Review of multimodal machine learning approaches in healthcare [0.0]
Clinicians rely on a variety of data sources to make informed decisions.
Recent advances in machine learning have facilitated the more efficient incorporation of multimodal data.
arXiv Detail & Related papers (2024-02-04T12:21:38Z)
- XAI for In-hospital Mortality Prediction via Multimodal ICU Data [57.73357047856416]
We propose an efficient, explainable AI solution for predicting in-hospital mortality via multimodal ICU data.
We employ multimodal learning in our framework, which can receive heterogeneous inputs from clinical data and make decisions.
Our framework can be easily transferred to other clinical tasks, which facilitates the discovery of crucial factors in healthcare research.
arXiv Detail & Related papers (2023-12-29T14:28:04Z)
- INSPECT: A Multimodal Dataset for Pulmonary Embolism Diagnosis and Prognosis [19.32686665459374]
We introduce INSPECT, which contains de-identified longitudinal records from a large cohort of patients at risk for pulmonary embolism (PE).
INSPECT contains data from 19,402 patients, including CT images, radiology report impression sections, and structured electronic health record (EHR) data (i.e., demographics, diagnoses, procedures, vitals, and medications).
arXiv Detail & Related papers (2023-11-17T07:28:16Z)
- Incomplete Multimodal Learning for Complex Brain Disorders Prediction [65.95783479249745]
We propose a new incomplete multimodal data integration approach that employs transformers and generative adversarial networks.
We apply our new method to predict cognitive degeneration and disease outcomes using multimodal imaging and genetics data from the Alzheimer's Disease Neuroimaging Initiative cohort.
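A minimal sketch of handling an incomplete modality: the paper combines transformers and GANs, whereas this toy uses a plain MLP generator to impute the missing imaging representation from EHR features. All names and shapes are hypothetical.

```python
# Imputing a missing modality with a conditional generator (toy MLP
# stand-in for the paper's transformer/GAN machinery).
import torch
import torch.nn as nn

gen = nn.Sequential(nn.Linear(12, 32), nn.ReLU(), nn.Linear(32, 64))

def fuse(ehr, img=None):
    # When the imaging modality is absent, synthesize it from EHR.
    if img is None:
        img = gen(ehr)
    return torch.cat([ehr, img], dim=1)

ehr = torch.randn(8, 12)
print(fuse(ehr).shape)                       # (8, 76) with imputed imaging
print(fuse(ehr, torch.randn(8, 64)).shape)   # (8, 76) with real imaging
```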
arXiv Detail & Related papers (2023-05-25T16:29:16Z)
- Medical Diagnosis with Large Scale Multimodal Transformers: Leveraging Diverse Data for More Accurate Diagnosis [0.15776842283814416]
We present a new technical approach termed "learnable synergies."
Our approach is easily scalable and naturally adapts to multimodal data inputs from clinical routine.
It outperforms state-of-the-art models in clinically relevant diagnosis tasks.
arXiv Detail & Related papers (2022-12-18T20:43:37Z)
- SEMPAI: a Self-Enhancing Multi-Photon Artificial Intelligence for prior-informed assessment of muscle function and pathology [48.54269377408277]
We introduce the Self-Enhancing Multi-Photon Artificial Intelligence (SEMPAI), which integrates hypothesis-driven priors into a data-driven deep learning approach.
SEMPAI performs joint learning of several tasks to enable prediction for small datasets.
SEMPAI outperforms state-of-the-art biomarkers in six of seven predictive tasks, including those with scarce data.
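A generic sketch of the joint multi-task pattern SEMPAI describes: a shared encoder, hand-crafted "prior" features concatenated with learned features, and one head per task trained under a summed loss so small-data tasks borrow strength. Names and dimensions are hypothetical, not SEMPAI's design.

```python
# Joint multi-task learning with prior features: shared encoder,
# per-task heads, one summed loss. Toy dimensions throughout.
import torch
import torch.nn as nn

class MultiTaskWithPriors(nn.Module):
    def __init__(self, raw_dim=128, prior_dim=8, hidden=32, n_tasks=3):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(raw_dim, hidden), nn.ReLU())
        # One small head per task, fed learned + prior features jointly.
        self.heads = nn.ModuleList(
            [nn.Linear(hidden + prior_dim, 1) for _ in range(n_tasks)]
        )

    def forward(self, raw, priors):
        z = torch.cat([self.encoder(raw), priors], dim=1)
        return [head(z).squeeze(1) for head in self.heads]

model = MultiTaskWithPriors()
raw, priors = torch.randn(8, 128), torch.randn(8, 8)
targets = [torch.randn(8) for _ in range(3)]
# Joint loss: sum of per-task losses, shared encoder learns from all tasks.
loss = sum(nn.functional.mse_loss(p, t)
           for p, t in zip(model(raw, priors), targets))
loss.backward()
```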
arXiv Detail & Related papers (2022-10-28T17:03:04Z)
- Multimodal Machine Learning in Precision Health [10.068890037410316]
This review was conducted to summarize this field and identify topics ripe for future research.
We used a combination of content analysis and literature searches to establish search strings, then searched PubMed, Google Scholar, and IEEE Xplore for studies published from 2011 to 2021.
The most common form of information fusion was early fusion. Notably, fusing heterogeneous data improved predictive performance.
arXiv Detail & Related papers (2022-04-10T21:56:07Z)
- Cross-Modal Information Maximization for Medical Imaging: CMIM [62.28852442561818]
In hospitals, data are siloed in specific information systems that make the same information available under different modalities.
This offers unique opportunities to obtain and use at train-time those multiple views of the same information that might not always be available at test-time.
We propose an innovative framework that makes the most of available data by learning good representations of a multi-modal input that are resilient to modality dropping at test-time.
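A common way to obtain the resilience to modality dropping described above is modality dropout at train time; the sketch below shows that generic mechanism and does not implement CMIM's information-maximization objective. All dimensions are illustrative.

```python
# Modality dropout: randomly zero out whole modalities during training
# so the fused representation never depends on any single one.
import torch
import torch.nn as nn

class DropModalEncoder(nn.Module):
    def __init__(self, dims=(12, 64), hidden=32):
        super().__init__()
        self.encoders = nn.ModuleList([nn.Linear(d, hidden) for d in dims])

    def forward(self, modalities, p_drop=0.3):
        zs = []
        for enc, x in zip(self.encoders, modalities):
            z = torch.relu(enc(x))
            # Drop this modality entirely with probability p_drop.
            if self.training and torch.rand(()).item() < p_drop:
                z = torch.zeros_like(z)
            zs.append(z)
        return torch.stack(zs).mean(dim=0)  # fused representation

enc = DropModalEncoder()
ehr, img = torch.randn(8, 12), torch.randn(8, 64)
print(enc([ehr, img]).shape)  # torch.Size([8, 32])
```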
arXiv Detail & Related papers (2020-10-20T20:05:35Z)