Review of multimodal machine learning approaches in healthcare
- URL: http://arxiv.org/abs/2402.02460v2
- Date: Mon, 12 Feb 2024 01:10:12 GMT
- Title: Review of multimodal machine learning approaches in healthcare
- Authors: Felix Krones, Umar Marikkar, Guy Parsons, Adam Szmul, Adam Mahdi
- Abstract summary: Clinicians rely on a variety of data sources to make informed decisions.
Recent advances in machine learning have facilitated the more efficient incorporation of multimodal data.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Machine learning methods in healthcare have traditionally focused on using
data from a single modality, limiting their ability to effectively replicate
the clinical practice of integrating multiple sources of information for
improved decision making. Clinicians typically rely on a variety of data
sources including patients' demographic information, laboratory data, vital
signs and various imaging data modalities to make informed decisions and
contextualise their findings. Recent advances in machine learning have
facilitated the more efficient incorporation of multimodal data, resulting in
applications that better represent the clinician's approach. Here, we provide a
review of multimodal machine learning approaches in healthcare, offering a
comprehensive overview of recent literature. We discuss the various data
modalities used in clinical diagnosis, with a particular emphasis on imaging
data. We evaluate fusion techniques, explore existing multimodal datasets and
examine common training strategies.
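As a concrete illustration of the fusion techniques evaluated in the review, the following minimal sketch contrasts early fusion (concatenating modality features before a shared classifier) with late fusion (averaging per-modality predictions). It is written in PyTorch; all module names, dimensions and toy inputs are illustrative assumptions, not taken from the paper.

# Hedged sketch: early vs. late fusion for imaging + tabular features.
# All dimensions, names and toy data are illustrative assumptions,
# not taken from the reviewed papers.
import torch
import torch.nn as nn

class EarlyFusion(nn.Module):
    # Concatenate modality features first, then learn a joint mapping.
    def __init__(self, img_dim=512, tab_dim=32, n_classes=2):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Linear(img_dim + tab_dim, 128),
            nn.ReLU(),
            nn.Linear(128, n_classes),
        )

    def forward(self, img_feat, tab_feat):
        return self.classifier(torch.cat([img_feat, tab_feat], dim=-1))

class LateFusion(nn.Module):
    # One prediction head per modality; average the logits at the end.
    def __init__(self, img_dim=512, tab_dim=32, n_classes=2):
        super().__init__()
        self.img_head = nn.Linear(img_dim, n_classes)
        self.tab_head = nn.Linear(tab_dim, n_classes)

    def forward(self, img_feat, tab_feat):
        return 0.5 * (self.img_head(img_feat) + self.tab_head(tab_feat))

img = torch.randn(4, 512)  # e.g. embeddings from a pretrained image encoder
tab = torch.randn(4, 32)   # e.g. standardised labs and vital signs
print(EarlyFusion()(img, tab).shape)  # torch.Size([4, 2])
print(LateFusion()(img, tab).shape)   # torch.Size([4, 2])

Early fusion lets the model learn cross-modality interactions, while late fusion degrades more gracefully when one modality is uninformative; intermediate (joint) fusion variants sit between these two extremes.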
Related papers
- A Survey of Medical Vision-and-Language Applications and Their Techniques [48.268198631277315]
Medical vision-and-language models (MVLMs) have attracted substantial interest due to their capability to offer a natural language interface for interpreting complex medical data.
Here, we provide a comprehensive overview of MVLMs and the various medical tasks to which they have been applied.
We also examine the datasets used for these tasks and compare the performance of different models based on standardized evaluation metrics.
arXiv Detail & Related papers (2024-11-19T03:27:05Z)
- Automated Ensemble Multimodal Machine Learning for Healthcare [52.500923923797835]
We introduce a multimodal framework, AutoPrognosis-M, that enables the integration of structured clinical (tabular) data and medical imaging using automated machine learning.
AutoPrognosis-M incorporates 17 imaging models, including convolutional neural networks and vision transformers, and three distinct multimodal fusion strategies.
arXiv Detail & Related papers (2024-07-25T17:46:38Z)
- A Survey of Deep Learning-based Radiology Report Generation Using Multimodal Data [41.8344712915454]
Automatic radiology report generation can alleviate the workload for physicians and minimize regional disparities in medical resources.
It is a challenging task, as the computational model needs to mimic physicians to obtain information from multi-modal input data.
Recent works have emerged to address this issue using deep learning-based methods such as transformers, contrastive learning, and knowledge-base construction.
This survey summarizes the key techniques developed in the most recent works and proposes a general workflow for deep learning-based report generation.
arXiv Detail & Related papers (2024-05-21T14:37:35Z)
- A Transformer-based representation-learning model with unified processing of multimodal input for clinical diagnostics [63.106382317917344]
We report a Transformer-based representation-learning model as a clinical diagnostic aid that processes multimodal input in a unified manner.
The unified model outperformed an image-only model and non-unified multimodal diagnosis models in the identification of pulmonary diseases.
arXiv Detail & Related papers (2023-06-01T16:23:47Z)
- Incomplete Multimodal Learning for Complex Brain Disorders Prediction [65.95783479249745]
We propose a new incomplete multimodal data integration approach that employs transformers and generative adversarial networks.
We apply our new method to predict cognitive degeneration and disease outcomes using the multimodal imaging-genetics data from the Alzheimer's Disease Neuroimaging Initiative cohort.
arXiv Detail & Related papers (2023-05-25T16:29:16Z)
- Domain Generalization for Mammographic Image Analysis with Contrastive Learning [62.25104935889111]
Training an effective deep learning model requires large datasets with diverse styles and qualities.
A novel contrastive learning scheme is developed to equip deep learning models with better style generalization capability.
The proposed method has been evaluated extensively and rigorously with mammograms from various vendor style domains and several public datasets.
arXiv Detail & Related papers (2023-04-20T11:40:21Z)
- Medical Diagnosis with Large Scale Multimodal Transformers: Leveraging Diverse Data for More Accurate Diagnosis [0.15776842283814416]
We present a new technical approach of "learnable synergies".
Our approach is easily scalable and naturally adapts to multimodal data inputs from clinical routine.
It outperforms state-of-the-art models in clinically relevant diagnosis tasks.
arXiv Detail & Related papers (2022-12-18T20:43:37Z)
- Multimodal Learning for Multi-Omics: A Survey [4.15790071124993]
Multimodal learning for integrative multi-omics analysis can help researchers and practitioners gain deep insights into human diseases.
However, several challenges are hindering development in this area, including the limited availability of easily accessible open-source tools.
This survey aims to provide an up-to-date overview of the data challenges, fusion approaches, datasets, and software tools from several new perspectives.
arXiv Detail & Related papers (2022-11-29T12:08:06Z)
- Artificial Intelligence-Based Methods for Fusion of Electronic Health Records and Imaging Data [0.9749560288448113]
We focus on synthesizing and analyzing the literature that uses AI techniques to fuse multimodal medical data for different clinical applications.
We present a comprehensive analysis of the various fusion strategies, the diseases and clinical outcomes for which multimodal fusion was used, and the available multimodal medical datasets.
arXiv Detail & Related papers (2022-10-23T07:13:37Z)
- Cross-Modal Information Maximization for Medical Imaging: CMIM [62.28852442561818]
In hospitals, data are siloed to specific information systems that make the same information available under different modalities.
This offers unique opportunities to obtain and use at train-time those multiple views of the same information that might not always be available at test-time.
We propose an innovative framework that makes the most of available data by learning good representations of a multi-modal input that are resilient to modality dropping at test-time (see the sketch after this list).
arXiv Detail & Related papers (2020-10-20T20:05:35Z)
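The CMIM entry above motivates representations that stay useful when a modality is unavailable at test time. As referenced in that item, below is a hedged sketch of one common way to approximate this, train-time modality dropout, where a modality's features are randomly zeroed during training so the fused representation cannot rely on any single view. This is an illustrative stand-in, not the authors' method; all names, dimensions and the drop probability are assumptions.

# Hedged sketch of train-time modality dropout, in the spirit of the CMIM
# entry above. Names, dimensions and the drop probability are assumptions.
import torch
import torch.nn as nn

class ModalityDropoutFusion(nn.Module):
    def __init__(self, dims=(512, 32), hidden=128, n_classes=2, p_drop=0.3):
        super().__init__()
        self.p_drop = p_drop
        self.encoders = nn.ModuleList(nn.Linear(d, hidden) for d in dims)
        self.classifier = nn.Linear(hidden, n_classes)

    def forward(self, feats):
        # feats: list of per-modality tensors of shape (batch, dim_i);
        # a None entry marks a modality that is absent at test time.
        encoded = []
        for enc, x in zip(self.encoders, feats):
            if x is None:
                continue  # modality unavailable at inference
            h = enc(x)
            if self.training and torch.rand(()) < self.p_drop:
                h = torch.zeros_like(h)  # randomly drop this view in training
            encoded.append(h)
        # Average surviving views so the fused size is modality-count invariant.
        return self.classifier(torch.stack(encoded).mean(dim=0))

model = ModalityDropoutFusion()
img, tab = torch.randn(4, 512), torch.randn(4, 32)
model.train()
print(model([img, tab]).shape)   # torch.Size([4, 2])
model.eval()
print(model([img, None]).shape)  # inference with the tabular view missing

Because the dropped view is zeroed rather than removed, the classifier sees a fixed-size input during training while still being penalised whenever it leans too heavily on a single modality.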