Multi-Modality Cardiac Image Computing: A Survey
- URL: http://arxiv.org/abs/2208.12881v1
- Date: Fri, 26 Aug 2022 22:19:50 GMT
- Title: Multi-Modality Cardiac Image Computing: A Survey
- Authors: Lei Li and Wangbin Ding and Liqun Huang and Xiahai Zhuang and Vicente Grau
- Abstract summary: Multi-modality cardiac imaging plays a key role in the management of patients with cardiovascular diseases.
Fully-automated processing and quantitative analysis of multi-modality cardiac images could have a direct impact on clinical research and evidence-based patient management.
- Score: 18.92646939242613
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Multi-modality cardiac imaging plays a key role in the management of patients
with cardiovascular diseases. It allows a combination of complementary
anatomical, morphological and functional information, increases diagnosis
accuracy, and improves the efficacy of cardiovascular interventions and
clinical outcomes. Fully-automated processing and quantitative analysis of
multi-modality cardiac images could have a direct impact on clinical research
and evidence-based patient management. However, these require overcoming
significant challenges including inter-modality misalignment and finding
optimal methods to integrate information from different modalities.
This paper aims to provide a comprehensive review of multi-modality imaging
in cardiology, the computing methods, the validation strategies, the related
clinical workflows and future perspectives. For the computing methodologies, we
focus particularly on three tasks, i.e., registration, fusion and segmentation,
which generally involve multi-modality imaging data, either combining
information from different modalities or transferring information across
modalities. The review highlights that multi-modality
cardiac imaging data has the potential of wide applicability in the clinic,
such as trans-aortic valve implantation guidance, myocardial viability
assessment, and catheter ablation therapy and its patient selection.
Nevertheless, many challenges remain unsolved, such as missing modality,
combination of imaging and non-imaging data, and uniform analysis and
representation of different modalities. There is also work to do in defining
how the well-developed techniques fit in clinical workflows and how much
additional and relevant information they introduce. These problems are likely
to remain an active field of research, and their open questions are still to be
answered in the future.
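As a concrete illustration of the registration task highlighted above, the sketch below rigidly aligns two cardiac volumes from different modalities using mutual information, a standard inter-modality similarity metric, via SimpleITK. The file names are placeholders, and this is a generic baseline rather than any specific method from the survey.

```python
import SimpleITK as sitk

# Minimal inter-modality rigid registration with mutual information.
# File names are hypothetical placeholders for co-acquired cardiac volumes.
fixed = sitk.ReadImage("cardiac_ct.nii.gz", sitk.sitkFloat32)
moving = sitk.ReadImage("cardiac_mr.nii.gz", sitk.sitkFloat32)

initial = sitk.CenteredTransformInitializer(
    fixed, moving, sitk.Euler3DTransform(),
    sitk.CenteredTransformInitializerFilter.GEOMETRY)

reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
reg.SetMetricSamplingStrategy(reg.RANDOM)
reg.SetMetricSamplingPercentage(0.1)
reg.SetInterpolator(sitk.sitkLinear)
reg.SetOptimizerAsGradientDescent(learningRate=1.0, numberOfIterations=200)
reg.SetOptimizerScalesFromPhysicalShift()
reg.SetInitialTransform(initial, inPlace=False)

transform = reg.Execute(fixed, moving)
# Resample the moving image into the fixed image's space, e.g. for fusion.
aligned = sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0,
                        moving.GetPixelID())
```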
Related papers
- Enhancing Cardiovascular Disease Prediction through Multi-Modal Self-Supervised Learning [0.17708284654788597]
We propose a comprehensive framework for enhancing cardiovascular disease prediction with limited annotated datasets.
We employ a masked autoencoder to pre-train the electrocardiogram (ECG) encoder, enabling it to extract relevant features from raw ECG data.
We fine-tune the pre-trained encoders on specific predictive tasks, such as myocardial infarction prediction.
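A minimal PyTorch sketch of the masked-autoencoder pre-training idea described here, applied to 1-D ECG signals; the architecture and masking scheme are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class MaskedECGAutoencoder(nn.Module):
    """Masks random time steps of a multi-lead ECG and reconstructs them."""
    def __init__(self, n_leads=12, hidden=128, mask_ratio=0.5):
        super().__init__()
        self.mask_ratio = mask_ratio
        self.encoder = nn.Sequential(
            nn.Conv1d(n_leads, hidden, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=7, padding=3), nn.ReLU())
        self.decoder = nn.Conv1d(hidden, n_leads, kernel_size=7, padding=3)

    def forward(self, x):                      # x: (batch, leads, time)
        # keep-mask: 1 = visible, 0 = masked out
        mask = (torch.rand(x.shape[0], 1, x.shape[2], device=x.device)
                > self.mask_ratio).float()
        z = self.encoder(x * mask)             # encode only the visible signal
        recon = self.decoder(z)
        # reconstruction loss on masked positions only, MAE-style
        loss = ((recon - x) ** 2 * (1 - mask)).sum() / (1 - mask).sum().clamp(min=1)
        return loss, recon
```

After pre-training, `self.encoder` would be kept and fine-tuned on the downstream predictive task.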
arXiv Detail & Related papers (2024-11-08T16:32:30Z)
- Towards a vision foundation model for comprehensive assessment of Cardiac MRI [11.838157772803282]
We introduce a vision foundation model trained for cardiac magnetic resonance imaging (CMR) assessment.
We fine-tune the model in a supervised way for nine clinical tasks typical of a CMR workflow.
We demonstrate improved accuracy and robustness across all tasks, over a range of available labeled dataset sizes.
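Supervised fine-tuning of a foundation model typically attaches lightweight task heads to the pre-trained encoder. The sketch below shows one generic way to do this in PyTorch, with the encoder frozen; whether this paper freezes or fully fine-tunes the backbone is not stated in the summary, so treat this as an assumption.

```python
import torch
import torch.nn as nn

class MultiTaskHeads(nn.Module):
    """Frozen pretrained encoder with one linear head per downstream task.
    Assumes the encoder maps an input batch to (batch, feat_dim) features."""
    def __init__(self, encoder, feat_dim, task_dims):
        super().__init__()
        self.encoder = encoder
        for p in self.encoder.parameters():
            p.requires_grad = False           # keep foundation weights fixed
        self.heads = nn.ModuleDict(
            {name: nn.Linear(feat_dim, dim) for name, dim in task_dims.items()})

    def forward(self, x, task):
        with torch.no_grad():
            feats = self.encoder(x)
        return self.heads[task](feats)
```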
arXiv Detail & Related papers (2024-10-02T15:32:01Z)
- HyperFusion: A Hypernetwork Approach to Multimodal Integration of Tabular and Medical Imaging Data for Predictive Modeling [4.44283662576491]
We present a novel framework based on hypernetworks to fuse clinical imaging and tabular data by conditioning the image processing on the EHR's values and measurements.
We show that our framework outperforms both single-modality models and state-of-the-art MRI-tabular data fusion methods.
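A minimal sketch of the hypernetwork-conditioning idea: a small network maps the tabular record to the weights of a linear layer applied to the image features, so image processing is conditioned on the EHR values. The single-layer design and dimensions are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class HyperFusionBlock(nn.Module):
    """Tabular input generates the parameters of a per-sample linear layer
    that is then applied to the image features."""
    def __init__(self, tab_dim, img_dim, out_dim):
        super().__init__()
        self.img_dim, self.out_dim = img_dim, out_dim
        # hypernetwork: tabular record -> weights and bias of the image layer
        self.hyper = nn.Linear(tab_dim, img_dim * out_dim + out_dim)

    def forward(self, img_feats, tab):   # img_feats: (B, img_dim), tab: (B, tab_dim)
        params = self.hyper(tab)
        w = params[:, : self.img_dim * self.out_dim].view(-1, self.out_dim, self.img_dim)
        b = params[:, self.img_dim * self.out_dim:]
        # batched matrix-vector product: a different linear map per sample
        return torch.bmm(w, img_feats.unsqueeze(-1)).squeeze(-1) + b
```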
arXiv Detail & Related papers (2024-03-20T05:50:04Z)
- QUBIQ: Uncertainty Quantification for Biomedical Image Segmentation Challenge [93.61262892578067]
Uncertainty in medical image segmentation tasks, especially inter-rater variability, presents a significant challenge.
This variability directly impacts the development and evaluation of automated segmentation algorithms.
We report the set-up and summarize the benchmark results of the Quantification of Uncertainties in Biomedical Image Quantification Challenge (QUBIQ).
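One common way to expose inter-rater variability to an algorithm, as benchmarked in QUBIQ-style settings, is to average the raters' binary masks into a soft label; a minimal sketch (the challenge itself may use other encodings):

```python
import numpy as np

def soft_consensus(rater_masks):
    """Average binary masks from several raters into a soft label.
    rater_masks: list of (H, W) binary arrays, one per rater.
    Returns a (H, W) map of the fraction of raters marking each pixel."""
    stack = np.stack(rater_masks).astype(np.float32)
    return stack.mean(axis=0)
```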
arXiv Detail & Related papers (2024-03-19T17:57:24Z)
- Optimizing Skin Lesion Classification via Multimodal Data and Auxiliary Task Integration [54.76511683427566]
This research introduces a novel multimodal method for classifying skin lesions, integrating smartphone-captured images with essential clinical and demographic information.
A distinctive aspect of this method is the integration of an auxiliary task focused on super-resolution image prediction.
Experimental evaluations were conducted on the PAD-UFES-20 dataset using various deep-learning architectures.
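A rough sketch of how an auxiliary super-resolution task can be attached to a shared encoder alongside the main classification head; the heads, output size, and loss weighting below are illustrative assumptions, not the paper's design.

```python
import torch
import torch.nn as nn

class AuxSRClassifier(nn.Module):
    """Shared image encoder; classification head (image + metadata features)
    plus an auxiliary head regressing a toy 64x64 RGB high-res patch."""
    def __init__(self, encoder, feat_dim, meta_dim, n_classes):
        super().__init__()
        self.encoder = encoder
        self.classify = nn.Linear(feat_dim + meta_dim, n_classes)
        self.sr_head = nn.Linear(feat_dim, 3 * 64 * 64)

    def forward(self, img, meta):
        f = self.encoder(img)                           # shared features
        logits = self.classify(torch.cat([f, meta], 1)) # fuse metadata here
        sr = self.sr_head(f).view(-1, 3, 64, 64)
        return logits, sr

# joint objective: main task plus a down-weighted auxiliary term, e.g.
# loss = F.cross_entropy(logits, y) + 0.1 * F.mse_loss(sr, hi_res_target)
```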
arXiv Detail & Related papers (2024-02-16T05:16:20Z)
- Validating polyp and instrument segmentation methods in colonoscopy through Medico 2020 and MedAI 2021 Challenges [58.32937972322058]
"Medico automatic polyp segmentation (Medico 2020)" and "MedAI: Transparency in Medical Image (MedAI 2021)" competitions.
We present a comprehensive summary and analysis of each contribution, highlight the strengths of the best-performing methods, and discuss the possibility of translating such methods into the clinic.
arXiv Detail & Related papers (2023-07-30T16:08:45Z)
- Modality-Agnostic Learning for Medical Image Segmentation Using Multi-modality Self-distillation [1.815047691981538]
We propose a novel framework, Modality-Agnostic learning through Multi-modality Self-distillation (MAG-MS).
MAG-MS distills knowledge from the fusion of multiple modalities and applies it to enhance representation learning for individual modalities.
Our experiments on benchmark datasets demonstrate the high efficiency of MAG-MS and its superior segmentation performance.
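A minimal sketch of the self-distillation idea in this summary: the fused multi-modality branch acts as teacher for each single-modality branch. The temperature-scaled KL formulation below is a standard knowledge-distillation loss and an assumption about the details, not the paper's exact objective.

```python
import torch.nn.functional as F

def self_distillation_loss(fused_logits, unimodal_logits, labels,
                           temp=2.0, alpha=0.5):
    """Distill the fused (multi-modality) prediction into one modality branch."""
    hard = F.cross_entropy(unimodal_logits, labels)       # supervised term
    soft = F.kl_div(                                      # distillation term
        F.log_softmax(unimodal_logits / temp, dim=1),
        F.softmax(fused_logits.detach() / temp, dim=1),   # teacher: fused branch
        reduction="batchmean") * temp ** 2
    return alpha * hard + (1 - alpha) * soft
```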
arXiv Detail & Related papers (2023-06-06T14:48:50Z)
- Deep Multi-modal Fusion of Image and Non-image Data in Disease Diagnosis and Prognosis: A Review [8.014632186417423]
The rapid development of diagnostic technologies in healthcare is leading to higher requirements for physicians to handle and integrate the heterogeneous, yet complementary data produced during routine practice.
With the recent advances in multi-modal deep learning technologies, an increasingly large number of efforts have been devoted to a key question: how do we extract and aggregate multi-modal information to ultimately provide more objective, quantitative computer-aided clinical decision making?
This review includes (1) an overview of current multi-modal learning, (2) a summary of multi-modal fusion methods, (3) a discussion of performance, (4) applications in disease diagnosis and prognosis, and (5) challenges and future directions.
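To make the fusion taxonomy concrete, a minimal sketch of two strategies such reviews typically contrast; encoders, feature names, and dimensions are placeholders.

```python
import torch

def early_fusion(img_feat, tab_feat, head):
    """Concatenate modality features, then predict with one joint head."""
    return head(torch.cat([img_feat, tab_feat], dim=1))

def late_fusion(img_logits, tab_logits):
    """Average per-modality predictions made by independent models."""
    return (img_logits + tab_logits) / 2
```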
arXiv Detail & Related papers (2022-03-25T18:50:03Z)
- Cross-Modal Information Maximization for Medical Imaging: CMIM [62.28852442561818]
In hospitals, data are siloed in specific information systems that make the same information available under different modalities.
This offers a unique opportunity to exploit, at training time, multiple views of the same information that might not always be available at test time.
We propose an innovative framework that makes the most of available data by learning good representations of a multi-modal input that are resilient to modality dropping at test-time.
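Resilience to modality dropping can be encouraged with a simple training-time trick: randomly zeroing out whole modalities so the fused representation learns not to depend on any single one. The sketch below shows only that generic idea, not CMIM's information-maximization objective.

```python
import torch
import torch.nn as nn

class ModalityDropoutFusion(nn.Module):
    """Randomly zeroes whole modalities during training so the fused
    representation stays usable when a modality is missing at test time."""
    def __init__(self, encoders, drop_prob=0.3):
        super().__init__()
        self.encoders = nn.ModuleList(encoders)
        self.drop_prob = drop_prob

    def forward(self, inputs):                 # inputs: one tensor per modality
        feats = []
        for enc, x in zip(self.encoders, inputs):
            f = enc(x)
            if self.training and torch.rand(()) < self.drop_prob:
                f = torch.zeros_like(f)        # simulate a missing modality
            feats.append(f)
        return torch.stack(feats).mean(dim=0)  # fuse by averaging
```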
arXiv Detail & Related papers (2020-10-20T20:05:35Z)
- Robust Medical Instrument Segmentation Challenge 2019 [56.148440125599905]
Intraoperative tracking of laparoscopic instruments is often a prerequisite for computer and robotic-assisted interventions.
Our challenge was based on a surgical data set comprising 10,040 annotated images acquired from a total of 30 surgical procedures.
The results confirm the initial hypothesis, namely that algorithm performance degrades with an increasing domain gap.
arXiv Detail & Related papers (2020-03-23T14:35:08Z)
- Robust Multimodal Brain Tumor Segmentation via Feature Disentanglement and Gated Fusion [71.87627318863612]
We propose a novel multimodal segmentation framework which is robust to the absence of imaging modalities.
Our network uses feature disentanglement to decompose the input modalities into a modality-specific appearance code and a modality-invariant content code.
We validate our method on the important yet challenging multimodal brain tumor segmentation task with the BRATS challenge dataset.
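A minimal sketch of a gated fusion layer of the kind named in the title: each modality's features receive a learned gate before being summed. The gating network and shapes are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    """Learn one gate per modality and fuse features by a gated sum."""
    def __init__(self, dim, n_modalities):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Linear(dim * n_modalities, n_modalities), nn.Sigmoid())

    def forward(self, feats):                   # feats: list of (B, dim) tensors
        g = self.gate(torch.cat(feats, dim=1))  # (B, n_modalities) gate values
        stacked = torch.stack(feats, dim=1)     # (B, n_modalities, dim)
        return (g.unsqueeze(-1) * stacked).sum(dim=1)
```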
arXiv Detail & Related papers (2020-02-22T14:32:04Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.