Whole Heart 3D+T Representation Learning Through Sparse 2D Cardiac MR Images
- URL: http://arxiv.org/abs/2406.00329v2
- Date: Thu, 6 Jun 2024 15:27:12 GMT
- Title: Whole Heart 3D+T Representation Learning Through Sparse 2D Cardiac MR Images
- Authors: Yundi Zhang, Chen Chen, Suprosanna Shit, Sophie Starck, Daniel Rueckert, Jiazhen Pan,
- Abstract summary: We introduce a whole-heart self-supervised learning framework to automatically uncover the correlations between spatial and temporal patches throughout the cardiac stacks.
We train our model on 14,000 unlabeled CMR scans from the UK Biobank and evaluate it on 1,000 annotated scans.
- Score: 13.686473040836113
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Cardiac Magnetic Resonance (CMR) imaging serves as the gold standard for evaluating cardiac morphology and function. Typically, a multi-view CMR stack, covering short-axis (SA) and 2/3/4-chamber long-axis (LA) views, is acquired for a thorough cardiac assessment. However, efficiently streamlining the complex, high-dimensional 3D+T CMR data and distilling a compact, coherent representation remains a challenge. In this work, we introduce a whole-heart self-supervised learning framework that utilizes masked imaging modeling to automatically uncover the correlations between spatial and temporal patches throughout the cardiac stacks. This process facilitates the generation of meaningful and well-clustered heart representations without relying on the traditionally required, and often costly, labeled data. The learned heart representation can be directly used for various downstream tasks. Furthermore, our method demonstrates remarkable robustness, ensuring consistent representations even when certain CMR planes are missing or flawed. We train our model on 14,000 unlabeled CMR scans from the UK Biobank and evaluate it on 1,000 annotated scans. The proposed method demonstrates superior performance to baselines in tasks that demand comprehensive 3D+T cardiac information, e.g., cardiac phenotype (ejection fraction and ventricle volume) prediction and multi-plane/multi-frame CMR segmentation, highlighting its effectiveness in extracting comprehensive cardiac features that are both anatomically and pathologically relevant.
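The patch masking at the core of the framework above can be sketched in a few lines: a cine slice is split into non-overlapping spatio-temporal patches, and a random subset is hidden from the encoder. The patch sizes, the 75% mask ratio, and the array shapes here are illustrative assumptions, not the paper's actual configuration.

```python
# Minimal sketch of masked spatio-temporal patch modeling.
# All shapes and hyperparameters are assumptions for illustration.
import numpy as np

def patchify(stack, pt=2, ph=8, pw=8):
    """Split a (T, H, W) cine slice into non-overlapping (pt, ph, pw) patches."""
    T, H, W = stack.shape
    return (stack
            .reshape(T // pt, pt, H // ph, ph, W // pw, pw)
            .transpose(0, 2, 4, 1, 3, 5)        # group patch axes together
            .reshape(-1, pt * ph * pw))         # (num_patches, patch_dim)

def random_mask(patches, ratio=0.75, seed=0):
    """Return sorted indices of the patches left visible after masking."""
    rng = np.random.default_rng(seed)
    n = patches.shape[0]
    keep = int(n * (1 - ratio))
    return np.sort(rng.permutation(n)[:keep])

stack = np.random.rand(8, 32, 32)   # toy 2D+T cine slice (T, H, W)
patches = patchify(stack)           # (64, 128) patch tokens
visible = random_mask(patches)      # 16 visible token indices
```

In a masked-autoencoder-style setup, only the visible tokens would be encoded and the model trained to reconstruct the masked ones, which is what forces it to learn correlations across space and time.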
Related papers
- CMRxRecon2024: A Multi-Modality, Multi-View K-Space Dataset Boosting Universal Machine Learning for Accelerated Cardiac MRI [39.0162369912624]
The CMRxRecon2024 dataset is the largest and most diverse publicly available cardiac k-space dataset.
It is acquired from 330 healthy volunteers, covering commonly used modalities, anatomical views, and acquisition trajectories in clinical cardiac MRI.
arXiv Detail & Related papers (2024-06-27T09:50:20Z) - Epicardium Prompt-guided Real-time Cardiac Ultrasound Frame-to-volume Registration [50.602074919305636]
This paper introduces a lightweight end-to-end Cardiac Ultrasound frame-to-volume Registration network, termed CU-Reg.
We use epicardium prompt-guided anatomical clues to reinforce the interaction of 2D sparse and 3D dense features, followed by a voxel-wise local-global aggregation of enhanced features.
arXiv Detail & Related papers (2024-06-20T17:47:30Z) - Cardiac Copilot: Automatic Probe Guidance for Echocardiography with World Model [66.35766658717205]
There is a severe shortage of experienced cardiac sonographers, due to the heart's complex structure and significant operational challenges.
We present a Cardiac Copilot system capable of providing real-time probe movement guidance.
The core innovation lies in proposing a data-driven world model, named Cardiac Dreamer, for representing cardiac spatial structures.
We train our model with real-world ultrasound data and corresponding probe motion from 110 routine clinical scans performed by three certified sonographers, comprising 151K sample pairs.
arXiv Detail & Related papers (2024-06-19T02:42:29Z) - CoReEcho: Continuous Representation Learning for 2D+time Echocardiography Analysis [42.810247034149214]
We propose CoReEcho, a novel training framework emphasizing continuous representations tailored for direct EF regression.
CoReEcho: 1) outperforms the current state-of-the-art (SOTA) on the largest echocardiography dataset (EchoNet-Dynamic) with an MAE of 3.90 and an R2 of 82.44, and 2) provides robust and generalizable features that transfer more effectively to related downstream tasks.
arXiv Detail & Related papers (2024-03-15T10:18:06Z) - SAF-Net: Self-Attention Fusion Network for Myocardial Infarction Detection using Multi-View Echocardiography [16.513495618124487]
Myocardial infarction (MI) is a severe case of coronary artery disease (CAD), and its timely detection is essential to prevent progressive damage to the myocardium.
We propose a novel view-fusion model named self-attention fusion network (SAF-Net) to detect MI from multi-view echocardiography recordings.
arXiv Detail & Related papers (2023-09-27T09:38:03Z) - CMRxRecon: An open cardiac MRI dataset for the competition of accelerated image reconstruction [62.61209705638161]
There has been growing interest in deep learning-based CMR imaging algorithms, but such methods require large training datasets.
The CMRxRecon dataset includes multi-contrast, multi-view, multi-slice and multi-coil CMR imaging data from 300 subjects.
arXiv Detail & Related papers (2023-09-19T15:14:42Z) - M(otion)-mode Based Prediction of Ejection Fraction using Echocardiograms [13.112371567924802]
We propose using the M(otion)-mode of echocardiograms for estimating the left ventricular ejection fraction (EF) and classifying cardiomyopathy.
We generate multiple artificial M-mode images from a single echocardiogram and combine them using off-the-shelf model architectures.
Our experiments show that the supervised setting converges with only ten modes and is comparable to the baseline method.
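Generating multiple artificial M-mode images from a single echocardiogram, as described above, amounts to sampling the cine video along lines through the image center at varying angles and stacking the samples over time. The function name, angles, and shapes below are illustrative assumptions, not the paper's implementation.

```python
# Sketch: derive artificial M-mode images (depth x time) from a B-mode cine.
# Shapes and the choice of scan lines are assumptions for illustration.
import numpy as np

def mmode_from_cine(video, angle_deg, num_samples=64):
    """Sample the (T, H, W) video along a line through the image center
    at the given angle; return a (num_samples, T) M-mode image."""
    T, H, W = video.shape
    cy, cx = H / 2.0, W / 2.0
    theta = np.deg2rad(angle_deg)
    r = np.linspace(-min(H, W) / 2 + 1, min(H, W) / 2 - 1, num_samples)
    ys = np.clip((cy + r * np.sin(theta)).astype(int), 0, H - 1)
    xs = np.clip((cx + r * np.cos(theta)).astype(int), 0, W - 1)
    return video[:, ys, xs].T   # depth along rows, time along columns

video = np.random.rand(16, 64, 64)   # toy echo cine (T, H, W)
angles = np.linspace(0, 180, 10, endpoint=False)
mmodes = np.stack([mmode_from_cine(video, a) for a in angles])
```

Each resulting 2D image can then be fed to an off-the-shelf image backbone, which is what makes the approach cheap compared with full video models.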
arXiv Detail & Related papers (2023-09-07T15:00:58Z) - A Comprehensive 3-D Framework for Automatic Quantification of Late Gadolinium Enhanced Cardiac Magnetic Resonance Images [5.947543669357994]
Late gadolinium enhanced (LGE) cardiac magnetic resonance (CMR) can directly visualize nonviable myocardium with hyperenhanced intensities.
For heart attack patients, analyzing and quantifying their LGE CMR images is crucial to guide the choice of appropriate therapy.
To achieve accurate quantification, LGE CMR images need to be processed in two steps: segmentation of the myocardium followed by classification of infarcts.
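The second of these two steps can be sketched with the widely used n-SD thresholding rule, which labels as infarct any myocardial voxel brighter than the mean plus n standard deviations of remote (non-enhanced) myocardium. The paper's actual classification method may differ, and approximating remote tissue by the lower-intensity half of the myocardium is an assumption made here for the sketch.

```python
# Sketch of step 2 (infarct classification) given a myocardium segmentation,
# using the common n-SD rule. The remote-tissue heuristic is an assumption.
import numpy as np

def quantify_infarct(image, myo_mask, n_sd=5.0):
    """Label hyperenhanced (infarct) voxels inside myo_mask and return the
    binary infarct mask plus infarct fraction of the myocardium."""
    myo = image[myo_mask]
    remote = myo[myo <= np.median(myo)]          # approximate remote tissue
    thr = remote.mean() + n_sd * remote.std()    # n-SD intensity threshold
    infarct = myo_mask & (image > thr)
    return infarct, infarct.sum() / myo_mask.sum()

# Toy example: a bright patch inside an otherwise uniform myocardium mask.
img = np.full((10, 10), 100.0)
mask = np.zeros((10, 10), dtype=bool)
mask[2:8, 2:8] = True
img[2:4, 2:8] = 500.0                            # hyperenhanced region
infarct, frac = quantify_infarct(img, mask)
```

In practice the threshold parameters (commonly 2-6 SD, or full-width-at-half-maximum variants) are chosen per protocol, which is why the segmentation and classification steps are kept separate.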
arXiv Detail & Related papers (2022-05-21T11:54:39Z) - A Robust Interpretable Deep Learning Classifier for Heart Anomaly Detection Without Segmentation [37.70077538403524]
We first examine the importance of heart sound segmentation as a prior step for heart sound classification, and then propose a robust classifier for abnormal heart sound detection that requires no segmentation.
Our new classifier is also shown to be robust, stable and most importantly, explainable, with an accuracy of almost 100% on the widely used PhysioNet dataset.
arXiv Detail & Related papers (2020-05-21T06:36:28Z) - On the effectiveness of GAN generated cardiac MRIs for segmentation [12.59275199633534]
We propose a Variational Autoencoder (VAE) trained to learn the latent representations of cardiac shapes, paired with a GAN that uses "SPatially-Adaptive (DE)Normalization" (SPADE) modules to generate realistic MR images tailored to a given anatomical map.
We show that segmentation with CNNs trained on our synthetic annotated images achieves results competitive with traditional techniques.
arXiv Detail & Related papers (2020-05-18T18:48:38Z) - How well do U-Net-based segmentation trained on adult cardiac magnetic resonance imaging data generalise to rare congenital heart diseases for surgical planning? [2.330464988780586]
Planning the optimal time of intervention for pulmonary valve replacement surgery in patients with the congenital heart disease Tetralogy of Fallot (TOF) is mainly based on ventricular volume and function according to current guidelines.
In several grand challenges in recent years, U-Net architectures have shown impressive results on the provided data.
However, in clinical practice, data sets are more diverse considering individual pathologies and image properties derived from different scanner properties.
arXiv Detail & Related papers (2020-02-10T08:50:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed papers (including all information) and is not responsible for any consequences of their use.