Hierarchical Vision Transformers for Cardiac Ejection Fraction
Estimation
- URL: http://arxiv.org/abs/2304.00177v1
- Date: Fri, 31 Mar 2023 23:42:17 GMT
- Title: Hierarchical Vision Transformers for Cardiac Ejection Fraction
Estimation
- Authors: Lhuqita Fazry, Asep Haryono, Nuzulul Khairu Nissa, Sunarno, Naufal
Muhammad Hirzi, Muhammad Febrian Rachmadi, Wisnu Jatmiko
- Abstract summary: We propose a deep learning approach, based on hierarchical vision Transformers, to estimate the ejection fraction from echocardiogram videos.
The proposed method can estimate the ejection fraction without requiring left ventricle segmentation first, making it more efficient than other methods.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The left ventricular ejection fraction is one of the most important metrics of cardiac function. It is used by cardiologists to identify patients who are eligible for life-prolonging therapies. However, the assessment of ejection fraction suffers from inter-observer variability. To overcome this challenge, we propose a deep learning approach, based on hierarchical vision Transformers, to estimate the ejection fraction from echocardiogram videos. The proposed method can estimate the ejection fraction without requiring left ventricle segmentation first, making it more efficient than other methods. We evaluated our method on the EchoNet-Dynamic dataset, obtaining an MAE of 5.59, an RMSE of 7.59, and an $R^2$ of 0.59. These results are better than those of the state-of-the-art method, Ultrasound Video Transformer (UVT). The source code is available at https://github.com/lhfazry/UltraSwin.
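For reference, the reported MAE, RMSE, and $R^2$ follow the standard regression-metric definitions and can be computed from predicted and ground-truth ejection fractions as in the minimal NumPy sketch below (placeholder data, not code from the UltraSwin repository):

```python
import numpy as np

def ef_regression_metrics(ef_true, ef_pred):
    """Standard regression metrics used to report EF estimation quality.

    ef_true, ef_pred: 1-D arrays of ground-truth and predicted ejection
    fractions (in percent), e.g. from the EchoNet-Dynamic test split.
    """
    ef_true = np.asarray(ef_true, dtype=float)
    ef_pred = np.asarray(ef_pred, dtype=float)

    err = ef_pred - ef_true
    mae = np.mean(np.abs(err))                       # mean absolute error
    rmse = np.sqrt(np.mean(err ** 2))                # root mean squared error
    ss_res = np.sum(err ** 2)                        # residual sum of squares
    ss_tot = np.sum((ef_true - ef_true.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot                       # coefficient of determination
    return mae, rmse, r2

# Example with dummy values (not real EchoNet-Dynamic predictions):
mae, rmse, r2 = ef_regression_metrics([55.0, 60.0, 35.0], [57.0, 58.5, 38.0])
```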
Related papers
- CathFlow: Self-Supervised Segmentation of Catheters in Interventional Ultrasound Using Optical Flow and Transformers [66.15847237150909]
We introduce a self-supervised deep learning architecture to segment catheters in longitudinal ultrasound images.
The network architecture builds upon AiAReSeg, a segmentation transformer built with the Attention in Attention mechanism.
We validated our model on a test dataset, consisting of unseen synthetic data and images collected from silicon aorta phantoms.
arXiv Detail & Related papers (2024-03-21T15:13:36Z)
- Semantic-aware Temporal Channel-wise Attention for Cardiac Function Assessment [69.02116920364311]
Existing video-based methods do not pay much attention to the left ventricular region, nor to the left ventricular changes caused by motion.
We propose a semi-supervised auxiliary learning paradigm with a left ventricular segmentation task, which contributes to the representation learning for the left ventricular region.
Our approach achieves state-of-the-art performance on the Stanford dataset with an improvement of 0.22 MAE, 0.26 RMSE, and 1.9% $R^2$.
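The auxiliary-learning idea amounts to a weighted multi-task objective: an EF regression loss plus a segmentation loss on the clips that have LV masks. The PyTorch sketch below illustrates that general pattern under assumed tensor shapes and a hypothetical weight `lambda_seg`; it is not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def multitask_loss(ef_pred, ef_true, mask_logits, mask_true, labeled, lambda_seg=1.0):
    """EF regression loss plus an auxiliary LV-segmentation loss.

    ef_pred, ef_true: (B,) predicted / ground-truth ejection fractions.
    mask_logits:      (B, 1, H, W) predicted LV segmentation logits.
    mask_true:        (B, 1, H, W) LV masks (valid only where `labeled` is True).
    labeled:          (B,) bool flags marking clips that have segmentation labels,
                      so the auxiliary task can be trained semi-supervised.
    """
    reg_loss = F.l1_loss(ef_pred, ef_true)  # main task: EF regression (MAE)

    if labeled.any():
        seg_loss = F.binary_cross_entropy_with_logits(
            mask_logits[labeled], mask_true[labeled])
    else:
        seg_loss = ef_pred.new_zeros(())    # no labeled masks in this batch

    return reg_loss + lambda_seg * seg_loss
```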
arXiv Detail & Related papers (2023-10-09T05:57:01Z)
- M(otion)-mode Based Prediction of Ejection Fraction using Echocardiograms [13.112371567924802]
We propose using the M(otion)-mode of echocardiograms for estimating the left ventricular ejection fraction (EF) and classifying cardiomyopathy.
We generate multiple artificial M-mode images from a single echocardiogram and combine them using off-the-shelf model architectures.
Our experiments show that the supervised setting converges with only ten modes and is comparable to the baseline method.
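Conceptually, an artificial M-mode image samples the B-mode video along a fixed scan line over time. A minimal NumPy sketch of that idea, using a vertical scan line as an illustrative assumption rather than the paper's exact sampling scheme:

```python
import numpy as np

def artificial_m_mode(video, column=None):
    """Extract a simple artificial M-mode image from an echo video.

    video:  array of shape (T, H, W), grayscale frames over time.
    column: index of the vertical scan line; defaults to the image centre.
    Returns an (H, T) image: depth along the scan line versus time.
    """
    T, H, W = video.shape
    if column is None:
        column = W // 2
    return video[:, :, column].T  # (T, H) -> (H, T)

# Multiple M-mode images can be generated by varying the scan line, e.g.:
# m_modes = [artificial_m_mode(video, c) for c in range(0, video.shape[2], 16)]
```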
arXiv Detail & Related papers (2023-09-07T15:00:58Z)
- DopUS-Net: Quality-Aware Robotic Ultrasound Imaging based on Doppler Signal [48.97719097435527]
DopUS-Net combines Doppler images with B-mode images to increase segmentation accuracy and robustness for small blood vessels.
An artery re-identification module qualitatively evaluates the real-time segmentation results and automatically optimizes the probe pose for enhanced Doppler images.
arXiv Detail & Related papers (2023-05-15T18:19:29Z)
- Bayesian Optimization of 2D Echocardiography Segmentation [2.6947715121689204]
We use BO to optimize the architectural and training-related hyperparameters of a deep convolutional neural network model.
The optimized model achieves Dice overlaps of 0.95, 0.96, and 0.93 on the left ventricular (LV) endocardium, LV epicardium, and left atrium, respectively.
We also observe significant improvement in derived clinical indices, including smaller median absolute errors for LV end-diastolic volume.
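The Dice overlaps quoted above follow the usual definition $2|A \cap B| / (|A| + |B|)$; a minimal sketch for binary masks (illustrative only, not the paper's evaluation code):

```python
import numpy as np

def dice_overlap(pred_mask, true_mask, eps=1e-7):
    """Dice coefficient between two binary segmentation masks."""
    pred = np.asarray(pred_mask, dtype=bool)
    true = np.asarray(true_mask, dtype=bool)
    intersection = np.logical_and(pred, true).sum()
    # eps guards against division by zero when both masks are empty
    return (2.0 * intersection + eps) / (pred.sum() + true.sum() + eps)
```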
arXiv Detail & Related papers (2022-11-17T20:52:36Z)
- D'ARTAGNAN: Counterfactual Video Generation [3.4079278794252232]
Causally-enabled machine learning frameworks could help clinicians to identify the best course of treatments by answering counterfactual questions.
We combine deep neural networks, twin causal networks and generative adversarial methods for the first time to build D'ARTAGNAN.
We generate new ultrasound videos, retaining the video style and anatomy of the original patient, with variations of the Ejection Fraction conditioned on a given input.
arXiv Detail & Related papers (2022-06-03T15:53:32Z)
- Automatic Segmentation of Left Ventricle in Cardiac Magnetic Resonance Images [0.9576327614980393]
Cardiologists often use ejection fraction to determine one's cardiac function.
We propose a multiscale template matching technique for detection and an elliptical active disc for automated segmentation of the left ventricle in MR images.
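Multiscale template matching can be illustrated by matching a resized template at several scales and keeping the strongest response; the OpenCV sketch below shows the generic technique, not the authors' detection pipeline (the scales and preprocessing are assumptions):

```python
import cv2
import numpy as np

def multiscale_template_match(image, template, scales=(0.75, 1.0, 1.25, 1.5)):
    """Return (score, top-left corner, scale) of the best template match.

    image, template: 2-D grayscale arrays (uint8 or float32), e.g. an MR
    slice and a left-ventricle template.
    """
    best = (-np.inf, None, None)
    for s in scales:
        t = cv2.resize(template, None, fx=s, fy=s, interpolation=cv2.INTER_LINEAR)
        if t.shape[0] > image.shape[0] or t.shape[1] > image.shape[1]:
            continue  # skip scales where the template no longer fits
        response = cv2.matchTemplate(image, t, cv2.TM_CCOEFF_NORMED)
        _, max_val, _, max_loc = cv2.minMaxLoc(response)
        if max_val > best[0]:
            best = (max_val, max_loc, s)
    return best
```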
arXiv Detail & Related papers (2022-01-30T13:05:35Z)
- A Deep Learning Approach to Predicting Collateral Flow in Stroke Patients Using Radiomic Features from Perfusion Images [58.17507437526425]
Collateral circulation results from specialized anastomotic channels which provide oxygenated blood to regions with compromised blood flow.
The actual grading is mostly done through manual inspection of the acquired images.
We present a deep learning approach to predicting collateral flow grading in stroke patients based on radiomic features extracted from MR perfusion data.
arXiv Detail & Related papers (2021-10-24T18:58:40Z)
- Ultrasound Video Transformers for Cardiac Ejection Fraction Estimation [3.188100483042461]
We propose a novel approach to ultrasound video analysis using a Residual Auto-Encoder Network and a BERT model adapted for token classification.
We apply our model to the task of End-Systolic (ES) and End-Diastolic (ED) frame detection and the automated computation of the left ventricular ejection fraction.
Our end-to-end learnable approach can estimate the ejection fraction with an MAE of 5.95 and an $R^2$ of 0.52 in 0.15 s per video, showing that segmentation is not the only way to predict the ejection fraction.
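Once the ED and ES frames are located, the ejection fraction follows the standard clinical definition $EF = (EDV - ESV)/EDV \times 100$; a minimal helper (how UVT derives the volumes internally is not shown here):

```python
def ejection_fraction(edv, esv):
    """Left ventricular ejection fraction (%) from end-diastolic and
    end-systolic volumes (any consistent volume unit, e.g. mL)."""
    return (edv - esv) / edv * 100.0

# e.g. ejection_fraction(120.0, 50.0) -> about 58.3 (%)
```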
arXiv Detail & Related papers (2021-07-02T11:23:09Z)
- Neural collaborative filtering for unsupervised mitral valve segmentation in echocardiography [60.08918310097638]
We propose an automated and unsupervised method for the mitral valve segmentation based on a low dimensional embedding of the echocardiography videos.
The method is evaluated in a collection of echocardiography videos of patients with a variety of mitral valve diseases and on an independent test cohort.
It outperforms state-of-the-art unsupervised and supervised methods on low-quality videos or in the case of sparse annotation.
arXiv Detail & Related papers (2020-08-13T12:53:26Z)
- Segmentation of the Myocardium on Late-Gadolinium Enhanced MRI based on 2.5D Residual Squeeze and Excitation Deep Learning Model [55.09533240649176]
The aim of this work is to develop an accurate automatic segmentation method based on deep learning models for the myocardial borders on LGE-MRI.
A total of 320 exams (with a mean of 6 slices per exam) were used for training and 28 exams for testing.
The performance of the proposed ensemble model in the basal and middle slices was similar to that of the intra-observer study and slightly lower in the apical slices.
arXiv Detail & Related papers (2020-05-27T20:44:38Z)