Simultaneous Deep Learning of Myocardium Segmentation and T2 Quantification for Acute Myocardial Infarction MRI
- URL: http://arxiv.org/abs/2405.10570v3
- Date: Wed, 29 May 2024 09:36:34 GMT
- Title: Simultaneous Deep Learning of Myocardium Segmentation and T2 Quantification for Acute Myocardial Infarction MRI
- Authors: Yirong Zhou, Chengyan Wang, Mengtian Lu, Kunyuan Guo, Zi Wang, Dan Ruan, Rui Guo, Peijun Zhao, Jianhua Wang, Naiming Wu, Jianzhong Lin, Yinyin Chen, Hang Jin, Lianxin Xie, Lilan Wu, Liuhong Zhu, Jianjun Zhou, Congbo Cai, He Wang, Xiaobo Qu
- Abstract summary: We propose SQNet, a dual-task network integrating Transformer and Convolutional Neural Network (CNN) components.
SQNet features a T2-refine fusion decoder for quantitative analysis, leveraging global features from the Transformer.
A tight coupling module aligns and fuses CNN and Transformer branch features, enabling SQNet to focus on myocardium regions.
- Score: 21.20007613833789
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In cardiac Magnetic Resonance Imaging (MRI) analysis, simultaneous myocardial segmentation and T2 quantification are crucial for assessing myocardial pathologies. Existing methods often address these tasks separately, limiting their synergistic potential. To address this, we propose SQNet, a dual-task network integrating Transformer and Convolutional Neural Network (CNN) components. SQNet features a T2-refine fusion decoder for quantitative analysis, leveraging global features from the Transformer, and a segmentation decoder with multiple local region supervision for enhanced accuracy. A tight coupling module aligns and fuses CNN and Transformer branch features, enabling SQNet to focus on myocardium regions. Evaluation on healthy controls (HC) and acute myocardial infarction (AMI) patients demonstrates superior segmentation Dice scores (89.3/89.2) compared to state-of-the-art methods (87.7/87.9). T2 quantification yields strong linear correlations (Pearson coefficients: 0.84/0.93) with label values for HC/AMI, indicating accurate mapping. Radiologist evaluations confirm SQNet's superior image quality scores (4.60/4.58 for segmentation, 4.32/4.42 for T2 quantification) over state-of-the-art methods (4.50/4.44 for segmentation, 3.59/4.37 for T2 quantification). SQNet thus offers accurate simultaneous segmentation and quantification, enhancing the diagnosis of cardiac diseases such as AMI.
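The architecture described above is straightforward to sketch at a high level. The following PyTorch code is a minimal illustration of the dual-branch idea only: a CNN branch for local features, a Transformer branch for global context, a coupling step that fuses the two, and separate segmentation and T2 heads. All layer sizes, the patch embedding, and the concatenation-based fusion are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a dual-task CNN + Transformer network in the spirit of
# SQNet's description. Sizes, the fusion rule, and both heads are placeholders.
import torch
import torch.nn as nn


class ConvBranch(nn.Module):
    """Small CNN encoder producing local feature maps."""
    def __init__(self, in_ch: int, width: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)


class TransformerBranch(nn.Module):
    """Patch embedding + Transformer encoder for global context."""
    def __init__(self, in_ch: int, width: int, patch: int = 8):
        super().__init__()
        self.patch = patch
        self.embed = nn.Conv2d(in_ch, width, patch, stride=patch)
        layer = nn.TransformerEncoderLayer(d_model=width, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, x):
        z = self.embed(x)                                # (B, C, H/p, W/p)
        b, c, h, w = z.shape
        z = self.encoder(z.flatten(2).transpose(1, 2))   # tokens: (B, HW, C)
        z = z.transpose(1, 2).reshape(b, c, h, w)
        return nn.functional.interpolate(z, scale_factor=self.patch)


class SQNetSketch(nn.Module):
    """Dual-task output: myocardium segmentation logits + a T2 map."""
    def __init__(self, in_ch: int = 1, width: int = 32, n_classes: int = 2):
        super().__init__()
        self.cnn = ConvBranch(in_ch, width)
        self.vit = TransformerBranch(in_ch, width)
        # "Tight coupling" stand-in: align and fuse the branches (concat + 1x1 conv).
        self.couple = nn.Conv2d(2 * width, width, 1)
        self.seg_head = nn.Conv2d(width, n_classes, 1)   # segmentation decoder stand-in
        self.t2_head = nn.Conv2d(width, 1, 1)            # T2-refine fusion decoder stand-in

    def forward(self, x):
        fused = self.couple(torch.cat([self.cnn(x), self.vit(x)], dim=1))
        return self.seg_head(fused), self.t2_head(fused)


if __name__ == "__main__":
    x = torch.randn(2, 1, 64, 64)            # batch of 2 single-channel slices
    seg_logits, t2_map = SQNetSketch()(x)
    print(seg_logits.shape, t2_map.shape)    # (2, 2, 64, 64) and (2, 1, 64, 64)
```

The reported Dice scores and Pearson coefficients follow their standard definitions, which can be computed as below on synthetic data (again a generic sketch, not the authors' evaluation script):

```python
import numpy as np
from scipy.stats import pearsonr

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice overlap between two boolean masks: 2|A∩B| / (|A| + |B|)."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

# Toy check: Pearson correlation of predicted vs. reference T2 inside the myocardium.
rng = np.random.default_rng(0)
mask = np.zeros((64, 64), dtype=bool); mask[20:40, 20:40] = True
t2_ref = rng.uniform(40, 70, size=(64, 64))            # ms, synthetic
t2_pred = t2_ref + rng.normal(0, 3, size=(64, 64))     # noisy prediction
r, _ = pearsonr(t2_pred[mask], t2_ref[mask])
pred = mask.copy(); pred[20:24, :] = False             # perturbed prediction mask
print(f"dice={dice(pred, mask):.3f}, pearson r={r:.3f}")
```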
Related papers
- Multi-Source and Multi-Sequence Myocardial Pathology Segmentation Using a Cascading Refinement CNN [0.49923266458151416]
We propose the Multi-Sequence Cascading Refinement CNN (MS-CaRe-CNN) to generate semantic segmentations that assess the viability of myocardial tissue.
These segmentations enable downstream tasks such as personalized therapy planning; a minimal sketch of the cascading idea follows this entry.
arXiv Detail & Related papers (2024-09-19T14:01:15Z)
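Cascading refinement, as in MS-CaRe-CNN above, is a generic pattern: a first network predicts a coarse segmentation, and a second network refines it from the image concatenated with that prediction. A minimal PyTorch sketch of the pattern (stage architectures and shapes are placeholders, not the paper's design):

```python
import torch
import torch.nn as nn

def small_stage(in_ch: int, out_ch: int) -> nn.Sequential:
    """Stand-in for a segmentation stage (a real cascade would use full U-Nets)."""
    return nn.Sequential(
        nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, out_ch, 1),
    )

class CascadedRefinement(nn.Module):
    def __init__(self, in_ch: int = 1, n_classes: int = 4):
        super().__init__()
        self.coarse = small_stage(in_ch, n_classes)
        # The refiner sees the image plus the coarse class probabilities.
        self.refine = small_stage(in_ch + n_classes, n_classes)

    def forward(self, x):
        coarse = self.coarse(x)
        refined = self.refine(torch.cat([x, coarse.softmax(dim=1)], dim=1))
        return coarse, refined  # supervise both stages during training

x = torch.randn(1, 1, 96, 96)
coarse, refined = CascadedRefinement()(x)
print(coarse.shape, refined.shape)   # (1, 4, 96, 96) for both stages
```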
- Prototype Learning Guided Hybrid Network for Breast Tumor Segmentation in DCE-MRI [58.809276442508256]
We propose a hybrid network that combines convolutional neural network (CNN) and Transformer layers.
Experimental results on private and public DCE-MRI datasets demonstrate that the proposed hybrid network achieves superior performance compared to state-of-the-art methods.
arXiv Detail & Related papers (2024-08-11T15:46:00Z)
- Large-Scale Multi-Center CT and MRI Segmentation of Pancreas with Deep Learning [20.043497517241992]
Automated volumetric segmentation of the pancreas is needed for diagnosis and follow-up of pancreatic diseases.
We developed PanSegNet, combining the strengths of nnUNet and a Transformer network with a new linear attention module enabling volumetric computation (a sketch of the linear-attention idea follows this entry).
For segmentation accuracy, we achieved Dice coefficients of 88.3% (std: 7.2%, at case level) with CT, 85.0% (std: 7.9%, at case level) with T1W MRI, and 86.3% (std: 6.4%) with T2W MRI.
arXiv Detail & Related papers (2024-05-20T20:37:27Z)
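The linear attention mentioned in the PanSegNet entry above most likely follows the general kernelized-attention trick, where a feature map such as elu(x)+1 replaces the softmax so that cost scales linearly with token count, which is what makes whole-volume computation affordable. A sketch of that general trick (an assumption; PanSegNet's exact module may differ):

```python
import torch
import torch.nn.functional as F

def linear_attention(q, k, v, eps: float = 1e-6):
    """O(N) attention: phi(q) @ (phi(k)^T @ v), with phi(x) = elu(x) + 1.

    q, k, v: (batch, seq_len, dim). Softmax attention would cost O(N^2);
    here a (dim x dim) summary of keys and values is built once.
    """
    q, k = F.elu(q) + 1, F.elu(k) + 1
    kv = torch.einsum("bnd,bne->bde", k, v)                        # key-value summary
    z = 1.0 / (torch.einsum("bnd,bd->bn", q, k.sum(dim=1)) + eps)  # normalizer
    return torch.einsum("bnd,bde,bn->bne", q, kv, z)

q = k = v = torch.randn(2, 4096, 64)   # 4096 voxel tokens per volume
print(linear_attention(q, k, v).shape)  # torch.Size([2, 4096, 64])
```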
- Error correcting 2D-3D cascaded network for myocardial infarct scar segmentation on late gadolinium enhancement cardiac magnetic resonance images [0.0]
We propose a cascaded framework of two-dimensional and three-dimensional convolutional neural networks (CNNs) that enables fully automated calculation of the extent of myocardial infarction.
The proposed method was trained and evaluated with five-fold cross validation on the training dataset from the EMIDEC challenge; a generic sketch of such a split follows this entry.
arXiv Detail & Related papers (2023-06-26T14:21:18Z)
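Five-fold cross validation, as used in the entry above, simply partitions the exams so that each one is tested exactly once; a generic scikit-learn sketch with hypothetical case IDs:

```python
from sklearn.model_selection import KFold

cases = [f"case_{i:03d}" for i in range(100)]   # hypothetical exam IDs
splitter = KFold(n_splits=5, shuffle=True, random_state=0)
for fold, (train_idx, test_idx) in enumerate(splitter.split(cases)):
    train = [cases[i] for i in train_idx]        # 80 exams: train a fresh model
    test = [cases[i] for i in test_idx]          # 20 exams: evaluate this fold
    print(f"fold {fold}: {len(train)} train / {len(test)} test")
```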
- Segmentation of Aortic Vessel Tree in CT Scans with Deep Fully Convolutional Networks [4.062948258086793]
Automatic and accurate segmentation of aortic vessel tree (AVT) in computed tomography (CT) scans is crucial for early detection, diagnosis and prognosis of aortic diseases.
We use two-stage fully convolutional networks (FCNs) to automatically segment AVT in scans from multiple centers.
arXiv Detail & Related papers (2023-05-16T22:24:01Z)
- Dual Multi-scale Mean Teacher Network for Semi-supervised Infection Segmentation in Chest CT Volume for COVID-19 [76.51091445670596]
Automated detection of lung infections from computed tomography (CT) data plays an important role in combating COVID-19.
Most current COVID-19 infection segmentation methods rely mainly on 2D CT images, which lack 3D sequential constraints.
Existing 3D CT segmentation methods focus on single-scale representations and do not achieve multiple receptive field sizes on 3D volumes; a sketch of the mean-teacher mechanism follows this entry.
arXiv Detail & Related papers (2022-11-10T13:11:21Z)
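The mean-teacher component named above is standard semi-supervised machinery: the teacher's weights are an exponential moving average (EMA) of the student's, and unlabeled volumes contribute a consistency loss between the two predictions. A minimal sketch of those two pieces (the paper's dual multi-scale design is omitted):

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def ema_update(teacher, student, alpha: float = 0.99):
    """teacher <- alpha * teacher + (1 - alpha) * student, parameter-wise."""
    for t, s in zip(teacher.parameters(), student.parameters()):
        t.mul_(alpha).add_(s, alpha=1 - alpha)

def consistency_loss(student_logits, teacher_logits):
    """MSE between softened predictions on the same unlabeled input."""
    return F.mse_loss(student_logits.softmax(dim=1),
                      teacher_logits.softmax(dim=1).detach())

student = torch.nn.Conv3d(1, 2, 3, padding=1)   # stand-ins for full 3D networks
teacher = torch.nn.Conv3d(1, 2, 3, padding=1)
teacher.load_state_dict(student.state_dict())    # start identical
x = torch.randn(1, 1, 8, 32, 32)                 # unlabeled CT volume
loss = consistency_loss(student(x), teacher(x))
loss.backward()
ema_update(teacher, student)
```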
- Moving from 2D to 3D: volumetric medical image classification for rectal cancer staging [62.346649719614]
Preoperative discrimination between T2 and T3 stages is arguably both the most challenging and the most clinically significant task in rectal cancer treatment.
We present a volumetric convolutional neural network that accurately discriminates T2 from T3 stage rectal cancer using rectal MR volumes; a minimal 3D-CNN sketch follows this entry.
arXiv Detail & Related papers (2022-09-13T07:10:14Z)
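Volumetric staging as described above reduces to binary classification over 3D inputs; a tiny 3D-CNN sketch with an assumed volume size (not the paper's architecture):

```python
import torch
import torch.nn as nn

# Tiny 3D CNN for binary staging (T2 vs. T3); depth and width are placeholders.
model = nn.Sequential(
    nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
    nn.Conv3d(8, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool3d(1),
    nn.Flatten(), nn.Linear(16, 2),
)
volume = torch.randn(4, 1, 16, 64, 64)   # batch of rectal MR volumes (assumed size)
print(model(volume).shape)                # torch.Size([4, 2]) -> logits for T2/T3
```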
- CNN-based fully automatic wrist cartilage volume quantification in MR Image [55.41644538483948]
The U-Net convolutional neural network with additional attention layers provides the best wrist cartilage segmentation performance; a sketch of such an attention gate follows this entry.
The error of cartilage volume measurement should be assessed independently using a non-MRI method.
arXiv Detail & Related papers (2022-06-22T14:19:06Z)
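"Additional attention layers" on a U-Net, as in the wrist-cartilage entry above, are commonly implemented as additive attention gates on the skip connections; whether this paper uses exactly that design is an assumption. A minimal gate:

```python
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    """Additive attention gate: the decoder signal g weights the skip features x."""
    def __init__(self, g_ch: int, x_ch: int, mid_ch: int):
        super().__init__()
        self.wg = nn.Conv2d(g_ch, mid_ch, 1)
        self.wx = nn.Conv2d(x_ch, mid_ch, 1)
        self.psi = nn.Sequential(nn.Conv2d(mid_ch, 1, 1), nn.Sigmoid())

    def forward(self, g, x):
        a = self.psi(torch.relu(self.wg(g) + self.wx(x)))  # (B, 1, H, W) in [0, 1]
        return x * a                                        # suppress irrelevant skips

g = torch.randn(1, 64, 32, 32)   # gating signal from the decoder
x = torch.randn(1, 32, 32, 32)   # skip-connection features from the encoder
print(AttentionGate(64, 32, 16)(g, x).shape)   # torch.Size([1, 32, 32, 32])
```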
- Weakly-supervised Biomechanically-constrained CT/MRI Registration of the Spine [72.85011943179894]
We propose a weakly-supervised deep learning framework that preserves the rigidity and the volume of each vertebra while maximizing registration accuracy.
We specifically design these losses to depend only on the CT label maps, since automatic vertebra segmentation in CT gives more accurate results than in MRI.
Our results show that adding the anatomy-aware losses increases the plausibility of the inferred transformation while leaving accuracy unchanged; a sketch of a volume-preservation term follows this entry.
arXiv Detail & Related papers (2022-05-16T10:59:55Z)
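One common way to realize the volume-preservation constraint described above is to penalize the Jacobian determinant of the deformation for deviating from 1 inside each vertebra's CT label map; the rigidity term would similarly restrict the local transform. The sketch below (finite differences, 2D for brevity) is an assumed formulation, not necessarily the paper's exact loss:

```python
import torch

def volume_loss(disp: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    """Penalize |det J - 1| of phi(x) = x + u(x) inside a label mask.

    disp: (B, 2, H, W) displacement field u; mask: (B, 1, H, W) in {0, 1}.
    2D for brevity; the 3D case adds the z-derivatives to the determinant.
    """
    du_dy = disp[:, :, 1:, :-1] - disp[:, :, :-1, :-1]   # forward difference along y
    du_dx = disp[:, :, :-1, 1:] - disp[:, :, :-1, :-1]   # forward difference along x
    # det(I + grad u) = (1 + du_x/dx)(1 + du_y/dy) - (du_x/dy)(du_y/dx)
    det = (1 + du_dx[:, 0]) * (1 + du_dy[:, 1]) - du_dx[:, 1] * du_dy[:, 0]
    m = mask[:, 0, :-1, :-1]
    return ((det - 1).abs() * m).sum() / m.sum().clamp(min=1)

disp = 0.01 * torch.randn(1, 2, 64, 64)   # small random deformation
mask = torch.ones(1, 1, 64, 64)           # stand-in for a vertebra label map
print(volume_loss(disp, mask))            # near 0 for a small random field
```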
- Unpaired cross-modality educed distillation (CMEDL) applied to CT lung tumor segmentation [4.409836695738518]
We develop a new cross-modality educed distillation (CMEDL) approach using unpaired CT and MRI scans.
Our framework uses end-to-end trained unpaired I2I translation, teacher, and student segmentation networks; a generic distillation loss is sketched after this entry.
arXiv Detail & Related papers (2021-07-16T15:58:15Z)
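Distillation in a CMEDL-style setup has a teacher (informed by the MRI modality) guide the CT student through softened predictions; the I2I translation network is out of scope here. A generic temperature-scaled distillation loss (the authors' exact formulation may differ):

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T: float = 2.0):
    """KL divergence between temperature-softened teacher and student outputs."""
    log_p = F.log_softmax(student_logits / T, dim=1)
    q = F.softmax(teacher_logits / T, dim=1).detach()   # teacher is frozen
    return F.kl_div(log_p, q, reduction="batchmean") * T * T

s = torch.randn(2, 3, 32, 32, requires_grad=True)   # student segmentation logits
t = torch.randn(2, 3, 32, 32)                        # teacher logits (MRI-informed)
print(distillation_loss(s, t))
```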
- Segmentation of the Myocardium on Late-Gadolinium Enhanced MRI based on 2.5 D Residual Squeeze and Excitation Deep Learning Model [55.09533240649176]
The aim of this work is to develop an accurate automatic segmentation method based on deep learning models for the myocardial borders on LGE-MRI.
A total of 320 exams (with a mean of 6 slices per exam) were used for training and 28 exams for testing.
The performance of the proposed ensemble model in the basal and middle slices was comparable to the intra-observer study and slightly lower in the apical slices; a minimal squeeze-and-excitation block is sketched after this entry.
arXiv Detail & Related papers (2020-05-27T20:44:38Z)
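The squeeze-and-excitation (SE) block named in the last entry is a standard channel-attention unit: global-average-pool to a per-channel descriptor, pass it through a small bottleneck MLP, and rescale the channels. A minimal 2D version (the paper wraps this in 2.5D residual blocks):

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-excitation: reweight channels by globally pooled statistics."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(x.mean(dim=(2, 3)))   # squeeze: (B, C) channel descriptor
        return x * w.view(b, c, 1, 1)     # excite: rescale each channel

x = torch.randn(2, 32, 64, 64)
print(SEBlock(32)(x).shape)   # torch.Size([2, 32, 64, 64])
```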