Large-Scale Multi-Center CT and MRI Segmentation of Pancreas with Deep Learning
- URL: http://arxiv.org/abs/2405.12367v3
- Date: Fri, 25 Oct 2024 03:48:38 GMT
- Title: Large-Scale Multi-Center CT and MRI Segmentation of Pancreas with Deep Learning
- Authors: Zheyuan Zhang, Elif Keles, Gorkem Durak, Yavuz Taktak, Onkar Susladkar, Vandan Gorade, Debesh Jha, Asli C. Ormeci, Alpay Medetalibeyoglu, Lanhong Yao, Bin Wang, Ilkin Sevgi Isler, Linkai Peng, Hongyi Pan, Camila Lopes Vendrami, Amir Bourhani, Yury Velichko, Boqing Gong, Concetto Spampinato, Ayis Pyrros, Pallavi Tiwari, Derk C. F. Klatte, Megan Engels, Sanne Hoogenboom, Candice W. Bolan, Emil Agarunov, Nassier Harfouch, Chenchan Huang, Marco J. Bruno, Ivo Schoots, Rajesh N. Keswani, Frank H. Miller, Tamas Gonda, Cemal Yazici, Temel Tirkes, Baris Turkbey, Michael B. Wallace, Ulas Bagci
- Abstract summary: Automated volumetric segmentation of the pancreas is needed for diagnosis and follow-up of pancreatic diseases.
We developed PanSegNet, combining the strengths of nnUNet and a Transformer network with a new linear attention module enabling volumetric computation.
For segmentation accuracy, we achieved Dice coefficients of 88.3% (std: 7.2%, at case level) with CT, 85.0% (std: 7.9%) with T1W MRI, and 86.3% (std: 6.4%) with T2W MRI.
- Score: 20.043497517241992
- Abstract: Automated volumetric segmentation of the pancreas on cross-sectional imaging is needed for diagnosis and follow-up of pancreatic diseases. While CT-based pancreatic segmentation is more established, MRI-based segmentation methods are understudied, largely due to a lack of publicly available datasets, benchmarking research efforts, and domain-specific deep learning methods. In this retrospective study, we collected a large dataset (767 scans from 499 participants) of T1-weighted (T1W) and T2-weighted (T2W) abdominal MRI series from five centers between March 2004 and November 2022. We also collected CT scans of 1,350 patients from publicly available sources for benchmarking purposes. We developed a new pancreas segmentation method, called PanSegNet, combining the strengths of nnUNet and a Transformer network with a new linear attention module enabling volumetric computation. We tested PanSegNet's accuracy in cross-modality (a total of 2,117 scans) and cross-center settings with Dice and Hausdorff distance (HD95) evaluation metrics. We used Cohen's kappa statistics for intra and inter-rater agreement evaluation and paired t-tests for volume and Dice comparisons, respectively. For segmentation accuracy, we achieved Dice coefficients of 88.3% (std: 7.2%, at case level) with CT, 85.0% (std: 7.9%) with T1W MRI, and 86.3% (std: 6.4%) with T2W MRI. There was a high correlation for pancreas volume prediction with R^2 of 0.91, 0.84, and 0.85 for CT, T1W, and T2W, respectively. We found moderate inter-observer (0.624 and 0.638 for T1W and T2W MRI, respectively) and high intra-observer agreement scores. All MRI data is made available at https://osf.io/kysnj/. Our source code is available at https://github.com/NUBagciLab/PaNSegNet.
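The "linear attention module enabling volumetric computation" mentioned in the abstract can be illustrated with a generic linear-attention sketch. This is not PanSegNet's actual implementation (see the linked repository for that); the feature map phi(x) = ELU(x) + 1 and all shapes below are illustrative assumptions. The idea is to replace softmax(QK^T)V, which is quadratic in the number of tokens, with phi(Q)(phi(K)^T V), which is linear, so attention stays affordable over the long token sequences produced by full 3D volumes.

```python
import numpy as np

def feature_map(x):
    # ELU(x) + 1: a common positive feature map for linear attention
    # (an assumption here, not necessarily PanSegNet's choice).
    return np.where(x > 0, x + 1.0, np.exp(x))

def linear_attention(Q, K, V):
    """Linear attention: phi(Q) (phi(K)^T V), normalized per query.

    Q, K: (n, d) queries and keys; V: (n, dv) values.
    Cost is O(n * d * dv) instead of the O(n^2 * d) of softmax attention.
    """
    Qf, Kf = feature_map(Q), feature_map(K)
    kv = Kf.T @ V                    # (d, dv) global summary, built once
    z = Qf @ Kf.sum(axis=0)          # (n,) per-query normalizer
    return (Qf @ kv) / z[:, None]

rng = np.random.default_rng(0)
n, d, dv = 1024, 16, 32              # e.g. flattened 3D patch tokens
Q = rng.standard_normal((n, d))
K = rng.standard_normal((n, d))
V = rng.standard_normal((n, dv))
out = linear_attention(Q, K, V)
print(out.shape)                     # (1024, 32)
```

Because the (d, dv) summary phi(K)^T V is formed once and reused by every query, the sequence length n never appears squared, which is what makes attention over whole volumes tractable.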
Related papers
- Deep learning-based brain segmentation model performance validation with clinical radiotherapy CT [0.0]
This study validates the SynthSeg robust brain segmentation model on computed tomography (CT)
Brain segmentations from CT and MRI were obtained with SynthSeg model, a component of the Freesurfer imaging suite.
CT performance is lower than MRI based on the integrated QC scores, but low-quality segmentations can be excluded with QC-based thresholding.
arXiv Detail & Related papers (2024-06-25T09:56:30Z) - TotalSegmentator MRI: Sequence-Independent Segmentation of 59 Anatomical Structures in MR images [62.53931644063323]
In this study we extended the capabilities of TotalSegmentator to MR images.
We trained an nnU-Net segmentation algorithm on this dataset and calculated similarity coefficients (Dice) to evaluate the model's performance.
The model significantly outperformed two other publicly available segmentation models (Dice score 0.824 versus 0.762, p < 0.001; and 0.762 versus 0.542, p < 0.001).
arXiv Detail & Related papers (2024-05-29T20:15:54Z) - Simultaneous Deep Learning of Myocardium Segmentation and T2 Quantification for Acute Myocardial Infarction MRI [21.20007613833789]
We propose SQNet, a dual-task network integrating Transformer and Convolutional Neural Network (CNN) components.
SQNet features a T2-refine fusion decoder for quantitative analysis, leveraging global features from the Transformer.
A tight coupling module aligns and fuses CNN and Transformer branch features, enabling SQNet to focus on myocardium regions.
arXiv Detail & Related papers (2024-05-17T06:50:37Z) - MRSegmentator: Robust Multi-Modality Segmentation of 40 Classes in MRI and CT Sequences [4.000329151950926]
The model was trained on 1,200 manually annotated MRI scans from the UK Biobank, 221 in-house MRI scans, and 1,228 CT scans.
It showcased high accuracy in segmenting well-defined organs, achieving Dice Similarity Coefficient (DSC) scores of 0.97 for the right and left lungs, and 0.95 for the heart.
It also demonstrated robustness in organs like the liver (DSC: 0.96) and kidneys (DSC: 0.95 left, 0.95 right), which present more variability.
arXiv Detail & Related papers (2024-05-10T13:15:42Z) - Minimally Interactive Segmentation of Soft-Tissue Tumors on CT and MRI using Deep Learning [0.0]
We develop a minimally interactive deep learning-based segmentation method for soft-tissue tumors (STTs) on CT and MRI.
The method requires the user to click six points near the tumor's extreme boundaries to serve as input for a Convolutional Neural Network.
arXiv Detail & Related papers (2024-02-12T16:15:28Z) - Moving from 2D to 3D: volumetric medical image classification for rectal cancer staging [62.346649719614]
Preoperative discrimination between T2 and T3 stages is arguably both the most challenging and the most clinically significant task in rectal cancer treatment.
We present a volumetric convolutional neural network to accurately discriminate T2 from T3 stage rectal cancer with rectal MR volumes.
arXiv Detail & Related papers (2022-09-13T07:10:14Z) - TotalSegmentator: robust segmentation of 104 anatomical structures in CT images [48.50994220135258]
We present a deep learning segmentation model for body CT images.
The model can segment 104 anatomical structures relevant for use cases such as organ volumetry, disease characterization, and surgical or radiotherapy planning.
arXiv Detail & Related papers (2022-08-11T15:16:40Z) - Weakly-supervised Biomechanically-constrained CT/MRI Registration of the Spine [72.85011943179894]
We propose a weakly-supervised deep learning framework that preserves the rigidity and the volume of each vertebra while maximizing the accuracy of the registration.
We specifically design these losses to depend only on the CT label maps, since automatic vertebra segmentation gives more accurate results in CT than in MRI.
Our results show that adding the anatomy-aware losses increases the plausibility of the inferred transformation while keeping the accuracy untouched.
arXiv Detail & Related papers (2022-05-16T10:59:55Z) - Unpaired cross-modality educed distillation (CMEDL) applied to CT lung tumor segmentation [4.409836695738518]
We develop a new crossmodality educed distillation (CMEDL) approach, using unpaired CT and MRI scans.
Our framework uses an end-to-end trained unpaired I2I translation, teacher, and student segmentation networks.
arXiv Detail & Related papers (2021-07-16T15:58:15Z) - Brain tumor segmentation with self-ensembled, deeply-supervised 3D U-net neural networks: a BraTS 2020 challenge solution [56.17099252139182]
We automate and standardize the task of brain tumor segmentation with U-net like neural networks.
Two independent ensembles of models were trained, and each produced a brain tumor segmentation map.
Our solution achieved Dice scores of 0.79, 0.89, and 0.84, as well as 95% Hausdorff distances of 20.4, 6.7, and 19.5 mm, on the final test dataset.
arXiv Detail & Related papers (2020-10-30T14:36:10Z) - Segmentation of the Myocardium on Late-Gadolinium Enhanced MRI based on 2.5D Residual Squeeze and Excitation Deep Learning Model [55.09533240649176]
The aim of this work is to develop an accurate automatic segmentation method based on deep learning models for the myocardial borders on LGE-MRI.
A total number of 320 exams (with a mean number of 6 slices per exam) were used for training and 28 exams used for testing.
The performance of the proposed ensemble model in the basal and middle slices was similar to that of the intra-observer study, and slightly lower in the apical slices.
arXiv Detail & Related papers (2020-05-27T20:44:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.