Minimally Interactive Segmentation of Soft-Tissue Tumors on CT and MRI
using Deep Learning
- URL: http://arxiv.org/abs/2402.07746v1
- Date: Mon, 12 Feb 2024 16:15:28 GMT
- Title: Minimally Interactive Segmentation of Soft-Tissue Tumors on CT and MRI
using Deep Learning
- Authors: Douwe J. Spaanderman (1), Martijn P. A. Starmans (1), Gonnie C. M. van
Erp (1), David F. Hanff (1), Judith H. Sluijter (1), Anne-Rose W. Schut (2
and 3), Geert J. L. H. van Leenders (4), Cornelis Verhoef (2), Dirk J.
Grunhagen (2), Wiro J. Niessen (5), Jacob J. Visser (1), Stefan Klein (1)
((1) Department of Radiology and Nuclear Medicine, Erasmus MC, Rotterdam, the
Netherlands, (2) Department of Surgical Oncology, Erasmus MC Cancer
Institute, Rotterdam, the Netherlands, (3) Department of Medical Oncology,
Erasmus MC Cancer Institute, Rotterdam, the Netherlands, (4) Department of
Pathology, Erasmus MC Cancer Institute, Rotterdam, the Netherlands, (5)
Faculty of Medical Sciences, University of Groningen, Groningen, The
Netherlands)
- Abstract summary: We develop a minimally interactive deep learning-based segmentation method for soft-tissue tumors (STTs) on CT and MRI.
The method requires the user to click six points near the tumor's extreme boundaries to serve as input for a Convolutional Neural Network.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Segmentations are crucial in medical imaging to obtain morphological,
volumetric, and radiomics biomarkers. Manual segmentation is accurate but not
feasible in the radiologist's clinical workflow, while automatic segmentation
generally obtains sub-par performance. We therefore developed a minimally
interactive deep learning-based segmentation method for soft-tissue tumors
(STTs) on CT and MRI. The method requires the user to click six points near the
tumor's extreme boundaries. These six points are transformed into a distance
map and serve, with the image, as input for a Convolutional Neural Network. For
training and validation, a multicenter dataset containing 514 patients and nine
STT types in seven anatomical locations was used, resulting in a Dice
Similarity Coefficient (DSC) of 0.85$\pm$0.11 (mean $\pm$ standard deviation
(SD)) for CT and 0.84$\pm$0.12 for T1-weighted MRI, when compared to manual
segmentations made by expert radiologists. Next, the method was externally
validated on a dataset including five unseen STT phenotypes in extremities,
achieving 0.81$\pm$0.08 for CT, 0.84$\pm$0.09 for T1-weighted MRI, and
0.88$\pm$0.08 for previously unseen T2-weighted fat-saturated (FS) MRI. In
conclusion, our minimally interactive segmentation method effectively segments
different types of STTs on CT and MRI, with robust generalization to previously
unseen phenotypes and imaging modalities.
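As a rough illustration of the interaction scheme described in the abstract (six clicks near the tumor's extreme boundaries, converted into a distance map that is stacked with the image as input to the network) and of the reported evaluation metric, the sketch below assumes a Euclidean distance transform to the nearest clicked voxel and a standard Dice Similarity Coefficient. All function and variable names (clicks_to_network_input, dice, six_clicks) are hypothetical; the paper's exact distance-map formulation, preprocessing, and architecture are not specified in this listing.

```python
# Illustrative sketch only, not the authors' implementation.
import numpy as np
from scipy.ndimage import distance_transform_edt


def clicks_to_network_input(image, clicks):
    """Stack the image with a distance map derived from clicked points.

    image  : 3-D numpy array (the CT or MRI volume)
    clicks : list of (z, y, x) voxel coordinates, e.g. six extreme points
    returns: array of shape (2, *image.shape) -> [image, distance map]
    """
    seeds = np.ones(image.shape, dtype=np.uint8)
    for z, y, x in clicks:
        seeds[z, y, x] = 0                    # mark each clicked voxel
    dist = distance_transform_edt(seeds)      # distance to nearest click
    dist /= dist.max()                        # normalize to [0, 1]
    return np.stack([image.astype(np.float32), dist.astype(np.float32)])


def dice(pred, ref):
    """Dice Similarity Coefficient between two binary masks."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    return 2.0 * np.logical_and(pred, ref).sum() / (pred.sum() + ref.sum())


# Toy example: placeholder volume with six clicks near the tumor's extremes
volume = np.random.rand(64, 128, 128).astype(np.float32)
six_clicks = [(10, 64, 64), (50, 64, 64), (30, 20, 64),
              (30, 100, 64), (30, 64, 20), (30, 64, 100)]
net_input = clicks_to_network_input(volume, six_clicks)
print(net_input.shape)  # (2, 64, 128, 128)
```

In such a setup the two-channel array would replace the image alone as network input, and the resulting prediction would be compared against the expert manual segmentation with the dice function above.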
Related papers
- TotalSegmentator MRI: Sequence-Independent Segmentation of 59 Anatomical Structures in MR images [62.53931644063323]
In this study we extended the capabilities of TotalSegmentator to MR images.
We trained an nnU-Net segmentation algorithm on this dataset and calculated similarity coefficients (Dice) to evaluate the model's performance.
The model significantly outperformed two other publicly available segmentation models (Dice score 0.824 versus 0.762; p<0.001 and 0.762 versus 0.542; p<0.001).
arXiv Detail & Related papers (2024-05-29T20:15:54Z) - Large-Scale Multi-Center CT and MRI Segmentation of Pancreas with Deep Learning [20.043497517241992]
Automated volumetric segmentation of the pancreas is needed for diagnosis and follow-up of pancreatic diseases.
We developed PanSegNet, combining the strengths of nnUNet and a Transformer network with a new linear attention module enabling volumetric computation.
For segmentation accuracy, we achieved Dice coefficients of 88.3% (std: 7.2%, at case level) with CT, 85.0% (std: 7.9%, at case level) with T1W MRI, and 86.3% (std: 6.4%) with T2W MRI.
arXiv Detail & Related papers (2024-05-20T20:37:27Z) - MRSegmentator: Robust Multi-Modality Segmentation of 40 Classes in MRI and CT Sequences [4.000329151950926]
The model was trained on 1,200 manually annotated MRI scans from the UK Biobank, 221 in-house MRI scans and 1,228 CT scans.
It showcased high accuracy in segmenting well-defined organs, achieving Dice Similarity Coefficient (DSC) scores of 0.97 for the right and left lungs, and 0.95 for the heart.
It also demonstrated robustness in organs like the liver (DSC: 0.96) and kidneys (DSC: 0.95 left, 0.95 right), which present more variability.
arXiv Detail & Related papers (2024-05-10T13:15:42Z) - Learned Local Attention Maps for Synthesising Vessel Segmentations [43.314353195417326]
We present an encoder-decoder model for synthesising segmentations of the main cerebral arteries in the circle of Willis (CoW) from only T2 MRI.
It uses learned local attention maps generated by dilating the segmentation labels, which forces the network to only extract information from the T2 MRI relevant to synthesising the CoW.
arXiv Detail & Related papers (2023-08-24T15:32:27Z) - TotalSegmentator: robust segmentation of 104 anatomical structures in CT
images [48.50994220135258]
We present a deep learning segmentation model for body CT images.
The model can segment 104 anatomical structures relevant for use cases such as organ volumetry, disease characterization, and surgical or radiotherapy planning.
arXiv Detail & Related papers (2022-08-11T15:16:40Z) - Weakly-supervised Biomechanically-constrained CT/MRI Registration of the
Spine [72.85011943179894]
We propose a weakly-supervised deep learning framework that preserves the rigidity and the volume of each vertebra while maximizing the accuracy of the registration.
We specifically design these losses to depend only on the CT label maps, since automatic vertebra segmentation in CT gives more accurate results than in MRI.
Our results show that adding the anatomy-aware losses increases the plausibility of the inferred transformation while keeping the accuracy untouched.
arXiv Detail & Related papers (2022-05-16T10:59:55Z) - One shot PACS: Patient specific Anatomic Context and Shape prior aware
recurrent registration-segmentation of longitudinal thoracic cone beam CTs [3.3504365823045044]
Thoracic CBCTs are hard to segment because of low soft-tissue contrast, imaging artifacts, respiratory motion, and large treatment-induced intra-thoracic anatomic changes.
We developed a novel Patient-specific Anatomic Context and Shape prior (PACS) aware 3D recurrent registration-segmentation network for longitudinal CBCT segmentation.
arXiv Detail & Related papers (2022-01-26T15:18:30Z) - Unpaired cross-modality educed distillation (CMEDL) applied to CT lung
tumor segmentation [4.409836695738518]
We develop a new cross-modality educed distillation (CMEDL) approach, using unpaired CT and MRI scans.
Our framework uses an end-to-end trained unpaired I2I translation, teacher, and student segmentation networks.
arXiv Detail & Related papers (2021-07-16T15:58:15Z) - PSIGAN: Joint probabilistic segmentation and image distribution matching
for unpaired cross-modality adaptation based MRI segmentation [4.573421102994323]
We develop a new joint probabilistic segmentation and image distribution matching generative adversarial network (PSIGAN).
Our UDA approach models the co-dependency between images and their segmentation as a joint probability distribution.
Our method achieved an overall average DSC of 0.87 on T1w and 0.90 on T2w for the abdominal organs.
arXiv Detail & Related papers (2020-07-18T16:23:02Z) - Segmentation of the Myocardium on Late-Gadolinium Enhanced MRI based on
2.5 D Residual Squeeze and Excitation Deep Learning Model [55.09533240649176]
The aim of this work is to develop an accurate automatic segmentation method based on deep learning models for the myocardial borders on LGE-MRI.
A total of 320 exams (with a mean of 6 slices per exam) were used for training and 28 exams for testing.
The performance of the proposed ensemble model in the basal and middle slices was similar to that of the intra-observer study and slightly lower in the apical slices.
arXiv Detail & Related papers (2020-05-27T20:44:38Z) - Co-Heterogeneous and Adaptive Segmentation from Multi-Source and
Multi-Phase CT Imaging Data: A Study on Pathological Liver and Lesion
Segmentation [48.504790189796836]
We present a novel segmentation strategy, co-heterogeneous and adaptive segmentation (CHASe).
We propose a versatile framework that fuses appearance based semi-supervision, mask based adversarial domain adaptation, and pseudo-labeling.
CHASe can further improve pathological liver mask Dice-Sorensen coefficients by ranges of $4.2\% \sim 9.4\%$.
arXiv Detail & Related papers (2020-05-27T06:58:39Z)