Deep cross-modality (MR-CT) educed distillation learning for cone beam
CT lung tumor segmentation
- URL: http://arxiv.org/abs/2102.08556v1
- Date: Wed, 17 Feb 2021 03:52:02 GMT
- Title: Deep cross-modality (MR-CT) educed distillation learning for cone beam
CT lung tumor segmentation
- Authors: Jue Jiang, Sadegh Riyahi Alam, Ishita Chen, Perry Zhang, Andreas
Rimner, Joseph O. Deasy, Harini Veeraraghavan
- Abstract summary: We developed a new deep learning CBCT lung tumor segmentation method.
Key idea of our approach is to use magnetic resonance imaging (MRI) to guide a CBCT segmentation network training.
We accomplish this by training an end-to-end network comprising unpaired domain adaptation (UDA) and cross-domain segmentation distillation networks (SDN) using unpaired CBCT and MRI datasets.
- Score: 3.8791511769387634
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite the widespread availability of in-treatment room cone beam computed
tomography (CBCT) imaging, due to the lack of reliable segmentation methods,
CBCT is only used for gross setup corrections in lung radiotherapies. Accurate
and reliable auto-segmentation tools could potentiate volumetric response
assessment and geometry-guided adaptive radiation therapies. Therefore, we
developed a new deep learning CBCT lung tumor segmentation method. Methods: The
key idea of our approach, called cross-modality educed distillation (CMEDL), is
to use magnetic resonance imaging (MRI) to guide the training of a CBCT
segmentation network so that it extracts more informative features. We
accomplish this by training an end-to-end network comprising unpaired domain
adaptation (UDA) and cross-domain segmentation distillation networks (SDN)
using unpaired
CBCT and MRI datasets. Feature distillation regularizes the student network to
extract CBCT features that match the statistical distribution of MRI features
extracted by the teacher network and obtain better differentiation of tumor
from background. We also compared against an alternative framework that used
UDA with MR segmentation network, whereby segmentation was done on the
synthesized pseudo MRI representation. All networks were trained with 216
weekly CBCTs and 82 T2-weighted turbo spin echo MRI acquired from different
patient cohorts. Validation was done on 20 weekly CBCTs from patients not used
in training. Independent testing was done on 38 weekly CBCTs from patients not
used in training or validation. Segmentation accuracy was measured using
surface Dice similarity coefficient (SDSC) and Hausdorff distance at the 95th
percentile (HD95) metrics.
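The feature distillation step described above can be sketched as a statistical moment-matching loss: the student's CBCT features are pushed toward the channel-wise mean and standard deviation of the teacher's MRI features. The abstract does not specify the exact loss, so the following NumPy sketch assumes simple first- and second-moment matching; the function name and tensor shapes are illustrative.

```python
import numpy as np

def feature_distillation_loss(student_feats, teacher_feats):
    """Moment-matching distillation loss (illustrative sketch).

    Because the CBCT and MRI scans are unpaired, only distribution
    statistics of the feature maps are compared, not individual
    feature vectors. Inputs are arrays of shape (N, C, H, W).
    """
    # Channel-wise first and second moments over batch and space.
    s_mean = student_feats.mean(axis=(0, 2, 3))
    t_mean = teacher_feats.mean(axis=(0, 2, 3))
    s_std = student_feats.std(axis=(0, 2, 3))
    t_std = teacher_feats.std(axis=(0, 2, 3))
    # Penalize mismatch of means and standard deviations per channel.
    return float(np.mean((s_mean - t_mean) ** 2 + (s_std - t_std) ** 2))
```

In training, this term would be added to the segmentation loss so that the CBCT (student) encoder is regularized toward the MRI (teacher) feature statistics.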
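The HD95 evaluation metric can be computed directly from the two segmentation surfaces. A minimal NumPy sketch, assuming the surfaces are given as point sets; the symmetric form below, taking the larger of the two directed 95th-percentile distances, is one common convention:

```python
import numpy as np

def hd95(points_a, points_b):
    """Symmetric 95th-percentile Hausdorff distance between two
    surface point sets of shape (N, D) and (M, D)."""
    # Pairwise Euclidean distances, shape (N, M).
    d = np.linalg.norm(points_a[:, None, :] - points_b[None, :, :], axis=-1)
    a_to_b = d.min(axis=1)  # nearest-neighbor distance from each point in A to B
    b_to_a = d.min(axis=0)  # and from each point in B to A
    return float(max(np.percentile(a_to_b, 95), np.percentile(b_to_a, 95)))
```

Unlike the maximum Hausdorff distance, the 95th percentile discards the most extreme surface mismatches, making the metric robust to small segmentation outliers.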
Related papers
- CT-based brain ventricle segmentation via diffusion Schrödinger Bridge without target domain ground truths [0.9720086191214947]
Efficient and accurate brain ventricle segmentation from clinical CT scans is critical for emergency surgeries like ventriculostomy.
We introduce a novel uncertainty-aware ventricle segmentation technique without the need of CT segmentation ground truths.
Our method employs the diffusion Schrödinger Bridge and an attention recurrent residual U-Net to capitalize on unpaired CT and MRI scans.
arXiv Detail & Related papers (2024-05-28T15:17:58Z)
- A Two-Stage Generative Model with CycleGAN and Joint Diffusion for MRI-based Brain Tumor Detection [41.454028276986946]
We propose a novel framework Two-Stage Generative Model (TSGM) to improve brain tumor detection and segmentation.
CycleGAN is trained on unpaired data to generate abnormal images from healthy images as data prior.
VE-JP is implemented to reconstruct healthy images using synthetic paired abnormal images as a guide.
arXiv Detail & Related papers (2023-11-06T12:58:26Z)
- Learned Local Attention Maps for Synthesising Vessel Segmentations [43.314353195417326]
We present an encoder-decoder model for synthesising segmentations of the main cerebral arteries in the circle of Willis (CoW) from only T2 MRI.
It uses learned local attention maps generated by dilating the segmentation labels, which forces the network to only extract information from the T2 MRI relevant to synthesising the CoW.
arXiv Detail & Related papers (2023-08-24T15:32:27Z)
- Domain Transfer Through Image-to-Image Translation for Uncertainty-Aware Prostate Cancer Classification [42.75911994044675]
We present a novel approach for unpaired image-to-image translation of prostate MRIs and an uncertainty-aware training approach for classifying clinically significant PCa.
Our approach involves a novel pipeline for translating unpaired 3.0T multi-parametric prostate MRIs to 1.5T, thereby augmenting the available training data.
Our experiments demonstrate that the proposed method significantly improves the Area Under ROC Curve (AUC) by over 20% compared to the previous work.
arXiv Detail & Related papers (2023-07-02T05:26:54Z)
- Deformable Image Registration using Unsupervised Deep Learning for CBCT-guided Abdominal Radiotherapy [2.142433093974999]
The purpose of this study is to propose an unsupervised deep learning based CBCT-CBCT deformable image registration.
The proposed deformable registration workflow consists of training and inference stages that share the same feed-forward path through a spatial transformation-based network (STN).
The proposed method was evaluated using 100 fractional CBCTs from 20 abdominal cancer patients in the experiments and 105 fractional CBCTs from a cohort of 21 different abdominal cancer patients in a holdout test.
arXiv Detail & Related papers (2022-08-29T15:48:50Z)
- Weakly-supervised Biomechanically-constrained CT/MRI Registration of the Spine [72.85011943179894]
We propose a weakly-supervised deep learning framework that preserves the rigidity and the volume of each vertebra while maximizing the accuracy of the registration.
We specifically design these losses to depend only on the CT label maps, since automatic vertebra segmentation in CT gives more accurate results than in MRI.
Our results show that adding the anatomy-aware losses increases the plausibility of the inferred transformation while keeping the accuracy untouched.
arXiv Detail & Related papers (2022-05-16T10:59:55Z)
- Joint Liver and Hepatic Lesion Segmentation in MRI using a Hybrid CNN with Transformer Layers [2.055026516354464]
This work presents a hybrid network called SWTR-Unet, consisting of a pretrained ResNet, transformer blocks as well as a common Unet-style decoder path.
With average Dice scores of 98±2% for liver and 81±28% for lesion segmentation on the MRI dataset, and 97±2% and 79±25%, respectively, on the CT dataset, the proposed SWTR-Unet proved to be a precise approach for liver and hepatic lesion segmentation.
arXiv Detail & Related papers (2022-01-26T14:52:23Z)
- Unpaired cross-modality educed distillation (CMEDL) applied to CT lung tumor segmentation [4.409836695738518]
We develop a new cross-modality educed distillation (CMEDL) approach, using unpaired CT and MRI scans.
Our framework uses an end-to-end trained unpaired I2I translation, teacher, and student segmentation networks.
arXiv Detail & Related papers (2021-07-16T15:58:15Z)
- Generalizable Cone Beam CT Esophagus Segmentation Using Physics-Based Data Augmentation [4.5846054721257365]
We developed a semantic physics-based data augmentation method for segmenting the esophagus in planning CT (pCT) and cone-beam CT (CBCT).
191 cases with their pCT and CBCTs were used to train a modified 3D-Unet architecture with a multi-objective loss function specifically designed for soft-tissue organs such as esophagus.
Our physics-based data augmentation spans the realistic noise/artifact spectrum across patient CBCT/pCT data and can generalize well across modalities with the potential to improve the accuracy of treatment setup and response analysis.
arXiv Detail & Related papers (2020-06-28T21:12:09Z)
- Cardiac Segmentation on Late Gadolinium Enhancement MRI: A Benchmark Study from Multi-Sequence Cardiac MR Segmentation Challenge [43.01944884184009]
This paper presents selected results from the Multi-Sequence MR (MS-CMR) challenge, held in conjunction with MICCAI 2019.
It was aimed to develop new algorithms, as well as benchmark existing ones for LGE CMR segmentation and compare them objectively.
The success of these methods was mainly attributed to the inclusion of auxiliary sequences from the MS-CMR images.
arXiv Detail & Related papers (2020-06-22T17:04:38Z)
- Segmentation of the Myocardium on Late-Gadolinium Enhanced MRI based on 2.5D Residual Squeeze and Excitation Deep Learning Model [55.09533240649176]
The aim of this work is to develop an accurate automatic segmentation method based on deep learning models for the myocardial borders on LGE-MRI.
A total number of 320 exams (with a mean number of 6 slices per exam) were used for training and 28 exams used for testing.
The performance analysis of the proposed ensemble model in the basal and middle slices was similar as compared to intra-observer study and slightly lower at apical slices.
arXiv Detail & Related papers (2020-05-27T20:44:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.