Unpaired cross-modality educed distillation (CMEDL) applied to CT lung
tumor segmentation
- URL: http://arxiv.org/abs/2107.07985v1
- Date: Fri, 16 Jul 2021 15:58:15 GMT
- Title: Unpaired cross-modality educed distillation (CMEDL) applied to CT lung
tumor segmentation
- Authors: Jue Jiang, Andreas Rimner, Joseph O. Deasy, and Harini Veeraraghavan
- Abstract summary: We develop a new cross-modality educed distillation (CMEDL) approach, using unpaired CT and MRI scans.
Our framework uses an end-to-end trained unpaired I2I translation, teacher, and student segmentation networks.
- Score: 4.409836695738518
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Accurate and robust segmentation of lung cancers from CTs is needed to more
accurately plan and deliver radiotherapy and to measure treatment response.
This is particularly difficult for tumors located close to the mediastinum, due to
low soft-tissue contrast. Therefore, we developed a new cross-modality educed
distillation (CMEDL) approach, using unpaired CT and MRI scans, whereby a
teacher MRI network guides a student CT network to extract features that signal
the difference between foreground and background. Our contribution eliminates
two requirements of distillation methods: (i) paired image sets by using an
image to image (I2I) translation and (ii) pre-training of the teacher network
with a large training set by using concurrent training of all networks. Our
framework uses an end-to-end trained unpaired I2I translation, teacher, and
student segmentation networks. Our framework can be combined with any I2I and
segmentation network. We demonstrate our framework's feasibility using 3
segmentation and 2 I2I methods. All networks were trained with 377 CT and 82
T2w MRI from different sets of patients. Ablation tests and different
strategies for incorporating MRI information into CT were performed. Accuracy
was measured using Dice similarity (DSC), surface Dice (sDSC), and Hausdorff
distance at the 95$^{th}$ percentile (HD95). The CMEDL approach was
significantly (p $<$ 0.001) more accurate than non-CMEDL methods,
quantitatively and visually. It produced the highest segmentation accuracy
(sDSC of 0.83 $\pm$ 0.16 and HD95 of 5.20 $\pm$ 6.86 mm). CMEDL was also more
accurate than using either pseudo-MRIs (pMRIs) alone or the combination of CTs
with pMRIs for segmentation.
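The distillation idea described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the paper's exact implementation: the Dice-based supervision, the L2 feature "hint" term, and the weight `lam` are assumptions for illustration (the actual framework additionally trains the unpaired I2I translation networks end-to-end with the teacher and student).

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    # Soft Dice loss between predicted probabilities and a binary mask.
    inter = np.sum(pred * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

def cmedl_loss(student_feat, teacher_feat, student_pred, teacher_pred,
               mask, lam=0.1):
    """Illustrative combined objective: both segmentation networks are
    supervised by the CT label mask (the teacher sees the pseudo-MRI
    translated from the same CT), while the student's intermediate CT
    features are pulled toward the teacher's via an L2 hint loss."""
    hint = np.mean((student_feat - teacher_feat) ** 2)
    return (dice_loss(student_pred, mask)
            + dice_loss(teacher_pred, mask)
            + lam * hint)
```

In this sketch, the `lam * hint` term is what "educes" foreground/background-discriminating features from the MRI teacher into the CT student; when the predictions match the mask and the features agree, the loss vanishes.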
Related papers
- Minimally Interactive Segmentation of Soft-Tissue Tumors on CT and MRI
using Deep Learning [0.0]
We develop a minimally interactive deep learning-based segmentation method for soft-tissue tumors (STTs) on CT and MRI.
The method requires the user to click six points near the tumor's extreme boundaries to serve as input for a Convolutional Neural Network.
arXiv Detail & Related papers (2024-02-12T16:15:28Z)
- Dual Multi-scale Mean Teacher Network for Semi-supervised Infection Segmentation in Chest CT Volume for COVID-19 [76.51091445670596]
Automated detection of lung infections from computed tomography (CT) data plays an important role in combating COVID-19.
Most current COVID-19 infection segmentation methods rely mainly on 2D CT images, which lack 3D sequential constraints.
Existing 3D CT segmentation methods focus on single-scale representations and do not achieve multiple receptive field sizes on the 3D volume.
arXiv Detail & Related papers (2022-11-10T13:11:21Z)
- Moving from 2D to 3D: volumetric medical image classification for rectal cancer staging [62.346649719614]
Preoperative discrimination between the T2 and T3 stages is arguably both the most challenging and the most clinically significant task in rectal cancer treatment.
We present a volumetric convolutional neural network to accurately discriminate T2 from T3 stage rectal cancer with rectal MR volumes.
arXiv Detail & Related papers (2022-09-13T07:10:14Z)
- Self-supervised 3D anatomy segmentation using self-distilled masked image transformer (SMIT) [2.7298989068857487]
Self-supervised learning has demonstrated success in medical image segmentation using convolutional networks.
We show our approach is more accurate and requires less fine-tuning data than other pretext tasks.
arXiv Detail & Related papers (2022-05-20T17:55:14Z)
- Weakly-supervised Biomechanically-constrained CT/MRI Registration of the Spine [72.85011943179894]
We propose a weakly-supervised deep learning framework that preserves the rigidity and the volume of each vertebra while maximizing the accuracy of the registration.
We specifically design these losses to depend only on the CT label maps since automatic vertebra segmentation in CT gives more accurate results contrary to MRI.
Our results show that adding the anatomy-aware losses increases the plausibility of the inferred transformation while keeping the accuracy untouched.
arXiv Detail & Related papers (2022-05-16T10:59:55Z)
- One shot PACS: Patient specific Anatomic Context and Shape prior aware recurrent registration-segmentation of longitudinal thoracic cone beam CTs [3.3504365823045044]
Thoracic CBCTs are hard to segment because of low soft-tissue contrast, imaging artifacts, respiratory motion, and large treatment-induced intra-thoracic anatomic changes.
We developed a novel Patient-specific Anatomic Context and Shape prior aware (PACS) 3D recurrent registration-segmentation network for longitudinal CBCT segmentation.
arXiv Detail & Related papers (2022-01-26T15:18:30Z)
- CNN Filter Learning from Drawn Markers for the Detection of Suggestive Signs of COVID-19 in CT Images [58.720142291102135]
We propose a method that requires neither large annotated datasets nor backpropagation to estimate the filters of a convolutional neural network (CNN).
For a few CT images, the user draws markers at representative normal and abnormal regions.
The method generates a feature extractor composed of a sequence of convolutional layers, whose kernels are specialized in enhancing regions similar to the marked ones.
arXiv Detail & Related papers (2021-11-16T15:03:42Z)
- Deep cross-modality (MR-CT) educed distillation learning for cone beam CT lung tumor segmentation [3.8791511769387634]
We developed a new deep learning CBCT lung tumor segmentation method.
The key idea of our approach is to use magnetic resonance imaging (MRI) to guide the training of a CBCT segmentation network.
We accomplish this by training an end-to-end network comprised of unpaired domain adaptation (UDA) and cross-domain segmentation distillation networks (SDN) using unpaired CBCT and MRI datasets.
arXiv Detail & Related papers (2021-02-17T03:52:02Z)
- PSIGAN: Joint probabilistic segmentation and image distribution matching for unpaired cross-modality adaptation based MRI segmentation [4.573421102994323]
We develop a new joint probabilistic segmentation and image distribution matching generative adversarial network (PSIGAN).
Our UDA approach models the co-dependency between images and their segmentation as a joint probability distribution.
Our method achieved an overall average DSC of 0.87 on T1w and 0.90 on T2w for the abdominal organs.
arXiv Detail & Related papers (2020-07-18T16:23:02Z)
- Segmentation of the Myocardium on Late-Gadolinium Enhanced MRI based on 2.5D Residual Squeeze and Excitation Deep Learning Model [55.09533240649176]
The aim of this work is to develop an accurate automatic segmentation method based on deep learning models for the myocardial borders on LGE-MRI.
A total of 320 exams (with a mean of 6 slices per exam) were used for training and 28 exams for testing.
The performance of the proposed ensemble model in the basal and middle slices was similar to that of an intra-observer study, and slightly lower at the apical slices.
arXiv Detail & Related papers (2020-05-27T20:44:38Z)
- Synergistic Learning of Lung Lobe Segmentation and Hierarchical Multi-Instance Classification for Automated Severity Assessment of COVID-19 in CT Images [61.862364277007934]
We propose a synergistic learning framework for automated severity assessment of COVID-19 in 3D CT images.
A multi-task deep network (called M$2$UNet) is then developed to assess the severity of COVID-19 patients.
Our M$2$UNet consists of a patch-level encoder, a segmentation sub-network for lung lobe segmentation, and a classification sub-network for severity assessment.
arXiv Detail & Related papers (2020-05-08T03:16:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.