Multitask 3D CBCT-to-CT Translation and Organs-at-Risk Segmentation
Using Physics-Based Data Augmentation
- URL: http://arxiv.org/abs/2103.05690v1
- Date: Tue, 9 Mar 2021 19:51:44 GMT
- Title: Multitask 3D CBCT-to-CT Translation and Organs-at-Risk Segmentation
Using Physics-Based Data Augmentation
- Authors: Navdeep Dahiya, Sadegh R Alam, Pengpeng Zhang, Si-Yuan Zhang, Anthony
Yezzi, and Saad Nadeem
- Score: 4.3971310109651665
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Purpose: In current clinical practice, noisy and artifact-ridden weekly
cone-beam computed tomography (CBCT) images are only used for patient setup
during radiotherapy. Treatment planning is done once at the beginning of the
treatment using high-quality planning CT (pCT) images and manual contours for
organs-at-risk (OARs) structures. If the quality of the weekly CBCT images can
be improved while simultaneously segmenting OAR structures, this can provide
critical information for adapting radiotherapy mid-treatment as well as for
deriving biomarkers for treatment response. Methods: Using a novel
physics-based data augmentation strategy, we synthesize a large dataset of
perfectly/inherently registered planning CT and synthetic-CBCT pairs for
a locally advanced lung cancer patient cohort, which are then used in a multitask
3D deep learning framework to simultaneously segment and translate real weekly
CBCT images to high-quality planning CT-like images. Results: We compared the
synthetic CT and OAR segmentations generated by the model to real planning CT
and manual OAR segmentations and showed promising results. The real week 1
(baseline) CBCT images, which had an average MAE of 162.77 HU compared to the
pCT images, are translated to synthetic CT images that exhibit a drastically
improved average MAE of 29.31 HU and an average structural similarity of 92%
with the pCT images. The average Dice scores of the 3D organs-at-risk
segmentations are: lungs 0.96, heart 0.88, spinal cord 0.83, and esophagus
0.66. Conclusions:
We demonstrate an approach to translate artifact-ridden CBCT images to
high-quality synthetic CT images while simultaneously generating good-quality
segmentation masks for different organs-at-risk. This approach could allow
clinicians to adjust treatment plans using only the routine low-quality CBCT
images, potentially improving patient outcomes.
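The evaluation above rests on two metrics: mean absolute error in Hounsfield units between the synthetic and planning CT volumes, and Dice overlap between the predicted and manual OAR masks. As a minimal NumPy sketch (not the authors' evaluation code), these can be computed as follows; the optional body mask is an assumption about excluding air outside the patient:

```python
import numpy as np

def mae_hu(ct_a, ct_b, body_mask=None):
    """Mean absolute error in Hounsfield units between two CT volumes.

    If a body mask is supplied, the error is averaged inside it only
    (an assumed convention so that air outside the patient does not count).
    """
    diff = np.abs(ct_a.astype(np.float64) - ct_b.astype(np.float64))
    if body_mask is not None:
        return diff[body_mask].mean()
    return diff.mean()

def dice(pred_mask, gt_mask, eps=1e-8):
    """Dice overlap between two binary 3D segmentation masks."""
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum() + eps)
```

For example, two volumes that differ everywhere by 10 HU give an MAE of exactly 10.0, and a mask compared against itself gives a Dice score of 1.0.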
Related papers
- HC$^3$L-Diff: Hybrid conditional latent diffusion with high frequency enhancement for CBCT-to-CT synthesis [10.699377597641137]
We propose a novel conditional latent diffusion model for efficient CBCT-to-CT synthesis.
We employ the Unified Feature Encoder (UFE) to compress images into a low-dimensional latent space.
Our method can efficiently achieve high-quality CBCT-to-CT synthesis in just over 2 minutes per patient.
arXiv Detail & Related papers (2024-11-03T14:00:12Z)
- Improving Cone-Beam CT Image Quality with Knowledge Distillation-Enhanced Diffusion Model in Imbalanced Data Settings [6.157230849293829]
Daily cone-beam CT (CBCT) imaging, pivotal for therapy adjustment, falls short in tissue density accuracy.
We make maximal use of the CBCT data acquired during therapy, complemented by sparse paired fan-beam CTs.
Our approach shows promise in generating high-quality CT images from CBCT scans in RT.
arXiv Detail & Related papers (2024-09-19T07:56:06Z)
- A multi-channel cycleGAN for CBCT to CT synthesis [0.0]
Image synthesis is used to generate synthetic CTs (sCTs) from on-treatment cone-beam CTs (CBCTs).
Our contribution focuses on the second task, CBCT-to-sCT synthesis.
By leveraging a multi-channel input to emphasize specific image features, our approach effectively addresses some of the challenges inherent in CBCT imaging.
arXiv Detail & Related papers (2023-12-04T16:40:53Z)
- Accurate Fine-Grained Segmentation of Human Anatomy in Radiographs via Volumetric Pseudo-Labeling [66.75096111651062]
We created a large-scale dataset of 10,021 thoracic CTs with 157 labels.
We applied an ensemble of 3D anatomy segmentation models to extract anatomical pseudo-labels.
Our resulting segmentation models demonstrated remarkable performance on CXR.
arXiv Detail & Related papers (2023-06-06T18:01:08Z)
- Evaluation of Synthetically Generated CT for use in Transcranial Focused Ultrasound Procedures [5.921808547303054]
Transcranial focused ultrasound (tFUS) is a therapeutic ultrasound method that focuses sound through the skull to a small region noninvasively and often under MRI guidance.
CT imaging is used to estimate the acoustic properties that vary between individual skulls to enable effective focusing during tFUS procedures.
Here, we synthesized CT images from routinely acquired T1-weighted MRI by using a 3D patch-based conditional generative adversarial network (cGAN)
We compared the performance of sCT to real CT (rCT) images for tFUS planning using Kranion and acoustic toolbox simulations.
arXiv Detail & Related papers (2022-10-26T15:15:24Z)
- Incremental Cross-view Mutual Distillation for Self-supervised Medical CT Synthesis [88.39466012709205]
This paper builds a novel medical slice synthesis method to increase the between-slice resolution.
Considering that the ground-truth intermediate medical slices are always absent in clinical practice, we introduce the incremental cross-view mutual distillation strategy.
Our method outperforms state-of-the-art algorithms by clear margins.
arXiv Detail & Related papers (2021-12-20T03:38:37Z)
- CyTran: A Cycle-Consistent Transformer with Multi-Level Consistency for Non-Contrast to Contrast CT Translation [56.622832383316215]
We propose a novel approach to translate unpaired contrast computed tomography (CT) scans to non-contrast CT scans.
Our approach is based on cycle-consistent generative adversarial convolutional transformers, for short, CyTran.
Our empirical results show that CyTran outperforms all competing methods.
arXiv Detail & Related papers (2021-10-12T23:25:03Z)
- M3Lung-Sys: A Deep Learning System for Multi-Class Lung Pneumonia Screening from CT Imaging [85.00066186644466]
We propose a Multi-task Multi-slice Deep Learning System (M3Lung-Sys) for multi-class lung pneumonia screening from CT imaging.
In addition to distinguishing COVID-19 from Healthy, H1N1, and CAP cases, our M3Lung-Sys is also able to locate the areas of relevant lesions.
arXiv Detail & Related papers (2020-10-07T06:22:24Z)
- COVIDNet-CT: A Tailored Deep Convolutional Neural Network Design for Detection of COVID-19 Cases from Chest CT Images [75.74756992992147]
We introduce COVIDNet-CT, a deep convolutional neural network architecture that is tailored for detection of COVID-19 cases from chest CT images.
We also introduce COVIDx-CT, a benchmark CT image dataset derived from CT imaging data collected by the China National Center for Bioinformation.
arXiv Detail & Related papers (2020-09-08T15:49:55Z)
- Generalizable Cone Beam CT Esophagus Segmentation Using Physics-Based Data Augmentation [4.5846054721257365]
We developed a semantic physics-based data augmentation method for segmenting the esophagus in planning CT (pCT) and cone-beam CT (CBCT) images.
191 cases with their pCT and CBCTs were used to train a modified 3D-Unet architecture with a multi-objective loss function specifically designed for soft-tissue organs such as esophagus.
Our physics-based data augmentation spans the realistic noise/artifact spectrum across patient CBCT/pCT data and can generalize well across modalities with the potential to improve the accuracy of treatment setup and response analysis.
arXiv Detail & Related papers (2020-06-28T21:12:09Z)
- Synergistic Learning of Lung Lobe Segmentation and Hierarchical Multi-Instance Classification for Automated Severity Assessment of COVID-19 in CT Images [61.862364277007934]
We propose a synergistic learning framework for automated severity assessment of COVID-19 in 3D CT images.
A multi-task deep network (called M$2$UNet) is then developed to assess the severity of COVID-19 patients.
Our M$2$UNet consists of a patch-level encoder, a segmentation sub-network for lung lobe segmentation, and a classification sub-network for severity assessment.
arXiv Detail & Related papers (2020-05-08T03:16:15Z)
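Several of the segmentation papers above (and the main paper's esophagus results) mention multi-objective loss functions designed for soft-tissue organs. The exact losses vary by paper; as a hedged NumPy sketch rather than any paper's actual loss, a common instantiation combines a soft Dice term with binary cross-entropy, where the equal 0.5/0.5 weighting is an illustrative assumption:

```python
import numpy as np

def soft_dice_loss(prob, target, eps=1e-6):
    """1 - soft Dice between predicted probabilities and a binary target."""
    inter = (prob * target).sum()
    return 1.0 - (2.0 * inter + eps) / (prob.sum() + target.sum() + eps)

def bce_loss(prob, target, eps=1e-7):
    """Binary cross-entropy, with probabilities clipped for stability."""
    p = np.clip(prob, eps, 1.0 - eps)
    return -(target * np.log(p) + (1 - target) * np.log(1 - p)).mean()

def multi_objective_loss(prob, target, w_dice=0.5, w_bce=0.5):
    """Weighted combination of soft Dice and cross-entropy terms.

    The weights are assumptions for illustration; papers typically tune
    them, and Dice terms help with small, thin structures (e.g. esophagus)
    where voxel-wise cross-entropy alone is dominated by the background.
    """
    return w_dice * soft_dice_loss(prob, target) + w_bce * bce_loss(prob, target)
```

A perfect prediction drives both terms toward zero, while an inverted prediction is penalized heavily by both, which is the behavior a segmentation trainer relies on.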
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.