Generalizable Cone Beam CT Esophagus Segmentation Using Physics-Based
Data Augmentation
- URL: http://arxiv.org/abs/2006.15713v2
- Date: Sat, 30 Jan 2021 22:33:15 GMT
- Title: Generalizable Cone Beam CT Esophagus Segmentation Using Physics-Based
Data Augmentation
- Authors: Sadegh R Alam, Tianfang Li, Pengpeng Zhang, Si-Yuan Zhang, and Saad
Nadeem
- Abstract summary: We developed a semantic, physics-based data augmentation method for segmenting the esophagus in planning CT (pCT) and cone-beam CT (CBCT).
A total of 191 cases with their pCT and CBCTs were used to train a modified 3D-Unet architecture with a multi-objective loss function specifically designed for soft-tissue organs such as the esophagus.
Our physics-based data augmentation spans the realistic noise/artifact spectrum across patient CBCT/pCT data and can generalize well across modalities with the potential to improve the accuracy of treatment setup and response analysis.
- Score: 4.5846054721257365
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Automated segmentation of the esophagus is critical in image-guided/adaptive
radiotherapy of lung cancer to minimize radiation-induced toxicities such as
acute esophagitis. We developed a semantic, physics-based data augmentation
method for segmenting the esophagus in both planning CT (pCT) and cone-beam CT
(CBCT) using 3D convolutional neural networks. A total of 191 cases with their pCT and
CBCTs from four independent datasets were used to train a modified 3D-Unet
architecture with a multi-objective loss function specifically designed for
soft-tissue organs such as the esophagus. Scatter artifacts and noise were
extracted from week 1 CBCTs using a power-law adaptive histogram equalization
method and induced into the corresponding pCTs, followed by reconstruction using
CBCT reconstruction parameters. Moreover, we leverage these physics-based,
artifact-induced pCTs to drive esophagus segmentation in real weekly CBCTs.
Segmentations were evaluated geometrically using Dice and Hausdorff distance, as
well as dosimetrically using mean esophagus dose and D5cc. Due to the
physics-based data augmentation, our model trained only on the synthetic CBCTs
was robust and generalizable enough to also produce state-of-the-art results on
the pCTs and CBCTs, achieving Dice overlaps of 0.81 and 0.74, respectively. Our physics-based
data augmentation spans the realistic noise/artifact spectrum across patient
CBCT/pCT data and can generalize well across modalities with the potential to
improve the accuracy of treatment setup and response analysis.
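The abstract describes extracting scatter artifacts and noise from week-1 CBCTs with a power-law adaptive histogram equalization step and injecting them into the registered pCTs. The paper's exact procedure is not reproduced here; the following is a minimal 2D sketch of the idea using NumPy and scikit-image, in which the function names, the HU normalization window, and the gamma/clip parameters are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np
from skimage import exposure

HU_WINDOW = (-1000.0, 1000.0)  # assumed normalization window in HU

def _to_unit(img_hu):
    """Normalize an HU image into [0, 1] for the exposure routines."""
    lo, hi = HU_WINDOW
    return np.clip((img_hu - lo) / (hi - lo), 0.0, 1.0)

def extract_artifact_map(cbct_hu, pct_hu, gamma=0.7, clip_limit=0.02):
    """Estimate a scatter/noise map from a registered week-1 CBCT slice.

    A power-law (gamma) transform followed by contrast-limited adaptive
    histogram equalization enhances low-amplitude scatter; subtracting the
    identically processed pCT leaves an approximate artifact residual.
    """
    cbct_eq = exposure.equalize_adapthist(_to_unit(cbct_hu) ** gamma, clip_limit=clip_limit)
    pct_eq = exposure.equalize_adapthist(_to_unit(pct_hu) ** gamma, clip_limit=clip_limit)
    return cbct_eq - pct_eq

def induce_artifacts(pct_hu, artifact_map, weight=0.5):
    """Inject the extracted artifact map into a pCT slice to synthesize a CBCT-like image (in HU)."""
    lo, hi = HU_WINDOW
    synthetic = np.clip(_to_unit(pct_hu) + weight * artifact_map, 0.0, 1.0)
    return synthetic * (hi - lo) + lo
```

In the paper, the artifact-induced images are additionally passed through reconstruction with CBCT reconstruction parameters; that step is not reproduced in this sketch.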
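The multi-objective loss used with the modified 3D-Unet is not specified in this summary. A common choice for thin soft-tissue organs such as the esophagus is a weighted combination of soft Dice and voxel-wise cross-entropy; the PyTorch sketch below illustrates that combination under assumed tensor shapes (N, 1, D, H, W). The weights and function names are hypothetical and not the authors' loss.

```python
import torch
import torch.nn.functional as F

def soft_dice_loss(logits, target, eps=1e-6):
    """Soft Dice loss for a binary (esophagus vs. background) 3D mask.

    Expects logits and target of shape (N, 1, D, H, W); target is float 0/1.
    """
    probs = torch.sigmoid(logits)
    dims = (1, 2, 3, 4)
    inter = (probs * target).sum(dim=dims)
    union = probs.sum(dim=dims) + target.sum(dim=dims)
    return 1.0 - ((2.0 * inter + eps) / (union + eps)).mean()

def multi_objective_loss(logits, target, w_dice=0.5, w_bce=0.5):
    """Weighted sum of soft Dice and voxel-wise binary cross-entropy."""
    bce = F.binary_cross_entropy_with_logits(logits, target)
    return w_dice * soft_dice_loss(logits, target) + w_bce * bce
```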
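The geometric evaluation reported in the abstract uses Dice overlap and Hausdorff distance. A simple NumPy/SciPy sketch of both metrics on binary 3D masks is given below; the voxel-spacing handling and function names are assumptions for illustration, and the dosimetric metrics (mean esophagus dose and D5cc) require dose grids and are not shown.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice_overlap(pred_mask, gt_mask, eps=1e-6):
    """Dice overlap between two binary 3D masks (boolean numpy arrays)."""
    inter = np.logical_and(pred_mask, gt_mask).sum()
    return (2.0 * inter + eps) / (pred_mask.sum() + gt_mask.sum() + eps)

def hausdorff_distance_mm(pred_mask, gt_mask, spacing=(1.0, 1.0, 1.0)):
    """Symmetric Hausdorff distance in mm, computed from all foreground
    voxel coordinates (surface extraction would be faster for large masks)."""
    pred_pts = np.argwhere(pred_mask) * np.asarray(spacing)
    gt_pts = np.argwhere(gt_mask) * np.asarray(spacing)
    d_fwd = directed_hausdorff(pred_pts, gt_pts)[0]
    d_bwd = directed_hausdorff(gt_pts, pred_pts)[0]
    return max(d_fwd, d_bwd)
```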
Related papers
- 3D-CT-GPT: Generating 3D Radiology Reports through Integration of Large Vision-Language Models [51.855377054763345]
This paper introduces 3D-CT-GPT, a Visual Question Answering (VQA)-based medical visual language model for generating radiology reports from 3D CT scans.
Experiments on both public and private datasets demonstrate that 3D-CT-GPT significantly outperforms existing methods in terms of report accuracy and quality.
arXiv Detail & Related papers (2024-09-28T12:31:07Z) - Improving Cone-Beam CT Image Quality with Knowledge Distillation-Enhanced Diffusion Model in Imbalanced Data Settings [6.157230849293829]
Daily cone-beam CT (CBCT) imaging, pivotal for therapy adjustment, falls short in tissue density accuracy.
We maximize the use of CBCT data acquired during therapy, complemented by sparse paired fan-beam CTs.
Our approach shows promise in generating high-quality CT images from CBCT scans in RT.
arXiv Detail & Related papers (2024-09-19T07:56:06Z) - CathFlow: Self-Supervised Segmentation of Catheters in Interventional Ultrasound Using Optical Flow and Transformers [66.15847237150909]
We introduce a self-supervised deep learning architecture to segment catheters in longitudinal ultrasound images.
The network architecture builds upon AiAReSeg, a segmentation transformer built with the Attention in Attention mechanism.
We validated our model on a test dataset, consisting of unseen synthetic data and images collected from silicon aorta phantoms.
arXiv Detail & Related papers (2024-03-21T15:13:36Z) - Energy-Guided Diffusion Model for CBCT-to-CT Synthesis [8.888473799320593]
Cone Beam CT (CBCT) plays a crucial role in Adaptive Radiation Therapy (ART) by enabling accurate radiation treatment delivery when organ anatomy changes occur.
However, CBCT images suffer from scatter noise and artifacts, making it challenging to rely solely on CBCT for precise dose calculation and accurate tissue localization.
We propose an energy-guided diffusion model (EGDiff) and conduct experiments on a chest tumor dataset to generate synthetic CT (sCT) from CBCT.
arXiv Detail & Related papers (2023-08-07T07:23:43Z) - Accurate Fine-Grained Segmentation of Human Anatomy in Radiographs via
Volumetric Pseudo-Labeling [66.75096111651062]
We created a large-scale dataset of 10,021 thoracic CTs with 157 labels.
We applied an ensemble of 3D anatomy segmentation models to extract anatomical pseudo-labels.
Our resulting segmentation models demonstrated remarkable performance on CXR.
arXiv Detail & Related papers (2023-06-06T18:01:08Z) - Comparing 3D deformations between longitudinal daily CBCT acquisitions
using CNN for head and neck radiotherapy toxicity prediction [1.8406176502821678]
The aim of this study is to demonstrate the clinical value of pre-treatment CBCT acquired daily during radiation therapy treatment for head and neck cancers.
We propose a deformable 3D classification pipeline that includes a component analyzing the Jacobian matrix of the deformation between planning CT and longitudinal CBCT.
arXiv Detail & Related papers (2023-03-07T15:07:43Z) - TotalSegmentator: robust segmentation of 104 anatomical structures in CT
images [48.50994220135258]
We present a deep learning segmentation model for body CT images.
The model can segment 104 anatomical structures relevant for use cases such as organ volumetry, disease characterization, and surgical or radiotherapy planning.
arXiv Detail & Related papers (2022-08-11T15:16:40Z) - Weakly-supervised Biomechanically-constrained CT/MRI Registration of the
Spine [72.85011943179894]
We propose a weakly-supervised deep learning framework that preserves the rigidity and the volume of each vertebra while maximizing the accuracy of the registration.
We specifically design these losses to depend only on the CT label maps, since automatic vertebra segmentation in CT gives more accurate results than in MRI.
Our results show that adding the anatomy-aware losses increases the plausibility of the inferred transformation while keeping the accuracy untouched.
arXiv Detail & Related papers (2022-05-16T10:59:55Z) - A unified 3D framework for Organs at Risk Localization and Segmentation
for Radiation Therapy Planning [56.52933974838905]
Current medical workflows require manual delineation of organs-at-risk (OAR).
In this work, we aim to introduce a unified 3D pipeline for OAR localization-segmentation.
Our proposed framework fully enables the exploitation of 3D context information inherent in medical imaging.
arXiv Detail & Related papers (2022-03-01T17:08:41Z) - Multitask 3D CBCT-to-CT Translation and Organs-at-Risk Segmentation
Using Physics-Based Data Augmentation [4.3971310109651665]
In current clinical practice, noisy and artifact-ridden weekly cone-beam computed tomography (CBCT) images are only used for patient setup during radiotherapy.
Treatment planning is done once at the beginning of the treatment using high-quality planning CT (pCT) images and manual contours for organs-at-risk (OARs) structures.
If the quality of the weekly CBCT images can be improved while simultaneously segmenting OAR structures, this can provide critical information for adapting radiotherapy mid-treatment and for deriving biomarkers for treatment response.
arXiv Detail & Related papers (2021-03-09T19:51:44Z) - Deep cross-modality (MR-CT) educed distillation learning for cone beam
CT lung tumor segmentation [3.8791511769387634]
We developed a new deep learning CBCT lung tumor segmentation method.
Key idea of our approach is to use magnetic resonance imaging (MRI) to guide a CBCT segmentation network training.
We accomplish this by training an end-to-end network comprised of unpaired domain adaptation (UDA) and cross-domain segmentation distillation networks (SDN) using unpaired CBCT and MRI datasets.
arXiv Detail & Related papers (2021-02-17T03:52:02Z)
This list is automatically generated from the titles and abstracts of the papers on this site.