Deformable Image Registration using Unsupervised Deep Learning for
CBCT-guided Abdominal Radiotherapy
- URL: http://arxiv.org/abs/2208.13686v1
- Date: Mon, 29 Aug 2022 15:48:50 GMT
- Authors: Huiqiao Xie, Yang Lei, Yabo Fu, Tonghe Wang, Justin Roper, Jeffrey D.
Bradley, Pretesh Patel, Tian Liu and Xiaofeng Yang
- Abstract summary: The purpose of this study is to propose an unsupervised deep learning-based CBCT-CBCT deformable image registration method.
The proposed deformable registration workflow consists of training and inference stages that share the same feed-forward path through a spatial transformation-based network (STN).
The proposed method was evaluated using 100 fractional CBCTs from 20 abdominal cancer patients in the experiments and 105 fractional CBCTs from a cohort of 21 different abdominal cancer patients in a holdout test.
- Score: 2.142433093974999
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: CBCTs in image-guided radiotherapy provide crucial anatomy information for
patient setup and plan evaluation. Longitudinal CBCT image registration could
quantify the inter-fractional anatomic changes. The purpose of this study is to
propose an unsupervised deep learning-based CBCT-CBCT deformable image
registration method. The proposed registration workflow consists of
training and inference stages that share the same feed-forward path through a
spatial transformation-based network (STN). The STN consists of a global
generative adversarial network (GlobalGAN) and a local GAN (LocalGAN) to
predict the coarse- and fine-scale motions, respectively. The network was
trained by minimizing an image similarity loss and a deformation vector field
(DVF) regularization loss, without supervision from ground-truth DVFs. During
the inference stage, patches of local DVF were predicted by the trained
LocalGAN and fused to form a whole-image DVF. This whole-image local DVF was
then combined with the GlobalGAN-generated DVF to obtain the final DVF. The
proposed method was evaluated using 100 fractional CBCTs from 20 abdominal
cancer patients in the experiments and 105 fractional CBCTs from a cohort of 21
different abdominal cancer patients in a holdout test. Qualitatively, the
registration results show good alignment between the deformed CBCT images and
the target CBCT image. Quantitatively, the average target registration error
(TRE), calculated on fiducial markers and manually identified landmarks, was
1.91±1.11 mm. The average mean absolute error (MAE) and normalized
cross-correlation (NCC) between the deformed and target CBCTs were 33.42±7.48
HU and 0.94±0.04, respectively. This promising registration method could
provide fast and accurate longitudinal CBCT alignment to facilitate the
analysis and prediction of inter-fractional anatomic changes.
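The unsupervised objective and the coarse-to-fine DVF composition described in the abstract can be sketched as follows. This is a minimal NumPy illustration under assumed conventions (2D images, a displacement field stored as a (2, H, W) array, nearest-neighbor warping, additive global/local composition); the function names are hypothetical, and the authors' actual method uses trainable GlobalGAN/LocalGAN networks with differentiable interpolation.

```python
import numpy as np

def warp(image, dvf):
    """Warp a 2D image with a dense DVF (per-pixel dy, dx) via
    nearest-neighbor sampling: output[y, x] = image[y + dy, x + dx]."""
    h, w = image.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    sy = np.clip(np.rint(ys + dvf[0]).astype(int), 0, h - 1)
    sx = np.clip(np.rint(xs + dvf[1]).astype(int), 0, w - 1)
    return image[sy, sx]

def similarity_loss(moved, target):
    # image similarity term (mean absolute error, matching the MAE metric)
    return float(np.mean(np.abs(moved - target)))

def regularization_loss(dvf):
    # DVF smoothness term: mean squared spatial gradient of the field
    gy = np.diff(dvf, axis=1)
    gx = np.diff(dvf, axis=2)
    return float(np.mean(gy ** 2) + np.mean(gx ** 2))

def compose(global_dvf, local_dvf):
    # combine coarse (GlobalGAN-style) and fine (LocalGAN-style) motions;
    # simple addition is assumed here for illustration
    return global_dvf + local_dvf

# Toy example: a one-pixel feature displaced by one row.
moving = np.zeros((4, 4))
moving[2, 2] = 1.0
target = np.zeros((4, 4))
target[1, 2] = 1.0

dvf = compose(np.zeros((2, 4, 4)), np.zeros((2, 4, 4)))
dvf[0] += 1.0  # sample one row down everywhere

moved = warp(moving, dvf)
total = similarity_loss(moved, target) + 0.1 * regularization_loss(dvf)
```

With the correct DVF, the similarity term drops to zero and the constant field incurs no smoothness penalty, which is the signal that drives training here without any ground-truth DVFs.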
Related papers
- KaLDeX: Kalman Filter based Linear Deformable Cross Attention for Retina Vessel Segmentation [46.57880203321858]
We propose a novel network (KaLDeX) for vascular segmentation leveraging a Kalman filter based linear deformable cross attention (LDCA) module.
Our approach is based on two key components: Kalman filter (KF) based linear deformable convolution (LD) and cross-attention (CA) modules.
The proposed method is evaluated on retinal fundus image datasets (DRIVE, CHASE_DB1, and STARE) as well as the 3mm and 6mm subsets of the OCTA-500 dataset.
arXiv Detail & Related papers (2024-10-28T16:00:42Z)
- Class-Aware Cartilage Segmentation for Autonomous US-CT Registration in Robotic Intercostal Ultrasound Imaging [39.597735935731386]
A class-aware cartilage bone segmentation network with geometry-constraint post-processing is presented to capture patient-specific rib skeletons.
A dense skeleton graph-based non-rigid registration is presented to map the intercostal scanning path from a generic template to individual patients.
Results demonstrate that the proposed graph-based registration method can robustly and precisely map the path from CT template to individual patients.
arXiv Detail & Related papers (2024-06-06T14:15:15Z)
- TransAnaNet: Transformer-based Anatomy Change Prediction Network for Head and Neck Cancer Patient Radiotherapy [6.199310532720352]
This study aims to assess the feasibility of using a vision-transformer (ViT) based neural network to predict RT-induced anatomic change in HNC patients.
A UNet-style ViT network was designed to learn spatial correspondence and contextual information from embedded CT, dose, CBCT01, GTVp, and GTVn image patches.
The predicted image from the proposed method yielded the best similarity to the real image (CBCT21) over pCT, CBCT01, and predicted CBCTs from other comparison models.
arXiv Detail & Related papers (2024-05-09T11:00:06Z)
- A Two-Stage Generative Model with CycleGAN and Joint Diffusion for MRI-based Brain Tumor Detection [41.454028276986946]
We propose a novel framework Two-Stage Generative Model (TSGM) to improve brain tumor detection and segmentation.
CycleGAN is trained on unpaired data to generate abnormal images from healthy images as data prior.
VE-JP is implemented to reconstruct healthy images using synthetic paired abnormal images as a guide.
arXiv Detail & Related papers (2023-11-06T12:58:26Z)
- Energy-Guided Diffusion Model for CBCT-to-CT Synthesis [8.888473799320593]
Cone Beam CT (CBCT) plays a crucial role in Adaptive Radiation Therapy (ART) by accurately providing radiation treatment when organ anatomy changes occur.
CBCT images suffer from scatter noise and artifacts, which makes relying solely on CBCT for precise dose calculation and accurate tissue localization challenging.
We propose an energy-guided diffusion model (EGDiff) and conduct experiments on a chest tumor dataset to generate synthetic CT (sCT) from CBCT.
arXiv Detail & Related papers (2023-08-07T07:23:43Z)
- Feature-enhanced Adversarial Semi-supervised Semantic Segmentation Network for Pulmonary Embolism Annotation [6.142272540492936]
This study established a feature-enhanced adversarial semi-supervised semantic segmentation model to automatically annotate pulmonary embolism lesion areas.
In current studies, all of the PEA image segmentation methods are trained by supervised learning.
This study proposed a semi-supervised learning method to make the model applicable to different datasets by adding a small amount of unlabeled images.
arXiv Detail & Related papers (2022-04-08T04:21:02Z)
- CNN Filter Learning from Drawn Markers for the Detection of Suggestive Signs of COVID-19 in CT Images [58.720142291102135]
We propose a method that does not require either large annotated datasets or backpropagation to estimate the filters of a convolutional neural network (CNN)
For a few CT images, the user draws markers at representative normal and abnormal regions.
The method generates a feature extractor composed of a sequence of convolutional layers, whose kernels are specialized in enhancing regions similar to the marked ones.
arXiv Detail & Related papers (2021-11-16T15:03:42Z)
- Cross-Modality Brain Tumor Segmentation via Bidirectional Global-to-Local Unsupervised Domain Adaptation [61.01704175938995]
In this paper, we propose a novel Bidirectional Global-to-Local (BiGL) adaptation framework under a UDA scheme.
Specifically, a bidirectional image synthesis and segmentation module is proposed to segment the brain tumor.
The proposed method outperforms several state-of-the-art unsupervised domain adaptation methods by a large margin.
arXiv Detail & Related papers (2021-05-17T10:11:45Z)
- Deep cross-modality (MR-CT) educed distillation learning for cone beam CT lung tumor segmentation [3.8791511769387634]
We developed a new deep learning CBCT lung tumor segmentation method.
Key idea of our approach is to use magnetic resonance imaging (MRI) to guide a CBCT segmentation network training.
We accomplish this by training an end-to-end network comprised of unpaired domain adaptation (UDA) and cross-domain segmentation distillation networks (SDN) using unpaired CBCT and MRI datasets.
arXiv Detail & Related papers (2021-02-17T03:52:02Z)
- CT Image Segmentation for Inflamed and Fibrotic Lungs Using a Multi-Resolution Convolutional Neural Network [6.177921466996229]
The purpose of this study was to develop a fully-automated segmentation algorithm, robust to various density enhancing lung abnormalities.
A polymorphic training approach is proposed, in which both specifically labeled left and right lungs of humans with COPD, and nonspecifically labeled lungs of animals with acute lung injury, were incorporated into training a single neural network.
The resulting network is intended for predicting left and right lung regions in humans with or without diffuse opacification and consolidation.
arXiv Detail & Related papers (2020-10-16T18:25:59Z)
- Inf-Net: Automatic COVID-19 Lung Infection Segmentation from CT Images [152.34988415258988]
Automated detection of lung infections from computed tomography (CT) images offers a great potential to augment the traditional healthcare strategy for tackling COVID-19.
Segmenting infected regions from CT slices faces several challenges, including high variation in infection characteristics and low intensity contrast between infections and normal tissues.
To address these challenges, a novel COVID-19 Deep Lung Infection Network (Inf-Net) is proposed to automatically identify infected regions from chest CT slices.
arXiv Detail & Related papers (2020-04-22T07:30:56Z)