TransAnaNet: Transformer-based Anatomy Change Prediction Network for Head and Neck Cancer Patient Radiotherapy
- URL: http://arxiv.org/abs/2405.05674v2
- Date: Thu, 23 May 2024 02:55:09 GMT
- Title: TransAnaNet: Transformer-based Anatomy Change Prediction Network for Head and Neck Cancer Patient Radiotherapy
- Authors: Meixu Chen, Kai Wang, Michael Dohopolski, Howard Morgan, David Sher, Jing Wang
- Abstract summary: This study aims to assess the feasibility of using a vision-transformer (ViT) based neural network to predict RT-induced anatomic change in HNC patients.
A UNet-style ViT network was designed to learn spatial correspondence and contextual information from embedded CT, dose, CBCT01, GTVp, and GTVn image patches.
The image predicted by the proposed method showed greater similarity to the real image (CBCT21) than pCT, CBCT01, and the CBCTs predicted by the comparison models.
- Score: 6.199310532720352
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Early identification of head and neck cancer (HNC) patients who would experience significant anatomical change during radiotherapy (RT) is important for optimizing patient clinical benefit and treatment resources. This study aims to assess the feasibility of using a vision-transformer (ViT) based neural network to predict RT-induced anatomic change in HNC patients. We retrospectively included 121 HNC patients treated with definitive RT/CRT. We collected the planning CT (pCT), planned dose, CBCTs acquired at the initial treatment (CBCT01) and fraction 21 (CBCT21), and the primary tumor volume (GTVp) and involved nodal volume (GTVn) delineated on both pCT and CBCTs for model construction and evaluation. A UNet-style ViT network was designed to learn spatial correspondence and contextual information from embedded CT, dose, CBCT01, GTVp, and GTVn image patches. The model estimated the deformation vector field between CBCT01 and CBCT21 as the prediction of anatomic change, and the deformed CBCT01 was used as the prediction of CBCT21. We also generated binary masks of GTVp, GTVn, and the patient body for volumetric change evaluation. The image predicted by the proposed method showed greater similarity to the real image (CBCT21) than pCT, CBCT01, and the CBCTs predicted by the comparison models. The average MSE and SSIM between the normalized predicted CBCT and CBCT21 were 0.009 and 0.933, and the average Dice coefficients for the body, GTVp, and GTVn masks were 0.972, 0.792, and 0.821, respectively. The proposed method showed promising performance for predicting radiotherapy-induced anatomic change and has the potential to assist decision-making for HNC adaptive RT.
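The prediction-and-evaluation step described in the abstract (warping CBCT01 with the estimated deformation vector field and scoring the result against CBCT21 with MSE and mask overlap) can be illustrated with a short sketch. This is not the authors' code: PyTorch is assumed, and the tensor layout, the DVF convention (per-voxel displacements, channel-first), and helper names such as `warp_with_dvf` and `dice` are illustrative assumptions.

```python
# Minimal sketch (not the authors' code): warp CBCT01 with a predicted
# deformation vector field (DVF) and score the result against CBCT21.
# Shapes, normalization, and the DVF convention are illustrative assumptions.
import torch
import torch.nn.functional as F


def warp_with_dvf(moving: torch.Tensor, dvf: torch.Tensor) -> torch.Tensor:
    """Trilinearly warp a volume with a dense DVF.

    moving: (B, 1, D, H, W) image, e.g. CBCT01 normalized to [0, 1].
    dvf:    (B, 3, D, H, W) displacements in voxels (dz, dy, dx order assumed).
    """
    b, _, d, h, w = moving.shape
    # Identity sampling grid in voxel coordinates.
    zz, yy, xx = torch.meshgrid(
        torch.arange(d), torch.arange(h), torch.arange(w), indexing="ij"
    )
    grid = torch.stack((zz, yy, xx)).float().to(moving.device)   # (3, D, H, W)
    coords = grid.unsqueeze(0) + dvf                              # displaced voxel coordinates
    # grid_sample expects normalized (x, y, z) coordinates in [-1, 1].
    norm = torch.stack(
        (
            2.0 * coords[:, 2] / max(w - 1, 1) - 1.0,  # x
            2.0 * coords[:, 1] / max(h - 1, 1) - 1.0,  # y
            2.0 * coords[:, 0] / max(d - 1, 1) - 1.0,  # z
        ),
        dim=-1,
    )                                                             # (B, D, H, W, 3)
    return F.grid_sample(moving, norm, mode="bilinear", align_corners=True)


def dice(pred_mask: torch.Tensor, true_mask: torch.Tensor, eps: float = 1e-6) -> float:
    """Dice overlap between two binary masks (e.g. predicted vs. real GTVp)."""
    inter = (pred_mask * true_mask).sum()
    return float((2 * inter + eps) / (pred_mask.sum() + true_mask.sum() + eps))


# Toy usage with random tensors standing in for CBCT01 and the predicted DVF.
cbct01 = torch.rand(1, 1, 32, 64, 64)
dvf_pred = torch.zeros(1, 3, 32, 64, 64)         # zero field -> identity warp
cbct21_pred = warp_with_dvf(cbct01, dvf_pred)
mse = F.mse_loss(cbct21_pred, cbct01).item()     # compare against the real CBCT21 in practice
print(mse, dice((cbct01 > 0.5).float(), (cbct21_pred > 0.5).float()))
```

In the paper's setting, `cbct21_pred` would be compared against the real CBCT21, and the warped body, GTVp, and GTVn masks against the masks delineated on CBCT21 to obtain the reported MSE, SSIM, and Dice values.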
Related papers
- Analysis of the BraTS 2023 Intracranial Meningioma Segmentation Challenge [44.586530244472655]
We describe the design and results from the BraTS 2023 Intracranial Meningioma Challenge.
The BraTS Meningioma Challenge differed from prior BraTS Glioma challenges in that it focused on meningiomas.
The top ranked team had a lesion-wise median dice similarity coefficient (DSC) of 0.976, 0.976, and 0.964 for enhancing tumor, tumor core, and whole tumor.
arXiv Detail & Related papers (2024-05-16T03:23:57Z)
- Energy-Guided Diffusion Model for CBCT-to-CT Synthesis [8.888473799320593]
Cone Beam CT (CBCT) plays a crucial role in Adaptive Radiation Therapy (ART) by accurately providing radiation treatment when organ anatomy changes occur.
CBCT images suffer from scatter noise and artifacts, which makes it challenging to rely solely on CBCT for precise dose calculation and accurate tissue localization.
We propose an energy-guided diffusion model (EGDiff) and conduct experiments on a chest tumor dataset to generate synthetic CT (sCT) from CBCT.
arXiv Detail & Related papers (2023-08-07T07:23:43Z)
- Improved Prognostic Prediction of Pancreatic Cancer Using Multi-Phase CT by Integrating Neural Distance and Texture-Aware Transformer [37.55853672333369]
This paper proposes a novel learnable neural distance that describes the precise relationship between the tumor and vessels in CT images of different patients.
The developed risk marker was the strongest predictor of overall survival among preoperative factors.
arXiv Detail & Related papers (2023-08-01T12:46:02Z)
- Comparing 3D deformations between longitudinal daily CBCT acquisitions using CNN for head and neck radiotherapy toxicity prediction [1.8406176502821678]
The aim of this study is to demonstrate the clinical value of pre-treatment CBCT acquired daily during radiation therapy for head and neck cancers.
We propose a deformable 3D classification pipeline that includes a component analyzing the Jacobian matrix of the deformation between planning CT and longitudinal CBCT (a minimal Jacobian-determinant sketch is given after this list).
arXiv Detail & Related papers (2023-03-07T15:07:43Z)
- Recurrence-free Survival Prediction under the Guidance of Automatic Gross Tumor Volume Segmentation for Head and Neck Cancers [8.598790229614071]
We developed an automated primary tumor (GTVp) and lymph nodes (GTVn) segmentation method.
We extracted radiomics features from the segmented tumor volume and constructed a multi-modality tumor recurrence-free survival (RFS) prediction model.
arXiv Detail & Related papers (2022-09-22T18:44:57Z)
- Deformable Image Registration using Unsupervised Deep Learning for CBCT-guided Abdominal Radiotherapy [2.142433093974999]
The purpose of this study is to propose an unsupervised deep learning based CBCT-CBCT deformable image registration.
The proposed deformable registration workflow consists of training and inference stages that share the same feed-forward path through a spatial transformation-based network (STN).
The proposed method was evaluated using 100 fractional CBCTs from 20 abdominal cancer patients in the experiments and 105 fractional CBCTs from a cohort of 21 different abdominal cancer patients in a holdout test.
arXiv Detail & Related papers (2022-08-29T15:48:50Z)
- TotalSegmentator: robust segmentation of 104 anatomical structures in CT images [48.50994220135258]
We present a deep learning segmentation model for body CT images.
The model can segment 104 anatomical structures relevant for use cases such as organ volumetry, disease characterization, and surgical or radiotherapy planning.
arXiv Detail & Related papers (2022-08-11T15:16:40Z)
- COVID-Net CT-2: Enhanced Deep Neural Networks for Detection of COVID-19 from Chest CT Images Through Bigger, More Diverse Learning [70.92379567261304]
We introduce COVID-Net CT-2, enhanced deep neural networks for COVID-19 detection from chest CT images.
We leverage explainability to investigate the decision-making behaviour of COVID-Net CT-2.
Results are promising and suggest the strong potential of deep neural networks as an effective tool for computer-aided COVID-19 assessment.
arXiv Detail & Related papers (2021-01-19T03:04:09Z)
- COVIDNet-CT: A Tailored Deep Convolutional Neural Network Design for Detection of COVID-19 Cases from Chest CT Images [75.74756992992147]
We introduce COVIDNet-CT, a deep convolutional neural network architecture that is tailored for detection of COVID-19 cases from chest CT images.
We also introduce COVIDx-CT, a benchmark CT image dataset derived from CT imaging data collected by the China National Center for Bioinformation.
arXiv Detail & Related papers (2020-09-08T15:49:55Z)
- Generalizable Cone Beam CT Esophagus Segmentation Using Physics-Based Data Augmentation [4.5846054721257365]
We developed a semantic physics-based data augmentation method for segmenting the esophagus in planning CT (pCT) and cone-beam CT (CBCT).
191 cases with their pCT and CBCTs were used to train a modified 3D-Unet architecture with a multi-objective loss function specifically designed for soft-tissue organs such as esophagus.
Our physics-based data augmentation spans the realistic noise/artifact spectrum across patient CBCT/pCT data and can generalize well across modalities with the potential to improve the accuracy of treatment setup and response analysis.
arXiv Detail & Related papers (2020-06-28T21:12:09Z)
- Automated Quantification of CT Patterns Associated with COVID-19 from Chest CT [48.785596536318884]
The proposed method takes as input a non-contrasted chest CT and segments the lesions, lungs, and lobes in three dimensions.
The method outputs two combined measures of the severity of lung and lobe involvement, quantifying both the extent of COVID-19 abnormalities and presence of high opacities.
Evaluation of the algorithm is reported on CTs of 200 participants (100 COVID-19 confirmed patients and 100 healthy controls) from institutions in Canada, Europe, and the United States.
arXiv Detail & Related papers (2020-04-02T21:49:14Z)
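As referenced from the "Comparing 3D deformations between longitudinal daily CBCT acquisitions" entry above, the Jacobian analysis of a deformation field can be illustrated with a minimal sketch. This is not code from that paper: NumPy is assumed, as is a voxel-displacement DVF convention; the function name `jacobian_determinant` is illustrative.

```python
# Minimal illustrative sketch: per-voxel Jacobian determinant of a deformation
# field, summarizing local volume change between planning CT and longitudinal
# CBCT. The DVF is assumed to hold voxel displacements, channel-first.
import numpy as np


def jacobian_determinant(dvf: np.ndarray) -> np.ndarray:
    """dvf: (3, D, H, W) displacements in voxels; returns a (D, H, W) det map.

    The mapping is phi(x) = x + u(x), so J = I + grad(u). det(J) > 1 indicates
    local expansion, det(J) < 1 shrinkage, and det(J) <= 0 a folded (non-physical) region.
    """
    # Finite-difference gradients du_i/dx_j along the three spatial axes.
    grads = [np.gradient(dvf[i], axis=(0, 1, 2)) for i in range(3)]
    jac = np.empty(dvf.shape[1:] + (3, 3), dtype=np.float64)
    for i in range(3):
        for j in range(3):
            jac[..., i, j] = grads[i][j] + (1.0 if i == j else 0.0)
    return np.linalg.det(jac)


# Toy usage: a zero displacement field gives det = 1 everywhere (no anatomic change).
det_map = jacobian_determinant(np.zeros((3, 16, 16, 16)))
print(det_map.mean(), int((det_map <= 0).sum()))  # mean volume change, folded-voxel count
```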