Enhancing Synthetic CT from CBCT via Multimodal Fusion: A Study on the Impact of CBCT Quality and Alignment
- URL: http://arxiv.org/abs/2506.08716v1
- Date: Tue, 10 Jun 2025 12:02:16 GMT
- Title: Enhancing Synthetic CT from CBCT via Multimodal Fusion: A Study on the Impact of CBCT Quality and Alignment
- Authors: Maximilian Tschuchnig, Lukas Lamminger, Philipp Steininger, Michael Gadermayr
- Abstract summary: Cone-Beam Computed Tomography (CBCT) is widely used for real-time intraoperative imaging due to its low radiation dose and high acquisition speed. Despite its high resolution, CBCT suffers from significant artifacts and thereby lower visual quality compared to conventional Computed Tomography (CT). A recent approach to mitigate these artifacts is synthetic CT (sCT) generation, translating CBCT volumes into the CT domain.
- Score: 0.19999259391104385
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Cone-Beam Computed Tomography (CBCT) is widely used for real-time intraoperative imaging due to its low radiation dose and high acquisition speed. However, despite its high resolution, CBCT suffers from significant artifacts and thereby lower visual quality compared to conventional Computed Tomography (CT). A recent approach to mitigate these artifacts is synthetic CT (sCT) generation, translating CBCT volumes into the CT domain. In this work, we enhance sCT generation through multimodal learning, integrating intraoperative CBCT with preoperative CT. Beyond validation on two real-world datasets, we use a versatile synthetic dataset to analyze how CBCT-CT alignment and CBCT quality affect sCT quality. The results demonstrate that multimodal sCT consistently outperforms unimodal baselines, with the most significant gains observed in well-aligned, low-quality CBCT-CT cases. Finally, we demonstrate that these findings are highly reproducible in real-world clinical datasets.
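The abstract describes fusing intraoperative CBCT with preoperative CT as a multimodal input. A minimal sketch of the simplest form of such fusion, early (channel-wise) fusion, is shown below; the function name and toy volume sizes are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def fuse_channels(cbct: np.ndarray, ct: np.ndarray) -> np.ndarray:
    """Stack a CBCT volume and its (roughly aligned) preoperative CT
    along a leading channel axis: the simplest early-fusion input
    that a 3D segmentation or translation network could consume."""
    assert cbct.shape == ct.shape, "volumes must share the same voxel grid"
    return np.stack([cbct, ct], axis=0)  # shape: (2, D, H, W)

# Toy volumes standing in for registered intraoperative/preoperative scans.
cbct = np.random.rand(32, 64, 64)
ct = np.random.rand(32, 64, 64)
fused = fuse_channels(cbct, ct)  # (2, 32, 64, 64) two-channel input
```

Channel-wise stacking assumes the two volumes are at least roughly aligned; the paper's analysis of alignment quality addresses exactly when this assumption helps or hurts.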
Related papers
- Enhancing Synthetic CT from CBCT via Multimodal Fusion and End-To-End Registration [0.19999259391104385]
Cone-Beam Computed Tomography (CBCT) is widely used for intraoperative imaging. CBCT images typically suffer from artifacts and lower visual quality compared to conventional Computed Tomography (CT).
arXiv Detail & Related papers (2025-07-08T15:10:04Z)
- ARTInp: CBCT-to-CT Image Inpainting and Image Translation in Radiotherapy [1.70645147263353]
ARTInp is a novel deep-learning framework combining image inpainting and CBCT-to-CT translation. We trained ARTInp on a dataset of paired CBCT and CT images from the SynthRad 2023 challenge.
arXiv Detail & Related papers (2025-02-07T13:04:25Z)
- Synthetic CT image generation from CBCT: A Systematic Review [44.01505745127782]
Generation of synthetic CT (sCT) images from cone-beam CT (CBCT) data using deep learning methodologies represents a significant advancement in radiation oncology. A total of 35 relevant studies were identified and analyzed, revealing the prevalence of deep learning approaches in the generation of sCT.
arXiv Detail & Related papers (2025-01-22T13:54:07Z)
- Multimodal Learning With Intraoperative CBCT & Variably Aligned Preoperative CT Data To Improve Segmentation [0.21847754147782888]
Cone-beam computed tomography (CBCT) is an important tool facilitating computer-aided interventions.
While the degraded image quality can affect downstream segmentation, the availability of high quality, preoperative scans represents potential for improvements.
We propose a multimodal learning method that fuses roughly aligned CBCT and CT scans and investigate the effect of CBCT quality and misalignment on the final segmentation performance.
arXiv Detail & Related papers (2024-06-17T15:31:54Z)
- WIA-LD2ND: Wavelet-based Image Alignment for Self-supervised Low-Dose CT Denoising [74.14134385961775]
We introduce a novel self-supervised CT image denoising method called WIA-LD2ND, which uses only normal-dose CT (NDCT) data.
WIA-LD2ND comprises two modules: Wavelet-based Image Alignment (WIA) and Frequency-Aware Multi-scale Loss (FAM).
arXiv Detail & Related papers (2024-03-18T11:20:11Z)
- A multi-channel cycleGAN for CBCT to CT synthesis [0.0]
Image synthesis is used to generate synthetic CTs (sCTs) from on-treatment cone-beam CTs (CBCTs).
Our contribution focuses on the second task, CBCT-to-sCT synthesis.
By leveraging a multi-channel input to emphasize specific image features, our approach effectively addresses some of the challenges inherent in CBCT imaging.
arXiv Detail & Related papers (2023-12-04T16:40:53Z)
- Energy-Guided Diffusion Model for CBCT-to-CT Synthesis [8.888473799320593]
Cone Beam CT (CBCT) plays a crucial role in Adaptive Radiation Therapy (ART) by enabling accurate treatment delivery when organ anatomy changes occur.
However, CBCT images suffer from scatter noise and artifacts, making it challenging to rely on CBCT alone for precise dose calculation and accurate tissue localization.
We propose an energy-guided diffusion model (EGDiff) and conduct experiments on a chest tumor dataset to generate synthetic CT (sCT) from CBCT.
arXiv Detail & Related papers (2023-08-07T07:23:43Z)
- SNAF: Sparse-view CBCT Reconstruction with Neural Attenuation Fields [71.84366290195487]
We propose SNAF for sparse-view CBCT reconstruction by learning the neural attenuation fields.
Our approach achieves superior performance in terms of high reconstruction quality (30+ PSNR) with only 20 input views.
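The "30+ PSNR" figure above refers to peak signal-to-noise ratio, the standard fidelity metric for reconstruction quality. A minimal sketch of its computation (assuming intensities normalized to a known data range) is:

```python
import numpy as np

def psnr(ref: np.ndarray, test: np.ndarray, data_range: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB: 10 * log10(data_range^2 / MSE).
    Higher is better; identical images give infinity."""
    mse = np.mean((ref - test) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(data_range**2 / mse)

ref = np.zeros((4, 4))
noisy = np.full((4, 4), 0.1)    # uniform error of 0.1 -> MSE = 0.01
score = psnr(ref, noisy)        # 10 * log10(1 / 0.01) = 20 dB
```

A reconstruction reported at 30+ dB therefore has a mean squared error below 0.001 of the squared data range.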
arXiv Detail & Related papers (2022-11-30T14:51:14Z)
- CyTran: A Cycle-Consistent Transformer with Multi-Level Consistency for Non-Contrast to Contrast CT Translation [56.622832383316215]
We propose a novel approach to translate unpaired contrast computed tomography (CT) scans to non-contrast CT scans.
Our approach is based on cycle-consistent generative adversarial convolutional transformers, for short, CyTran.
Our empirical results show that CyTran outperforms all competing methods.
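Cycle consistency, the core idea behind CyTran and the multi-channel cycleGAN above, penalizes a pair of translators when mapping an image to the other domain and back fails to recover the original. A toy sketch with stand-in "generators" (the scaling functions are illustrative, not the papers' models):

```python
import numpy as np

def cycle_consistency_loss(x: np.ndarray, G, F) -> float:
    """L1 cycle loss: x -> G(x) -> F(G(x)) should reconstruct x.
    In CBCT-to-CT work, G and F would be the two domain translators."""
    return float(np.mean(np.abs(F(G(x)) - x)))

# Toy translators that happen to be exact inverses of each other.
G = lambda v: 2.0 * v   # stands in for the CBCT -> CT generator
F = lambda v: 0.5 * v   # stands in for the CT -> CBCT generator
x = np.ones((4, 4))
loss = cycle_consistency_loss(x, G, F)  # exact inverses -> loss is 0.0
```

In training, this loss is added to the adversarial losses of both generators, which is what lets cycleGAN-style methods learn from unpaired scans.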
arXiv Detail & Related papers (2021-10-12T23:25:03Z)
- Multitask 3D CBCT-to-CT Translation and Organs-at-Risk Segmentation Using Physics-Based Data Augmentation [4.3971310109651665]
In current clinical practice, noisy and artifact-ridden weekly cone-beam computed tomography (CBCT) images are only used for patient setup during radiotherapy.
Treatment planning is done once at the beginning of the treatment using high-quality planning CT (pCT) images and manual contours for organs-at-risk (OARs) structures.
If the quality of the weekly CBCT images can be improved while simultaneously segmenting OAR structures, this can provide critical information for adapting radiotherapy mid-treatment and for deriving biomarkers for treatment response.
arXiv Detail & Related papers (2021-03-09T19:51:44Z)
- COVIDNet-CT: A Tailored Deep Convolutional Neural Network Design for Detection of COVID-19 Cases from Chest CT Images [75.74756992992147]
We introduce COVIDNet-CT, a deep convolutional neural network architecture that is tailored for detection of COVID-19 cases from chest CT images.
We also introduce COVIDx-CT, a benchmark CT image dataset derived from CT imaging data collected by the China National Center for Bioinformation.
arXiv Detail & Related papers (2020-09-08T15:49:55Z)
- Detecting Pancreatic Ductal Adenocarcinoma in Multi-phase CT Scans via Alignment Ensemble [77.5625174267105]
Pancreatic ductal adenocarcinoma (PDAC) is one of the most lethal cancers.
Multiple contrast phases provide more information than a single phase, but they are unaligned and inhomogeneous in texture.
We suggest an ensemble of all these alignments as a promising way to boost the performance of PDAC detection.
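The "ensemble of alignments" above amounts to aggregating per-phase predictions after each phase has been registered. A minimal sketch of the aggregation step, using simple averaging of detection probability maps (the averaging choice is an illustrative assumption, not necessarily the paper's exact scheme):

```python
import numpy as np

def ensemble_average(prob_maps: list) -> np.ndarray:
    """Average per-phase detection probability maps after alignment.
    Each map gives a per-voxel tumor probability on a shared grid."""
    return np.mean(np.stack(prob_maps, axis=0), axis=0)

# Toy maps from two aligned phases: one confident, one uncertain.
phase_a = np.zeros((2, 2))
phase_b = np.ones((2, 2))
combined = ensemble_average([phase_a, phase_b])  # 0.5 everywhere
```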
arXiv Detail & Related papers (2020-03-18T19:06:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.