Enhancing Synthetic CT from CBCT via Multimodal Fusion and End-To-End Registration
- URL: http://arxiv.org/abs/2507.06067v1
- Date: Tue, 08 Jul 2025 15:10:04 GMT
- Title: Enhancing Synthetic CT from CBCT via Multimodal Fusion and End-To-End Registration
- Authors: Maximilian Tschuchnig, Lukas Lamminger, Philipp Steininger, Michael Gadermayr
- Abstract summary: Cone-Beam Computed Tomography (CBCT) is widely used for intraoperative imaging. CBCT images typically suffer from artifacts and lower visual quality compared to conventional Computed Tomography (CT).
- Score: 0.19999259391104385
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Cone-Beam Computed Tomography (CBCT) is widely used for intraoperative imaging due to its rapid acquisition and low radiation dose. However, CBCT images typically suffer from artifacts and lower visual quality compared to conventional Computed Tomography (CT). A promising solution is synthetic CT (sCT) generation, where CBCT volumes are translated into the CT domain. In this work, we enhance sCT generation through multimodal learning by jointly leveraging intraoperative CBCT and preoperative CT data. To overcome the inherent misalignment between modalities, we introduce an end-to-end learnable registration module within the sCT pipeline. This model is evaluated on a controlled synthetic dataset, allowing precise manipulation of data quality and alignment parameters. Further, we validate its robustness and generalizability on two real-world clinical datasets. Experimental results demonstrate that integrating registration in multimodal sCT generation improves sCT quality, outperforming baseline multimodal methods in 79 out of 90 evaluation settings. Notably, the improvement is most significant in cases where CBCT quality is low and the preoperative CT is moderately misaligned.
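To make the pipeline described in the abstract concrete, the following is a minimal, illustrative PyTorch sketch of multimodal sCT generation with an end-to-end learnable registration module. It is not the authors' implementation: the module names (AffineRegistration, FusionSCTGenerator), the affine parameterisation, and the tiny convolutional generator standing in for a 3D U-Net are assumptions chosen for brevity. The point it demonstrates is that gradients from the synthesis loss flow back into the registration network, so alignment and CBCT-to-CT translation are learned jointly.

```python
# Minimal sketch (not the authors' code): multimodal sCT generation with an
# end-to-end learnable registration module. Assumes 3D volumes and an affine
# alignment of the preoperative CT to the intraoperative CBCT; layer sizes
# and the affine parameterisation are illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AffineRegistration(nn.Module):
    """Predicts a 3x4 affine transform that warps the preoperative CT onto the CBCT."""

    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(2, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
        )
        self.fc = nn.Linear(16, 12)
        # Initialise to the identity transform so training starts from "no warp".
        nn.init.zeros_(self.fc.weight)
        self.fc.bias.data = torch.tensor(
            [1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0], dtype=torch.float32)

    def forward(self, cbct, ct):
        theta = self.fc(self.encoder(torch.cat([cbct, ct], dim=1))).view(-1, 3, 4)
        grid = F.affine_grid(theta, cbct.shape, align_corners=False)
        return F.grid_sample(ct, grid, align_corners=False)  # CT aligned to CBCT


class FusionSCTGenerator(nn.Module):
    """Fuses the CBCT with the registered CT and maps the pair to a synthetic CT."""

    def __init__(self):
        super().__init__()
        self.register = AffineRegistration()
        self.generator = nn.Sequential(  # stand-in for a 3D U-Net style generator
            nn.Conv3d(2, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 1, 3, padding=1),
        )

    def forward(self, cbct, ct):
        ct_aligned = self.register(cbct, ct)
        return self.generator(torch.cat([cbct, ct_aligned], dim=1))


# End-to-end training step: the registration module receives gradients through
# the synthesis loss, so alignment and translation are optimised jointly.
model = FusionSCTGenerator()
cbct = torch.randn(1, 1, 32, 32, 32)    # intraoperative CBCT volume
ct = torch.randn(1, 1, 32, 32, 32)      # misaligned preoperative CT
target = torch.randn(1, 1, 32, 32, 32)  # ground-truth CT (dummy here)
loss = F.l1_loss(model(cbct, ct), target)
loss.backward()
```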
Related papers
- Enhancing Synthetic CT from CBCT via Multimodal Fusion: A Study on the Impact of CBCT Quality and Alignment [0.19999259391104385]
Cone-Beam Computed Tomography (CBCT) is widely used for real-time intraoperative imaging due to its low radiation dose and high acquisition speed. Despite its high resolution, CBCT suffers from significant artifacts and thus lower visual quality compared to conventional Computed Tomography (CT). A recent approach to mitigate these artifacts is synthetic CT (sCT) generation, translating CBCT volumes into the CT domain.
arXiv Detail & Related papers (2025-06-10T12:02:16Z)
- Synthetic CT image generation from CBCT: A Systematic Review [44.01505745127782]
Generation of synthetic CT (sCT) images from cone-beam CT (CBCT) data using deep learning methodologies represents a significant advancement in radiation oncology. A total of 35 relevant studies were identified and analyzed, revealing the prevalence of deep learning approaches in the generation of sCT.
arXiv Detail & Related papers (2025-01-22T13:54:07Z)
- Initial Study On Improving Segmentation By Combining Preoperative CT And Intraoperative CBCT Using Synthetic Data [0.21847754147782888]
Cone-beam computed tomography (CBCT) can be used to facilitate computer-assisted interventions. The availability of high-quality preoperative scans offers potential for improvements. We propose a multimodal learning method that fuses roughly aligned CBCT and CT scans.
arXiv Detail & Related papers (2024-12-03T09:08:38Z)
- Improving Cone-Beam CT Image Quality with Knowledge Distillation-Enhanced Diffusion Model in Imbalanced Data Settings [6.157230849293829]
Daily cone-beam CT (CBCT) imaging, pivotal for therapy adjustment, falls short in tissue density accuracy.
We maximize the use of CBCT data acquired during therapy, complemented by sparse paired fan-beam CTs.
Our approach shows promise in generating high-quality CT images from CBCT scans in RT.
arXiv Detail & Related papers (2024-09-19T07:56:06Z)
- Multimodal Learning With Intraoperative CBCT & Variably Aligned Preoperative CT Data To Improve Segmentation [0.21847754147782888]
Cone-beam computed tomography (CBCT) is an important tool facilitating computer-aided interventions.
While the degraded image quality can affect downstream segmentation, the availability of high-quality preoperative scans offers potential for improvements.
We propose a multimodal learning method that fuses roughly aligned CBCT and CT scans and investigate the effect of CBCT quality and misalignment on the final segmentation performance.
arXiv Detail & Related papers (2024-06-17T15:31:54Z)
- WIA-LD2ND: Wavelet-based Image Alignment for Self-supervised Low-Dose CT Denoising [74.14134385961775]
We introduce WIA-LD2ND, a novel self-supervised CT image denoising method that uses only normal-dose CT (NDCT) data.
WIA-LD2ND comprises two modules: Wavelet-based Image Alignment (WIA) and Frequency-Aware Multi-scale Loss (FAM).
arXiv Detail & Related papers (2024-03-18T11:20:11Z)
- A multi-channel cycleGAN for CBCT to CT synthesis [0.0]
Image synthesis is used to generate synthetic CTs from on-treatment cone-beam CTs (CBCTs).
Our contribution focuses on the second task, CBCT-to-sCT synthesis.
By leveraging a multi-channel input to emphasize specific image features, our approach effectively addresses some of the challenges inherent in CBCT imaging.
arXiv Detail & Related papers (2023-12-04T16:40:53Z)
- InDuDoNet+: A Model-Driven Interpretable Dual Domain Network for Metal Artifact Reduction in CT Images [53.4351366246531]
We construct a novel interpretable dual domain network, termed InDuDoNet+, into which the CT imaging process is finely embedded.
We analyze the CT values among different tissues and merge the prior observations into a prior network for our InDuDoNet+, which significantly improves its generalization performance.
arXiv Detail & Related papers (2021-12-23T15:52:37Z)
- CyTran: A Cycle-Consistent Transformer with Multi-Level Consistency for Non-Contrast to Contrast CT Translation [56.622832383316215]
We propose a novel approach to translate unpaired contrast computed tomography (CT) scans to non-contrast CT scans.
Our approach is based on cycle-consistent generative adversarial convolutional transformers, for short, CyTran.
Our empirical results show that CyTran outperforms all competing methods.
arXiv Detail & Related papers (2021-10-12T23:25:03Z)
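The CyTran entry above rests on cycle consistency, which is what allows training on unpaired contrast and non-contrast scans. A minimal sketch of that objective follows; the generators are trivial placeholders (CyTran itself uses convolutional transformers and adds adversarial and multi-level consistency terms), so this illustrates only the cycle-consistency idea, not the authors' implementation.

```python
# Minimal sketch of the cycle-consistency objective behind CycleGAN-style
# translators such as CyTran (contrast <-> non-contrast CT). Placeholder
# single-layer generators stand in for the real networks.
import torch
import torch.nn as nn
import torch.nn.functional as F

G_c2n = nn.Conv2d(1, 1, 3, padding=1)  # contrast -> non-contrast (placeholder)
G_n2c = nn.Conv2d(1, 1, 3, padding=1)  # non-contrast -> contrast (placeholder)

contrast = torch.randn(4, 1, 64, 64)      # unpaired contrast CT slices
non_contrast = torch.randn(4, 1, 64, 64)  # unpaired non-contrast CT slices

# Translating to the other domain and back should reproduce the input,
# which is what lets the model learn from unpaired scans.
cycle_loss = (
    F.l1_loss(G_n2c(G_c2n(contrast)), contrast)
    + F.l1_loss(G_c2n(G_n2c(non_contrast)), non_contrast)
)
cycle_loss.backward()
```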
- COVIDNet-CT: A Tailored Deep Convolutional Neural Network Design for Detection of COVID-19 Cases from Chest CT Images [75.74756992992147]
We introduce COVIDNet-CT, a deep convolutional neural network architecture that is tailored for detection of COVID-19 cases from chest CT images.
We also introduce COVIDx-CT, a benchmark CT image dataset derived from CT imaging data collected by the China National Center for Bioinformation.
arXiv Detail & Related papers (2020-09-08T15:49:55Z)
- Detecting Pancreatic Ductal Adenocarcinoma in Multi-phase CT Scans via Alignment Ensemble [77.5625174267105]
Pancreatic ductal adenocarcinoma (PDAC) is one of the most lethal cancers.
Multiple phases provide more information than a single phase, but they are unaligned and inhomogeneous in texture.
We suggest an ensemble of all these alignments as a promising way to boost the performance of PDAC detection.
arXiv Detail & Related papers (2020-03-18T19:06:27Z)