Flow Matching for Conditional MRI-CT and CBCT-CT Image Synthesis
- URL: http://arxiv.org/abs/2510.04823v1
- Date: Mon, 06 Oct 2025 14:07:03 GMT
- Title: Flow Matching for Conditional MRI-CT and CBCT-CT Image Synthesis
- Authors: Arnela Hadzic, Simon Johannes Joham, Martin Urschler
- Abstract summary: A Flow Matching framework is used to generate synthetic CT from MRI or CBCT images. Models are trained for MRI $\rightarrow$ sCT and CBCT $\rightarrow$ sCT across three anatomical regions. The results indicate that the method accurately reconstructs global anatomical structures. Future work will explore patch-based training and latent-space flow models to improve resolution and local structural fidelity.
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Generating synthetic CT (sCT) from MRI or CBCT plays a crucial role in enabling MRI-only and CBCT-based adaptive radiotherapy, improving treatment precision while reducing patient radiation exposure. To address this task, we adopt a fully 3D Flow Matching (FM) framework, motivated by recent work demonstrating FM's efficiency in producing high-quality images. In our approach, a Gaussian noise volume is transformed into an sCT image by integrating a learned FM velocity field, conditioned on features extracted from the input MRI or CBCT using a lightweight 3D encoder. We evaluated the method on the SynthRAD2025 Challenge benchmark, training separate models for MRI $\rightarrow$ sCT and CBCT $\rightarrow$ sCT across three anatomical regions: abdomen, head and neck, and thorax. Validation and testing were performed through the challenge submission system. The results indicate that the method accurately reconstructs global anatomical structures; however, preservation of fine details was limited, primarily due to the relatively low training resolution imposed by memory and runtime constraints. Future work will explore patch-based training and latent-space flow models to improve resolution and local structural fidelity.
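The sampling procedure the abstract describes — transforming a Gaussian noise volume into an sCT by integrating a learned, conditioned velocity field — can be sketched as a forward-Euler ODE integrator. This is a minimal illustration, not the paper's implementation: `velocity_fn` stands in for the trained 3D network, and `cond` stands in for the features produced by the lightweight MRI/CBCT encoder; all names and step counts are hypothetical.

```python
import numpy as np

def euler_flow_sampler(velocity_fn, cond, shape, num_steps=50, seed=0):
    """Integrate a flow-matching velocity field from t=0 (noise) to t=1 (image).

    velocity_fn(x, t, cond) returns dx/dt; in the paper's setting this would be
    the trained 3D network conditioned on MRI/CBCT encoder features.
    """
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(shape)            # Gaussian noise volume at t=0
    dt = 1.0 / num_steps
    for k in range(num_steps):
        t = k * dt
        x = x + dt * velocity_fn(x, t, cond)  # forward Euler step
    return x
```

As a sanity check, plugging in the analytic velocity of a straight-line probability path toward a known target, `v(x, t) = (target - x) / (1 - t)`, makes the Euler integrator transport the noise volume exactly onto the target by `t = 1`; in practice the network approximates such a field from data.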
Related papers
- EqDiff-CT: Equivariant Conditional Diffusion model for CT Image Synthesis from CBCT [43.92108185590778]
Cone-beam computed tomography (CBCT) is widely used for image-guided radiotherapy (IGRT). We propose a novel diffusion-based conditional generative model, coined EqDiff-CT, to synthesize high-quality CT images from CBCT.
arXiv Detail & Related papers (2025-09-26T05:51:59Z) - 3D Wavelet Latent Diffusion Model for Whole-Body MR-to-CT Modality Translation [13.252652406393205]
Existing MR-to-CT methods for whole-body imaging often suffer from poor spatial alignment between the generated CT and input MR images. We present a novel 3D Wavelet Latent Diffusion Model (3D-WLDM) that addresses these limitations. By incorporating a Wavelet Residual Module into the encoder-decoder architecture, we enhance the capture and reconstruction of fine-scale features across image and latent spaces.
arXiv Detail & Related papers (2025-07-14T06:17:05Z) - JSover: Joint Spectrum Estimation and Multi-Material Decomposition from Single-Energy CT Projections [45.14515691206885]
Multi-material decomposition (MMD) enables quantitative reconstruction of tissue compositions in the human body. Traditional MMD typically requires spectral CT scanners and pre-measured X-ray energy spectra, significantly limiting clinical applicability. This paper proposes JSover, a fundamentally reformulated one-step SEMMD framework that jointly reconstructs multi-material compositions and estimates the energy spectrum directly from SECT projections.
arXiv Detail & Related papers (2025-05-12T23:32:21Z) - ZECO: ZeroFusion Guided 3D MRI Conditional Generation [11.645873358288648]
ZECO is a ZeroFusion-guided 3D MRI conditional generation framework. It extracts, compresses, and generates high-fidelity MRI images with corresponding 3D segmentation masks. ZECO outperforms state-of-the-art models in both quantitative and qualitative evaluations on brain MRI datasets.
arXiv Detail & Related papers (2025-03-24T00:04:52Z) - Synthetic CT image generation from CBCT: A Systematic Review [44.01505745127782]
Generation of synthetic CT (sCT) images from cone-beam CT (CBCT) data using deep learning methodologies represents a significant advancement in radiation oncology. A total of 35 relevant studies were identified and analyzed, revealing the prevalence of deep learning approaches in the generation of sCT.
arXiv Detail & Related papers (2025-01-22T13:54:07Z) - Unsupervised Multi-Parameter Inverse Solving for Reducing Ring Artifacts in 3D X-Ray CBCT [51.95884144860506]
Ring artifacts are prevalent in 3D cone-beam computed tomography (CBCT). Existing state-of-the-art (SOTA) ring artifact reduction (RAR) methods rely on supervised learning with large-scale paired CT datasets. In this work, we propose Riner, a new unsupervised RAR method.
arXiv Detail & Related papers (2024-12-08T08:22:58Z) - DiffuX2CT: Diffusion Learning to Reconstruct CT Images from Biplanar X-Rays [41.393567374399524]
We propose DiffuX2CT, which models CT reconstruction from ultra-sparse X-rays as a conditional diffusion process.
By doing so, DiffuX2CT achieves structure-controllable reconstruction, which enables 3D structural information to be recovered from 2D X-rays.
As an extra contribution, we collect a real-world lumbar CT dataset, called LumbarV, as a new benchmark to verify the clinical significance and performance of CT reconstruction from X-rays.
arXiv Detail & Related papers (2024-07-18T14:20:04Z) - UMedNeRF: Uncertainty-aware Single View Volumetric Rendering for Medical
Neural Radiance Fields [38.62191342903111]
We propose an Uncertainty-aware MedNeRF (UMedNeRF) network based on generated radiance fields.
We show the results of CT projection rendering from a single X-ray and compare our method with other approaches based on generated radiance fields.
arXiv Detail & Related papers (2023-11-10T02:47:15Z) - Synthetic CT Generation from MRI using 3D Transformer-based Denoising
Diffusion Model [2.232713445482175]
Magnetic resonance imaging (MRI)-based synthetic computed tomography (sCT) simplifies radiation therapy treatment planning.
We propose an MRI-to-CT transformer-based denoising diffusion probabilistic model (MC-DDPM) to transform MRI into high-quality sCT.
arXiv Detail & Related papers (2023-05-31T00:32:00Z) - Joint Rigid Motion Correction and Sparse-View CT via Self-Calibrating
Neural Field [37.86878619100209]
NeRF has received wide attention in Sparse-View (SV) CT reconstruction problems as a self-supervised deep learning framework.
Existing NeRF-based SVCT methods strictly suppose there is completely no relative motion during the CT acquisition.
This work proposes a self-calibrating neural field that recovers the artifacts-free image from the rigid motion-corrupted SV measurement.
arXiv Detail & Related papers (2022-10-23T13:55:07Z) - Synthetic CT Skull Generation for Transcranial MR Imaging-Guided Focused
Ultrasound Interventions with Conditional Adversarial Networks [5.921808547303054]
Transcranial MRI-guided focused ultrasound (TcMRgFUS) is a therapeutic ultrasound method that focuses sound through the skull to a small region noninvasively under MRI guidance.
To accurately target ultrasound through the skull, the transmitted waves must constructively interfere at the target region.
arXiv Detail & Related papers (2022-02-21T11:34:29Z) - Frequency-Supervised MR-to-CT Image Synthesis [23.47506325756089]
This paper strives to generate a synthetic computed tomography (CT) image from a magnetic resonance (MR) image.
We find that all existing approaches share a common limitation: reconstruction breaks down in and around the high-frequency parts of CT images.
We introduce frequency-supervised deep networks to explicitly enhance high-frequency MR-to-CT image reconstruction.
arXiv Detail & Related papers (2021-07-19T15:18:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.