PRO: Projection Domain Synthesis for CT Imaging
- URL: http://arxiv.org/abs/2506.13443v2
- Date: Wed, 18 Jun 2025 07:33:50 GMT
- Title: PRO: Projection Domain Synthesis for CT Imaging
- Authors: Kang Chen, Bin Huang, Xuebin Yang, Junyan Zhang, Qiegen Liu
- Abstract summary: We present PRO, a projection domain synthesis model for CT imaging. Unlike previous approaches, PRO learns rich structural representations from raw projection data. PRO functions as a foundation model, capable of generalizing across diverse downstream tasks.
- Score: 11.605647208305857
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Synthesizing high quality CT projection data remains a significant challenge due to the limited availability of annotated data and the complex nature of CT imaging. In this work, we present PRO, a projection domain synthesis foundation model for CT imaging. To the best of our knowledge, this is the first study that performs CT synthesis in the projection domain. Unlike previous approaches that operate in the image domain, PRO learns rich structural representations from raw projection data and leverages anatomical text prompts for controllable synthesis. This projection domain strategy enables more faithful modeling of underlying imaging physics and anatomical structures. Moreover, PRO functions as a foundation model, capable of generalizing across diverse downstream tasks by adjusting its generative behavior via prompt inputs. Experimental results demonstrated that incorporating our synthesized data significantly improves performance across multiple downstream tasks, including low-dose and sparse-view reconstruction. These findings underscore the versatility and scalability of PRO in data generation for various CT applications. These results highlight the potential of projection domain synthesis as a powerful tool for data augmentation and robust CT imaging. Our source code is publicly available at: https://github.com/yqx7150/PRO.
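The abstract's distinction between the image domain and the projection domain is the core idea: CT scanners acquire line-integral measurements (a sinogram), and the reconstructed image is derived from them. As a minimal, hypothetical illustration of what "projection domain" data looks like (this is a toy parallel-beam Radon transform in NumPy, not the PRO model or any code from the paper):

```python
import numpy as np

def radon(image, angles_deg):
    """Toy parallel-beam Radon transform via nearest-neighbour rotation.

    Returns a sinogram of shape (num_angles, detector_bins): one row of
    line integrals (column sums of the rotated image) per view angle.
    """
    n = image.shape[0]
    c = (n - 1) / 2.0
    ys, xs = np.mgrid[0:n, 0:n]
    xs, ys = xs - c, ys - c  # centre the sampling grid
    sino = np.zeros((len(angles_deg), n))
    for i, theta in enumerate(angles_deg):
        t = np.deg2rad(theta)
        # rotate the sampling grid, then integrate along detector columns
        xr = np.cos(t) * xs + np.sin(t) * ys + c
        yr = -np.sin(t) * xs + np.cos(t) * ys + c
        xi = np.clip(np.rint(xr).astype(int), 0, n - 1)
        yi = np.clip(np.rint(yr).astype(int), 0, n - 1)
        sino[i] = image[yi, xi].sum(axis=0)
    return sino

# toy phantom: a centred square of ones on a 64x64 grid
phantom = np.zeros((64, 64))
phantom[24:40, 24:40] = 1.0
sinogram = radon(phantom, np.arange(0, 180, 10))
print(sinogram.shape)  # (18, 64)
```

A projection-domain synthesis model such as PRO generates data in the `sinogram` representation rather than the `phantom` representation, which is why it can respect the acquisition physics (e.g. dose and view sampling) that downstream low-dose and sparse-view reconstruction tasks depend on.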
Related papers
- CAPRI-CT: Causal Analysis and Predictive Reasoning for Image Quality Optimization in Computed Tomography [2.422970122886921]
CAPRI-CT is a causal-aware deep learning framework for Causal Analysis and Predictive Reasoning for Image Quality Optimization in CT imaging. It integrates image data with acquisition metadata to model the underlying causal relationships that influence image quality. It is trained and validated using an ensemble learning approach, achieving strong predictive performance.
arXiv Detail & Related papers (2025-07-23T11:23:02Z) - Tomographic Foundation Model -- FORCE: Flow-Oriented Reconstruction Conditioning Engine [9.228750443979733]
Deep learning has significantly advanced CT image reconstruction. Deep learning methods can perform well with approximately paired data, but they inherently carry the risk of hallucination. We propose a novel CT framework: Flow-Oriented Reconstruction Conditioning Engine (FORCE).
arXiv Detail & Related papers (2025-06-02T18:25:12Z) - HistoSPACE: Histology-Inspired Spatial Transcriptome Prediction And Characterization Engine [0.0]
The HistoSPACE model explores the diversity of histological images available with ST data to extract molecular insights from tissue images.
The model demonstrates significant efficiency compared to contemporary algorithms, achieving a correlation of 0.56 in leave-one-out cross-validation.
arXiv Detail & Related papers (2024-08-07T07:12:52Z) - MedSyn: Text-guided Anatomy-aware Synthesis of High-Fidelity 3D CT Images [22.455833806331384]
This paper introduces an innovative methodology for producing high-quality 3D lung CT images guided by textual information.
Current state-of-the-art approaches are limited to low-resolution outputs and underutilize radiology reports' abundant information.
arXiv Detail & Related papers (2023-10-05T14:16:22Z) - GM-NeRF: Learning Generalizable Model-based Neural Radiance Fields from Multi-view Images [79.39247661907397]
We introduce an effective framework Generalizable Model-based Neural Radiance Fields to synthesize free-viewpoint images.
Specifically, we propose a geometry-guided attention mechanism to register the appearance code from multi-view 2D images to a geometry proxy.
arXiv Detail & Related papers (2023-03-24T03:32:02Z) - Orientation-Shared Convolution Representation for CT Metal Artifact Learning [63.67718355820655]
During X-ray computed tomography (CT) scanning, metallic implants carried by patients often cause adverse artifacts.
Existing deep-learning-based methods have achieved promising reconstruction performance.
We propose an orientation-shared convolution representation strategy to adapt to the physical prior structures of artifacts.
arXiv Detail & Related papers (2022-12-26T13:56:12Z) - OADAT: Experimental and Synthetic Clinical Optoacoustic Data for Standardized Image Processing [62.993663757843464]
Optoacoustic (OA) imaging is based on excitation of biological tissues with nanosecond-duration laser pulses followed by detection of ultrasound waves generated via light-absorption-mediated thermoelastic expansion.
OA imaging features a powerful combination between rich optical contrast and high resolution in deep tissues.
However, no standardized datasets generated with different experimental set-ups and associated processing methods are available to facilitate advances in broader clinical applications of OA.
arXiv Detail & Related papers (2022-06-17T08:11:26Z) - InDuDoNet+: A Model-Driven Interpretable Dual Domain Network for Metal Artifact Reduction in CT Images [53.4351366246531]
We construct a novel interpretable dual domain network, termed InDuDoNet+, into which the CT imaging process is finely embedded.
We analyze the CT values among different tissues and merge these prior observations into a prior network for InDuDoNet+, which significantly improves its generalization performance.
arXiv Detail & Related papers (2021-12-23T15:52:37Z) - Image Synthesis for Data Augmentation in Medical CT using Deep Reinforcement Learning [31.677682150726383]
We show that our method bears high promise for generating novel and anatomically accurate high resolution CT images at large and diverse quantities.
Our approach is specifically designed to work even with small image datasets, which is desirable given the limited amount of image data available to many researchers.
arXiv Detail & Related papers (2021-03-18T19:47:11Z) - Fed-Sim: Federated Simulation for Medical Imaging [131.56325440976207]
We introduce a physics-driven generative approach that consists of two learnable neural modules.
We show that our data synthesis framework improves the downstream segmentation performance on several datasets.
arXiv Detail & Related papers (2020-09-01T19:17:46Z) - Pathological Retinal Region Segmentation From OCT Images Using Geometric Relation Based Augmentation [84.7571086566595]
We propose improvements over previous GAN-based medical image synthesis methods by jointly encoding the intrinsic relationship of geometry and shape.
The proposed method outperforms state-of-the-art segmentation methods on the public RETOUCH dataset, which contains images captured with different acquisition procedures.
arXiv Detail & Related papers (2020-03-31T11:50:43Z)