PET Tracer Separation Using Conditional Diffusion Transformer with Multi-latent Space Learning
- URL: http://arxiv.org/abs/2506.16934v1
- Date: Fri, 20 Jun 2025 11:55:35 GMT
- Title: PET Tracer Separation Using Conditional Diffusion Transformer with Multi-latent Space Learning
- Authors: Bin Huang, Feihong Xu, Xinchong Shi, Shan Huang, Binxuan Li, Fei Li, Qiegen Liu
- Abstract summary: A texture conditional diffusion transformer model (MS-CDT) is proposed for PET tracer separation. To the best of our knowledge, this is the first attempt to use texture conditioning and a multi-latent space for tracer separation in PET imaging. Experimental results indicate that MS-CDT achieved competitive performance in terms of image quality and preservation of clinically relevant information.
- Score: 15.40743587518343
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In clinical practice, single-radiotracer positron emission tomography (PET) is commonly used for imaging. Multi-tracer PET imaging can provide supplementary information from radiotracers that are sensitive to physiological function changes, enabling a more comprehensive characterization of physiological and pathological states. However, the gamma-photon pairs generated by positron annihilation of different tracers have the same energy, making the tracer signals difficult to distinguish. In this study, a multi-latent space guided texture conditional diffusion transformer model (MS-CDT) is proposed for PET tracer separation. To the best of our knowledge, this is the first attempt to use texture conditioning and a multi-latent space for tracer separation in PET imaging. The proposed model integrates diffusion and transformer architectures into a unified optimization framework, with the novel addition of texture masks as conditional inputs to enhance image details. By leveraging multi-latent space priors derived from different tracers, the model captures multi-level feature representations, balancing computational efficiency and detail preservation. The texture masks, serving as conditional guidance, help the model focus on salient structural patterns, thereby improving the extraction and utilization of fine-grained image textures. Combined with the diffusion transformer backbone, this conditioning mechanism yields more accurate and robust tracer separation. To evaluate its effectiveness, MS-CDT is compared with several advanced methods on two types of 3D PET datasets: brain and chest scans. Experimental results indicate that MS-CDT achieves competitive performance in terms of image quality and preservation of clinically relevant information. Code is available at: https://github.com/yqx7150/MS-CDT.
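The texture-mask conditioning described in the abstract can be illustrated with a minimal NumPy sketch. The paper does not specify its texture extraction, so a local-variance threshold is used here as a hypothetical stand-in; the mask is stacked with the noisy image as an extra channel, which is how conditional diffusion models typically receive image-shaped guidance:

```python
import numpy as np

def texture_mask(img, win=3, thresh=0.01):
    """Binary texture mask from local variance (illustrative stand-in for
    the paper's unspecified texture extraction)."""
    pad = win // 2
    padded = np.pad(img, pad, mode="reflect")
    h, w = img.shape
    var = np.empty((h, w), dtype=float)
    for i in range(h):
        for j in range(w):
            # variance of the win x win neighborhood around (i, j)
            var[i, j] = padded[i:i + win, j:j + win].var()
    return (var > thresh).astype(float)

def conditioned_input(noisy, mask):
    """Stack the noisy image with its texture mask as a conditioning channel,
    forming the (C, H, W) input a conditional denoiser would consume."""
    return np.stack([noisy, mask], axis=0)

rng = np.random.default_rng(0)
img = rng.random((8, 8))          # toy image standing in for a PET slice
x = conditioned_input(img, texture_mask(img))
print(x.shape)  # (2, 8, 8)
```

In the actual model the conditioned input would be fed to the diffusion transformer at every denoising step; this sketch only shows the conditioning mechanics.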
Related papers
- PET Image Reconstruction Using Deep Diffusion Image Prior [3.1878756384085936]
We propose an anatomical prior-guided PET image reconstruction method based on diffusion models. The proposed method alternates between diffusion sampling and model fine-tuning guided by the PET sinogram. Experimental results show that the proposed PET reconstruction method generalizes robustly across tracer distributions and scanner types.
arXiv Detail & Related papers (2025-07-20T18:25:29Z) - Personalized MR-Informed Diffusion Models for 3D PET Image Reconstruction [44.89560992517543]
We propose a simple method for generating subject-specific PET images from a dataset of PET-MR scans. The images we synthesize retain information from the subject's MR scan, leading to higher resolution and the retention of anatomical features. With simulated and real [18F]FDG datasets, we show that pre-training a personalized diffusion model with subject-specific "pseudo-PET" images improves reconstruction accuracy with low-count data.
arXiv Detail & Related papers (2025-06-04T10:24:14Z) - st-DTPM: Spatial-Temporal Guided Diffusion Transformer Probabilistic Model for Delayed Scan PET Image Prediction [6.812140684449128]
Several studies have indicated that dual-time PET imaging holds promise in distinguishing between malignant and benign tumor processes.
We propose a novel spatial-temporal guided diffusion transformer probabilistic model (st-DTPM) to solve dual-time PET imaging prediction problem.
arXiv Detail & Related papers (2024-10-30T06:37:55Z) - Diffusion Transformer Model With Compact Prior for Low-dose PET Reconstruction [7.320877150436869]
We propose a diffusion transformer model (DTM) guided by joint compact prior (JCP) to enhance the reconstruction quality of low-dose PET imaging.
DTM combines the powerful distribution mapping abilities of diffusion models with the capacity of transformers to capture long-range dependencies.
Our approach not only reduces radiation exposure risks but also provides a more reliable PET imaging tool for early disease detection and patient management.
arXiv Detail & Related papers (2024-07-01T03:54:43Z) - PET Tracer Conversion among Brain PET via Variable Augmented Invertible Network [8.895830601854534]
A tracer conversion invertible neural network (TC-INN) for image projection is developed to map FDG images to DOPA images through deep learning.
Experimental results exhibited excellent generation capability in mapping between FDG and DOPA, suggesting that PET tracer conversion has great potential in the case of limited tracer applications.
arXiv Detail & Related papers (2023-11-01T12:04:33Z) - Affine-Consistent Transformer for Multi-Class Cell Nuclei Detection [76.11864242047074]
We propose a novel Affine-Consistent Transformer (AC-Former), which directly yields a sequence of nucleus positions.
We introduce an Adaptive Affine Transformer (AAT) module, which can automatically learn the key spatial transformations to warp original images for local network training.
Experimental results demonstrate that the proposed method significantly outperforms existing state-of-the-art algorithms on various benchmarks.
arXiv Detail & Related papers (2023-10-22T02:27:02Z) - Contrastive Diffusion Model with Auxiliary Guidance for Coarse-to-Fine PET Reconstruction [62.29541106695824]
This paper presents a coarse-to-fine PET reconstruction framework that consists of a coarse prediction module (CPM) and an iterative refinement module (IRM).
By delegating most of the computational overhead to the CPM, the overall sampling speed of our method can be significantly improved.
Two additional strategies, i.e., an auxiliary guidance strategy and a contrastive diffusion strategy, are proposed and integrated into the reconstruction process.
arXiv Detail & Related papers (2023-08-20T04:10:36Z) - On Sensitivity and Robustness of Normalization Schemes to Input Distribution Shifts in Automatic MR Image Diagnosis [58.634791552376235]
Deep Learning (DL) models have achieved state-of-the-art performance in diagnosing multiple diseases using reconstructed images as input.
DL models are sensitive to varying artifacts because such artifacts change the input data distribution between the training and testing phases.
We propose to use other normalization techniques, such as Group Normalization and Layer Normalization, to inject robustness into model performance against varying image artifacts.
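The robustness argument above can be illustrated with a minimal NumPy sketch of Group Normalization: because its statistics are computed within each sample, a global intensity scale and shift (one simple model of an acquisition artifact) leaves the normalized features essentially unchanged. The group count and epsilon here are illustrative choices, not values from the paper:

```python
import numpy as np

def group_norm(x, num_groups=2, eps=1e-5):
    """Per-sample group normalization over an (N, C, H, W) array.
    Mean/variance are taken within each group of channels of each sample."""
    n, c, h, w = x.shape
    g = x.reshape(n, num_groups, c // num_groups, h, w)
    mean = g.mean(axis=(2, 3, 4), keepdims=True)
    var = g.var(axis=(2, 3, 4), keepdims=True)
    return ((g - mean) / np.sqrt(var + eps)).reshape(n, c, h, w)

rng = np.random.default_rng(0)
x = rng.random((1, 4, 8, 8))      # toy feature map standing in for MR features
y1 = group_norm(x)
y2 = group_norm(3.0 * x + 0.5)    # global intensity scale + shift of the input
print(np.allclose(y1, y2, atol=1e-4))  # True: per-sample stats absorb the shift
```

Batch Normalization, by contrast, normalizes with statistics accumulated over the training distribution, so the same intensity shift at test time changes its output, which is the sensitivity the paper targets.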
arXiv Detail & Related papers (2023-06-23T03:09:03Z) - Affinity Feature Strengthening for Accurate, Complete and Robust Vessel Segmentation [48.638327652506284]
Vessel segmentation is crucial in many medical image applications, such as detecting coronary stenoses, retinal vessel diseases and brain aneurysms.
We present a novel approach, the affinity feature strengthening network (AFN), which jointly models geometry and refines pixel-wise segmentation features using a contrast-insensitive, multiscale affinity approach.
arXiv Detail & Related papers (2022-11-12T05:39:17Z) - Hyper-Connected Transformer Network for Multi-Modality PET-CT Segmentation [16.999643199612244]
Co-learning complementary PET-CT imaging features is a fundamental requirement for automatic tumor segmentation.
We propose a hyper-connected transformer network that integrates a transformer network (TN) with a hyper connected fusion for multi-modality PET-CT images.
Our results with two clinical datasets show that the proposed network (HCT) achieves better segmentation accuracy than existing methods.
arXiv Detail & Related papers (2022-10-28T00:03:43Z) - CyTran: A Cycle-Consistent Transformer with Multi-Level Consistency for Non-Contrast to Contrast CT Translation [56.622832383316215]
We propose a novel approach to translate unpaired contrast computed tomography (CT) scans to non-contrast CT scans.
Our approach is based on cycle-consistent generative adversarial convolutional transformers, for short, CyTran.
Our empirical results show that CyTran outperforms all competing methods.
arXiv Detail & Related papers (2021-10-12T23:25:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.