PET Tracer Conversion among Brain PET via Variable Augmented Invertible
Network
- URL: http://arxiv.org/abs/2311.00735v2
- Date: Wed, 15 Nov 2023 07:28:10 GMT
- Title: PET Tracer Conversion among Brain PET via Variable Augmented Invertible
Network
- Authors: Bohui Shen, Wei Zhang, Xubiao Liu, Pengfei Yu, Shirui Jiang, Xinchong
Shi, Xiangsong Zhang, Xiaoyu Zhou, Weirui Zhang, Bingxuan Li, Qiegen Liu
- Abstract summary: A tracer conversion invertible neural network (TC-INN) for image projection is developed to map FDG images to DOPA images through deep learning.
Experimental results exhibited excellent generation capability in mapping between FDG and DOPA, suggesting that PET tracer conversion has great potential when tracer availability is limited.
- Score: 8.895830601854534
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Positron emission tomography (PET) serves as an essential tool for diagnosis
of encephalopathy and brain science research. However, it suffers from the
limited choice of tracers. Nowadays, with the wide application of PET imaging
in neuropsychiatric treatment, 6-18F-fluoro-3,4-dihydroxy-L-phenylalanine
(DOPA) has been found to be more effective than 18F-labeled
fluoro-2-deoxyglucose (FDG) in this field. Nevertheless, due to the complexity
of its preparation and other limitations, DOPA is far less widely used than
FDG. To address this issue, a tracer conversion invertible neural network
(TC-INN) for image projection is developed to map FDG images to DOPA images
through deep learning. Additional diagnostic information is obtained by
converting FDG PET images into DOPA PET images. Specifically, the proposed
TC-INN consists of two separate phases: one for training on tracer data and
the other for rebuilding new
data. The reference DOPA PET image is used as a learning target for the
corresponding network during the training process of tracer conversion.
Meanwhile, the invertible network iteratively estimates the resultant DOPA PET
data and compares it to the reference DOPA PET data. Notably, the invertible
model employs a variable augmentation technique to achieve better generation
performance. Moreover, image registration is performed before training to
compensate for the angular deviation between the acquired FDG and DOPA data.
Experimental results exhibited excellent generation capability in mapping
between FDG and DOPA, suggesting that PET tracer conversion has great potential
when tracer availability is limited.
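As a reading aid, the sketch below illustrates the kind of architecture the abstract describes: an affine-coupling invertible network whose single-channel FDG input is padded with zero-valued channels (variable augmentation) so the channel dimension can be split and the mapping stays exactly invertible. This is a minimal PyTorch sketch under those assumptions, not the authors' TC-INN implementation; the class names, channel counts, flip-based channel mixing, and L1 training loss are illustrative choices.

```python
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """Affine coupling layer: transforms half of the channels conditioned on
    the other half, so the mapping is exactly invertible."""
    def __init__(self, channels, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels // 2, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, channels, 3, padding=1),  # predicts log-scale and shift
        )

    def forward(self, z, reverse=False):
        z1, z2 = z.chunk(2, dim=1)
        log_s, t = self.net(z1).chunk(2, dim=1)
        log_s = torch.tanh(log_s)                  # keep the scaling well conditioned
        if not reverse:
            z2 = z2 * torch.exp(log_s) + t         # forward transform
        else:
            z2 = (z2 - t) * torch.exp(-log_s)      # exact inverse
        return torch.cat([z1, z2], dim=1)

class TracerConversionINN(nn.Module):
    """A stack of coupling layers with channel flips in between so every
    channel gets transformed; names and sizes are illustrative."""
    def __init__(self, channels=4, n_blocks=4):
        super().__init__()
        self.blocks = nn.ModuleList(AffineCoupling(channels) for _ in range(n_blocks))

    def forward(self, z, reverse=False):
        if not reverse:
            for blk in self.blocks:
                z = torch.flip(blk(z), dims=[1])   # couple, then mix the halves
        else:
            for blk in reversed(self.blocks):
                z = blk(torch.flip(z, dims=[1]), reverse=True)  # undo in reverse order
        return z

def augment(x, aug_channels=3):
    """Variable augmentation: pad the 1-channel PET image with zero channels
    so the coupling layers can split the channel dimension."""
    pad = torch.zeros(x.size(0), aug_channels, *x.shape[2:], device=x.device)
    return torch.cat([x, pad], dim=1)

# Usage sketch: channel 0 of the forward output is compared against the
# co-registered DOPA reference; the inverse pass recovers the FDG input exactly.
model = TracerConversionINN()
fdg = torch.randn(2, 1, 128, 128)                      # dummy FDG slices
out = model(augment(fdg))                              # FDG domain -> DOPA domain
loss = nn.functional.l1_loss(out[:, :1], torch.randn(2, 1, 128, 128))
recon = model(out, reverse=True)[:, :1]                # matches fdg up to float error
```

Because every coupling layer is exactly invertible, the same weights map FDG toward the DOPA domain in the forward direction and back again in reverse, which is the property the two-phase training/rebuilding scheme relies on.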
Related papers
- Enhancing Angular Resolution via Directionality Encoding and Geometric Constraints in Brain Diffusion Tensor Imaging [70.66500060987312]
Diffusion-weighted imaging (DWI) is a type of Magnetic Resonance Imaging (MRI) technique sensitised to the diffusivity of water molecules.
This work proposes DirGeo-DTI, a deep learning-based method to estimate reliable DTI metrics even from a set of DWIs acquired with the minimum theoretical number (6) of gradient directions.
arXiv Detail & Related papers (2024-09-11T11:12:26Z)
- Diffusion Transformer Model With Compact Prior for Low-dose PET Reconstruction [7.320877150436869]
We propose a diffusion transformer model (DTM) guided by joint compact prior (JCP) to enhance the reconstruction quality of low-dose PET imaging.
DTM combines the powerful distribution mapping abilities of diffusion models with the capacity of transformers to capture long-range dependencies.
Our approach not only reduces radiation exposure risks but also provides a more reliable PET imaging tool for early disease detection and patient management.
arXiv Detail & Related papers (2024-07-01T03:54:43Z)
- Two-Phase Multi-Dose-Level PET Image Reconstruction with Dose Level Awareness [43.45142393436787]
We design a novel two-phase multi-dose-level PET reconstruction algorithm with dose level awareness.
The pre-training phase is devised to explore both fine-grained discriminative features and effective semantic representation.
The SPET prediction phase adopts a coarse prediction network that utilizes the pre-learned dose-level prior to generate a preliminary result.
arXiv Detail & Related papers (2024-04-02T01:57:08Z)
- Image2Points: A 3D Point-based Context Clusters GAN for High-Quality PET Image Reconstruction [47.398304117228584]
We propose a 3D point-based context clusters GAN, namely PCC-GAN, to reconstruct high-quality SPET images from LPET.
Experiments on both clinical and phantom datasets demonstrate that our PCC-GAN outperforms the state-of-the-art reconstruction methods.
arXiv Detail & Related papers (2024-02-01T06:47:56Z)
- Synthetic CT Generation via Variant Invertible Network for All-digital Brain PET Attenuation Correction [11.402215536210337]
Attenuation correction (AC) is essential for the generation of artifact-free and quantitatively accurate positron emission tomography (PET) images.
This paper develops a PET AC method, which uses deep learning to generate continuously valued CT images from non-attenuation corrected PET images for AC on brain PET imaging.
arXiv Detail & Related papers (2023-10-03T08:38:52Z)
- Contrastive Diffusion Model with Auxiliary Guidance for Coarse-to-Fine PET Reconstruction [62.29541106695824]
This paper presents a coarse-to-fine PET reconstruction framework that consists of a coarse prediction module (CPM) and an iterative refinement module (IRM).
By delegating most of the computational overhead to the CPM, the overall sampling speed of our method can be significantly improved.
Two additional strategies, i.e., an auxiliary guidance strategy and a contrastive diffusion strategy, are proposed and integrated into the reconstruction process.
arXiv Detail & Related papers (2023-08-20T04:10:36Z)
- TriDo-Former: A Triple-Domain Transformer for Direct PET Reconstruction from Low-Dose Sinograms [45.24575167909925]
TriDo-Former is a transformer-based model that unites the triple domains of sinogram, image, and frequency for direct reconstruction.
GFP serves as a learnable frequency filter that adjusts the frequency components in the frequency domain, forcing the network to restore high-frequency details.
It outperforms state-of-the-art methods qualitatively and quantitatively.
arXiv Detail & Related papers (2023-08-10T06:20:00Z)
- Self-Supervised Pre-Training for Deep Image Prior-Based Robust PET Image Denoising [0.5999777817331317]
Deep image prior (DIP) has been successfully applied to positron emission tomography (PET) image restoration.
We propose a self-supervised pre-training model to improve the DIP-based PET image denoising performance (a minimal DIP sketch follows this list).
arXiv Detail & Related papers (2023-02-27T06:55:00Z)
- A resource-efficient deep learning framework for low-dose brain PET image reconstruction and analysis [13.713286047709982]
We propose a resource-efficient deep learning framework for L-PET reconstruction and analysis, referred to as transGAN-SDAM.
The transGAN generates higher quality F-PET images, and then the SDAM integrates the spatial information of a sequence of generated F-PET slices to synthesize whole-brain F-PET images.
arXiv Detail & Related papers (2022-02-14T08:40:19Z)
- Revisiting 3D Context Modeling with Supervised Pre-training for Universal Lesion Detection in CT Slices [48.85784310158493]
We propose a Modified Pseudo-3D Feature Pyramid Network (MP3D FPN) to efficiently extract 3D context enhanced 2D features for universal lesion detection in CT slices.
With the novel pre-training method, the proposed MP3D FPN achieves state-of-the-art detection performance on the DeepLesion dataset.
The proposed 3D pre-trained weights can potentially be used to boost the performance of other 3D medical image analysis tasks.
arXiv Detail & Related papers (2020-12-16T07:11:16Z)
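For the DIP-based denoising entry above, the following minimal sketch shows plain deep image prior denoising of a single PET slice, as a point of reference only. It does not include the cited paper's self-supervised pre-training stage, and the network size, step count, and learning rate are arbitrary illustrative choices.

```python
import torch
import torch.nn as nn

def dip_denoise(noisy, steps=1500, lr=1e-3):
    """Fit a small CNN from a fixed random input to the noisy image; stopping
    after a limited number of steps yields a denoised estimate, because the
    network tends to fit image structure before noise (the deep image prior)."""
    net = nn.Sequential(
        nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(),
        nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
        nn.Conv2d(64, 1, 3, padding=1),
    )
    z = torch.randn_like(noisy)                        # fixed random input
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(steps):                             # early stopping is the regularizer
        opt.zero_grad()
        loss = nn.functional.mse_loss(net(z), noisy)
        loss.backward()
        opt.step()
    with torch.no_grad():
        return net(z)

# Example with a hypothetical (1, 1, H, W) tensor:
# denoised = dip_denoise(noisy_pet_slice)
```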