Does prior knowledge in the form of multiple low-dose PET images (at
different dose levels) improve standard-dose PET prediction?
- URL: http://arxiv.org/abs/2202.10998v1
- Date: Tue, 22 Feb 2022 15:58:32 GMT
- Title: Does prior knowledge in the form of multiple low-dose PET images (at
different dose levels) improve standard-dose PET prediction?
- Authors: Behnoush Sanaei, Reza Faghihi, and Hossein Arabi
- Abstract summary: Deep learning methods have been introduced to predict standard PET images (S-PET) from the corresponding low-dose versions (L-PET).
In this work, we proposed to exploit the prior knowledge in the form of multiple low-dose levels of PET images to estimate the S-PET images.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Reducing the injected dose would result in quality degradation and loss of
information in PET imaging. To address this issue, deep learning methods have
been introduced to predict standard PET images (S-PET) from the corresponding
low-dose versions (L-PET). The existing deep learning-based denoising methods
solely rely on a single dose level of PET images to predict the S-PET images.
In this work, we proposed to exploit the prior knowledge in the form of
multiple low-dose levels of PET images (in addition to the target low-dose
level) to estimate the S-PET images.
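
To make the idea concrete, below is a minimal sketch of one plausible way to exploit several dose levels at once: the low-dose volumes of the same scan are stacked as input channels of a single predictor. The network depth, channel counts, loss, and tensor shapes here are illustrative assumptions, not the architecture used in the paper.

```python
# Illustrative sketch: predict a standard-dose PET volume from several
# low-dose acquisitions stacked as input channels (architecture and loss
# are assumptions for illustration, not the paper's exact model).
import torch
import torch.nn as nn

class MultiDosePredictor(nn.Module):
    def __init__(self, n_dose_levels: int = 3):
        super().__init__()
        # A deliberately small 3D CNN; any encoder-decoder (e.g. a 3D U-Net)
        # could take its place in practice.
        self.body = nn.Sequential(
            nn.Conv3d(n_dose_levels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv3d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv3d(32, 1, 3, padding=1),
        )

    def forward(self, low_dose_stack: torch.Tensor) -> torch.Tensor:
        # low_dose_stack: (B, n_dose_levels, D, H, W), one channel per dose level
        return self.body(low_dose_stack)

# Single training step sketch: L1 loss against the paired standard-dose volume.
model = MultiDosePredictor(n_dose_levels=3)
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
l_pet = torch.randn(1, 3, 16, 64, 64)   # three low-dose levels of the same scan
s_pet = torch.randn(1, 1, 16, 64, 64)   # paired standard-dose target
opt.zero_grad()
loss = nn.functional.l1_loss(model(l_pet), s_pet)
loss.backward()
opt.step()
```

The design choice being illustrated is simply that the additional dose levels enter as extra input channels; how the paper actually fuses them, and which loss it trains with, should be taken from the paper itself.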
Related papers
- S3PET: Semi-supervised Standard-dose PET Image Reconstruction via Dose-aware Token Swap [11.13611856305595]
We propose a two-stage Semi-Supervised SPET reconstruction framework, namely S3PET, to accommodate training with abundant unpaired and limited paired SPET and LPET images.
Our S3PET involves an unsupervised pre-training stage (Stage I) to extract representations from unpaired images, and a supervised dose-aware reconstruction stage (Stage II) to achieve LPET-to-SPET reconstruction.
arXiv Detail & Related papers (2024-07-30T14:56:06Z) - HiDe-PET: Continual Learning via Hierarchical Decomposition of Parameter-Efficient Tuning [55.88910947643436]
We propose a unified framework for continual learning (CL) with pre-trained models (PTMs) and parameter-efficient tuning (PET).
We present Hierarchical Decomposition PET (HiDe-PET), an innovative approach that explicitly optimizes the objective by incorporating task-specific and task-shared knowledge.
Our approach demonstrates remarkably superior performance over a broad spectrum of recent strong baselines.
arXiv Detail & Related papers (2024-07-07T01:50:25Z) - Two-Phase Multi-Dose-Level PET Image Reconstruction with Dose Level Awareness [43.45142393436787]
We design a novel two-phase multi-dose-level PET reconstruction algorithm with dose level awareness.
The pre-training phase is devised to explore both fine-grained discriminative features and effective semantic representation.
The SPET prediction phase adopts a coarse prediction network that utilizes the pre-learned dose-level prior to generate a preliminary result.
arXiv Detail & Related papers (2024-04-02T01:57:08Z) - Image2Points: A 3D Point-based Context Clusters GAN for High-Quality PET
Image Reconstruction [47.398304117228584]
We propose a 3D point-based context clusters GAN, namely PCC-GAN, to reconstruct high-quality SPET images from LPET.
Experiments on both clinical and phantom datasets demonstrate that our PCC-GAN outperforms the state-of-the-art reconstruction methods.
arXiv Detail & Related papers (2024-02-01T06:47:56Z) - PET Synthesis via Self-supervised Adaptive Residual Estimation
Generative Adversarial Network [14.381830012670969]
Recent methods for generating high-quality PET images from low-dose counterparts have been reported as state-of-the-art for low-to-high image recovery.
To address these issues, we developed a self-supervised adaptive residual estimation generative adversarial network (SS-AEGAN).
SS-AEGAN consistently outperformed the state-of-the-art synthesis methods with various dose reduction factors.
arXiv Detail & Related papers (2023-10-24T06:43:56Z) - Amyloid-Beta Axial Plane PET Synthesis from Structural MRI: An Image
Translation Approach for Screening Alzheimer's Disease [49.62561299282114]
An image translation model is implemented to produce quantitatively accurate synthetic amyloid-beta PET images from structural MRI.
We found that the synthetic PET images could be produced with a high degree of similarity to the ground truth in terms of shape and contrast, with high overall SSIM and PSNR.
arXiv Detail & Related papers (2023-09-01T16:26:42Z) - Contrastive Diffusion Model with Auxiliary Guidance for Coarse-to-Fine
PET Reconstruction [62.29541106695824]
This paper presents a coarse-to-fine PET reconstruction framework that consists of a coarse prediction module (CPM) and an iterative refinement module (IRM).
By delegating most of the computational overhead to the CPM, the overall sampling speed of our method can be significantly improved.
Two additional strategies, i.e., an auxiliary guidance strategy and a contrastive diffusion strategy, are proposed and integrated into the reconstruction process.
arXiv Detail & Related papers (2023-08-20T04:10:36Z) - CG-3DSRGAN: A classification guided 3D generative adversarial network
for image quality recovery from low-dose PET images [10.994223928445589]
High radioactivity caused by the injected tracer dose is a major concern in PET imaging.
Reducing the dose leads to inadequate image quality for diagnostic practice.
CNN-based methods have been developed for high-quality PET synthesis from low-dose counterparts.
arXiv Detail & Related papers (2023-04-03T05:39:02Z) - Self-Supervised Pre-Training for Deep Image Prior-Based Robust PET Image
Denoising [0.5999777817331317]
Deep image prior (DIP) has been successfully applied to positron emission tomography (PET) image restoration.
We propose a self-supervised pre-training model to improve the DIP-based PET image denoising performance.
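A minimal sketch of the plain (non-pre-trained) DIP denoising loop is given after this list.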
arXiv Detail & Related papers (2023-02-27T06:55:00Z) - Improving and Simplifying Pattern Exploiting Training [81.77863825517511]
Pattern Exploiting Training (PET) is a recent approach that leverages patterns for few-shot learning.
In this paper, we focus on few-shot learning without any unlabeled data and introduce ADAPET.
ADAPET outperforms PET on SuperGLUE without any task-specific unlabeled data.
arXiv Detail & Related papers (2021-03-22T15:52:45Z)
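
As background for the deep image prior entry above, the sketch below shows the plain DIP denoising loop, assuming PyTorch: a randomly initialized network is fitted from a fixed noise input to the noisy PET volume, and early stopping acts as the implicit regularizer. The tiny network, iteration count, and learning rate are placeholder assumptions; the self-supervised pre-training proposed in the cited paper is not reproduced here.

```python
# Minimal deep-image-prior style denoising loop (illustrative sketch only;
# the cited paper's self-supervised pre-training is not reproduced).
import torch
import torch.nn as nn

def dip_denoise(noisy_pet: torch.Tensor, n_iters: int = 2000, lr: float = 1e-3) -> torch.Tensor:
    """noisy_pet: (1, 1, D, H, W) low-count PET volume."""
    # A deliberately small 3D CNN; any encoder-decoder would do in practice.
    net = nn.Sequential(
        nn.Conv3d(1, 32, 3, padding=1), nn.ReLU(),
        nn.Conv3d(32, 32, 3, padding=1), nn.ReLU(),
        nn.Conv3d(32, 1, 3, padding=1),
    )
    z = torch.randn_like(noisy_pet)        # fixed random input ("prior")
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(n_iters):               # early stopping is the implicit regularizer
        opt.zero_grad()
        loss = loss_fn(net(z), noisy_pet)
        loss.backward()
        opt.step()
    return net(z).detach()                 # denoised estimate

# Usage sketch: denoised = dip_denoise(torch.randn(1, 1, 16, 64, 64), n_iters=200)
```

The key property being illustrated is that no clean target is needed: the network tends to fit the smooth anatomical structure of the PET volume before it fits the noise, so the number of iterations controls the trade-off between denoising and over-fitting.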
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.