Cycle-Constrained Adversarial Denoising Convolutional Network for PET Image Denoising: Multi-Dimensional Validation on Large Datasets with Reader Study and Real Low-Dose Data
- URL: http://arxiv.org/abs/2410.23628v1
- Date: Thu, 31 Oct 2024 04:34:28 GMT
- Title: Cycle-Constrained Adversarial Denoising Convolutional Network for PET Image Denoising: Multi-Dimensional Validation on Large Datasets with Reader Study and Real Low-Dose Data
- Authors: Yucun Hou, Fenglin Zhan, Xin Cheng, Chenxi Li, Ziquan Yuan, Runze Liao, Haihao Wang, Jianlang Hua, Jing Wu, Jianyong Jiang
- Abstract summary: We propose a Cycle-constrained Adversarial Denoising Convolutional Network (Cycle-DCN) to reconstruct full-dose-quality images from low-dose scans.
Experiments were conducted on a large dataset consisting of raw PET brain data from 1,224 patients.
Cycle-DCN significantly improves average Peak Signal-to-Noise Ratio (PSNR), SSIM, and Normalized Root Mean Square Error (NRMSE) across three dose levels.
- Score: 9.160782425067712
- License:
- Abstract: Positron emission tomography (PET) is a critical tool for diagnosing tumors and neurological disorders but poses radiation risks to patients, particularly to sensitive populations. While reducing injected radiation dose mitigates this risk, it often compromises image quality. To reconstruct full-dose-quality images from low-dose scans, we propose a Cycle-constrained Adversarial Denoising Convolutional Network (Cycle-DCN). This model integrates a noise predictor, two discriminators, and a consistency network, and is optimized using a combination of supervised loss, adversarial loss, cycle consistency loss, identity loss, and neighboring Structural Similarity Index (SSIM) loss. Experiments were conducted on a large dataset consisting of raw PET brain data from 1,224 patients, acquired using a Siemens Biograph Vision PET/CT scanner. Each patient underwent a 120-second brain scan. To simulate low-dose PET conditions, images were reconstructed from shortened scan durations of 30, 12, and 5 seconds, corresponding to 1/4, 1/10, and 1/24 of the full-dose acquisition, respectively, using custom-developed GPU-based image reconstruction software. The results show that Cycle-DCN significantly improves average Peak Signal-to-Noise Ratio (PSNR), SSIM, and Normalized Root Mean Square Error (NRMSE) across three dose levels, with improvements of up to 56%, 35%, and 71%, respectively. Additionally, it achieves contrast-to-noise ratio (CNR) and Edge Preservation Index (EPI) values that closely align with full-dose images, effectively preserving image details, tumor shape, and contrast, while resolving issues with blurred edges. The results of reader studies indicated that the images restored by Cycle-DCN consistently received the highest ratings from nuclear medicine physicians, highlighting their strong clinical relevance.
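The abstract names the Cycle-DCN components and its five training objectives but not their exact formulation or weighting. Purely as an illustrative sketch, assuming a PyTorch-style implementation rather than the authors' code, the block below shows one way such a composite generator objective could be assembled; the function and variable names, the loss weights, and the simplified global SSIM term are assumptions introduced here.

```python
# Hypothetical sketch of a Cycle-DCN-style composite generator loss (not the authors' code).
import torch
import torch.nn.functional as F


def simple_ssim(x: torch.Tensor, y: torch.Tensor,
                c1: float = 1e-4, c2: float = 9e-4) -> torch.Tensor:
    """Crude global SSIM per image; a stand-in for a proper windowed SSIM."""
    mu_x, mu_y = x.mean(dim=(-2, -1)), y.mean(dim=(-2, -1))
    var_x, var_y = x.var(dim=(-2, -1)), y.var(dim=(-2, -1))
    cov = ((x - mu_x[..., None, None]) * (y - mu_y[..., None, None])).mean(dim=(-2, -1))
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2) /
            ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))).mean()


def cycle_dcn_loss(low, full, denoised, recycled, identity_out, d_fake_logits,
                   weights=(1.0, 0.1, 1.0, 0.5, 0.5)):
    """Combine supervised, adversarial, cycle-consistency, identity and SSIM terms.

    low / full    : paired low-dose input and full-dose target
    denoised      : generator output for `low`
    recycled      : consistency-network reconstruction of `low` from `denoised`
    identity_out  : generator output when fed the full-dose image
    d_fake_logits : discriminator logits for `denoised` (generator-side view)
    weights       : illustrative loss weights, not taken from the paper
    """
    w_sup, w_adv, w_cyc, w_idt, w_ssim = weights
    supervised = F.l1_loss(denoised, full)                    # paired supervision
    adversarial = F.binary_cross_entropy_with_logits(         # fool the discriminator
        d_fake_logits, torch.ones_like(d_fake_logits))
    cycle = F.l1_loss(recycled, low)                          # cycle consistency
    identity = F.l1_loss(identity_out, full)                  # identity preservation
    ssim_term = 1.0 - simple_ssim(denoised, full)             # stand-in for the neighboring SSIM loss
    return (w_sup * supervised + w_adv * adversarial +
            w_cyc * cycle + w_idt * identity + w_ssim * ssim_term)
```

In a full training loop the two discriminators would be updated with their own real/fake losses, and the neighboring-SSIM term would presumably operate on adjacent slices with a windowed SSIM; both details are omitted here.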
Related papers
- Cyclic 2.5D Perceptual Loss for Cross-Modal 3D Image Synthesis: T1 MRI to Tau-PET [0.04924932828166548]
Alzheimer's Disease is the most common form of dementia, characterised by cognitive decline and biomarkers such as tau-proteins.
Tau-positron emission tomography (tau-PET) is valuable for early AD diagnosis but is less accessible due to high costs, limited availability, and its invasive nature.
Image synthesis with neural networks enables the generation of tau-PET images from more accessible T1-weighted magnetic resonance imaging (MRI) images.
arXiv Detail & Related papers (2024-06-18T13:59:10Z)
- DPER: Diffusion Prior Driven Neural Representation for Limited Angle and Sparse View CT Reconstruction [45.00528216648563]
Diffusion Prior Driven Neural Representation (DPER) is an unsupervised framework designed to address the exceptionally ill-posed CT reconstruction inverse problems.
DPER adopts the Half Quadratic Splitting (HQS) algorithm to decompose the inverse problem into data-fidelity and distribution-prior sub-problems (a generic form of this splitting is sketched after the related-papers list below).
We conduct comprehensive experiments to evaluate the performance of DPER on LACT and ultra-SVCT reconstruction with two public datasets.
arXiv Detail & Related papers (2024-04-27T12:55:13Z)
- Deep Few-view High-resolution Photon-counting Extremity CT at Halved Dose for a Clinical Trial [8.393536317952085]
We propose a deep learning-based approach for PCCT image reconstruction at halved dose and doubled speed in a New Zealand clinical trial.
We present a patch-based volumetric refinement network to alleviate the GPU memory limitation, train the network with synthetic data, and use model-based iterative refinement to bridge the gap between synthetic and real-world data.
arXiv Detail & Related papers (2024-03-19T00:07:48Z)
- WIA-LD2ND: Wavelet-based Image Alignment for Self-supervised Low-Dose CT Denoising [74.14134385961775]
We introduce WIA-LD2ND, a novel self-supervised CT image denoising method that uses only NDCT (normal-dose CT) data.
WIA-LD2ND comprises two modules: Wavelet-based Image Alignment (WIA) and Frequency-Aware Multi-scale Loss (FAM).
arXiv Detail & Related papers (2024-03-18T11:20:11Z)
- Rotational Augmented Noise2Inverse for Low-dose Computed Tomography Reconstruction [83.73429628413773]
Supervised deep learning methods have shown the ability to remove noise in images but require accurate ground truth.
We propose a novel self-supervised framework for LDCT in which ground truth is not required for training the convolutional neural network (CNN).
Numerical and experimental results show that the reconstruction accuracy of N2I with sparse views degrades, while the proposed rotational augmented Noise2Inverse (RAN2I) method maintains better image quality over a range of sampling angles.
arXiv Detail & Related papers (2023-12-19T22:40:51Z)
- Generative AI for Rapid Diffusion MRI with Improved Image Quality, Reliability and Generalizability [3.6119644566822484]
We employ a Swin UNEt Transformers model, trained on augmented Human Connectome Project data, to perform generalized denoising of dMRI.
We demonstrate super-resolution with artificially downsampled HCP data in normal adult volunteers.
We exceed current state-of-the-art denoising methods in accuracy and test-retest reliability of rapid diffusion tensor imaging requiring only 90 seconds of scan time.
arXiv Detail & Related papers (2023-03-10T03:39:23Z)
- Automated SSIM Regression for Detection and Quantification of Motion Artefacts in Brain MR Images [54.739076152240024]
Motion artefacts in magnetic resonance brain images are a crucial issue.
The assessment of MR image quality is fundamental before proceeding with the clinical diagnosis.
An automated image quality assessment based on the structural similarity index (SSIM) regression has been proposed here.
arXiv Detail & Related papers (2022-06-14T10:16:54Z)
- CyTran: A Cycle-Consistent Transformer with Multi-Level Consistency for Non-Contrast to Contrast CT Translation [56.622832383316215]
We propose a novel approach to translate unpaired contrast computed tomography (CT) scans to non-contrast CT scans.
Our approach is based on cycle-consistent generative adversarial convolutional transformers, CyTran for short.
Our empirical results show that CyTran outperforms all competing methods.
arXiv Detail & Related papers (2021-10-12T23:25:03Z)
- Anatomical-Guided Attention Enhances Unsupervised PET Image Denoising Performance [0.0]
We propose an unsupervised 3D PET image denoising method based on an anatomical information-guided attention mechanism.
Our proposed magnetic resonance-guided deep decoder (MR-GDD) utilizes the spatial details and semantic features of the MR guidance image more effectively.
arXiv Detail & Related papers (2021-09-02T09:27:07Z)
- Deep-Learning Driven Noise Reduction for Reduced Flux Computed Tomography [0.0]
Deep convolutional neural networks (DCNNs) can be used to map low-quality, low-dose images to higher-dose, higher-quality images.
We highlight current results based on micro-CT derived datasets and apply transfer learning to improve DCNN results without increasing training time.
arXiv Detail & Related papers (2021-01-18T23:31:37Z)
- OCT-GAN: Single Step Shadow and Noise Removal from Optical Coherence Tomography Images of the Human Optic Nerve Head [47.812972855826985]
We developed a single process that successfully removed both noise and retinal shadows from unseen single-frame B-scans within 10.4ms.
The proposed algorithm reduces the necessity for long image acquisition times, minimizes expensive hardware requirements and reduces motion artifacts in OCT images.
arXiv Detail & Related papers (2020-10-06T08:32:32Z)
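The HQS decomposition mentioned in the DPER entry above is easiest to see as equations. Below is a generic half quadratic splitting of a regularized reconstruction objective; the notation is chosen here for illustration and is not claimed to match DPER's exact formulation.

```latex
% Generic Half Quadratic Splitting (HQS) of min_x (1/2)||Ax - y||^2 + lambda R(x),
% with auxiliary variable z and penalty weight mu (illustrative notation only).
\begin{align}
  \min_{x,z}\;\tfrac{1}{2}\lVert Ax - y\rVert_2^2
      + \lambda R(z) + \tfrac{\mu}{2}\lVert x - z\rVert_2^2
      &\quad \text{(split objective)} \\
  x^{k+1} = \arg\min_{x}\;\tfrac{1}{2}\lVert Ax - y\rVert_2^2
      + \tfrac{\mu}{2}\lVert x - z^{k}\rVert_2^2
      &\quad \text{(data-fidelity sub-problem)} \\
  z^{k+1} = \arg\min_{z}\;\lambda R(z)
      + \tfrac{\mu}{2}\lVert x^{k+1} - z\rVert_2^2
      &\quad \text{(prior sub-problem)}
\end{align}
```

Alternating the two updates lets a conventional data-fidelity solver be paired with a learned distribution prior, which matches the decomposition described in the DPER summary.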
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.