Towards Lower-Dose PET using Physics-Based Uncertainty-Aware Multimodal
Learning with Robustness to Out-of-Distribution Data
- URL: http://arxiv.org/abs/2107.09892v1
- Date: Wed, 21 Jul 2021 06:18:10 GMT
- Authors: Viswanath P. Sudarshan, Uddeshya Upadhyay, Gary F. Egan, Zhaolin Chen,
Suyash P. Awate
- Abstract summary: Reducing the PET radiotracer dose or acquisition time reduces photon counts, which can deteriorate image quality.
Recent deep-neural-network (DNN) based methods for image-to-image translation enable the mapping of low-quality PET images to high-quality PET images.
Our framework, suDNN, estimates a standard-dose PET image using multimodal input in the form of a low-dose/low-count PET image and the corresponding multi-contrast MRI images.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Radiation exposure in positron emission tomography (PET) imaging limits its
usage in the studies of radiation-sensitive populations, e.g., pregnant women,
children, and adults that require longitudinal imaging. Reducing the PET
radiotracer dose or acquisition time reduces photon counts, which can
deteriorate image quality. Recent deep-neural-network (DNN) based methods for
image-to-image translation enable the mapping of low-quality PET images
(acquired using substantially reduced dose), coupled with the associated
magnetic resonance imaging (MRI) images, to high-quality PET images. However,
such DNN methods focus on applications involving test data that match the
statistical characteristics of the training data very closely and give little
attention to evaluating the performance of these DNNs on new
out-of-distribution (OOD) acquisitions. We propose a novel DNN formulation that
models (i) the underlying sinogram-based physics of the PET imaging system and
(ii) the uncertainty in the DNN output through the per-voxel heteroscedasticity
of the residuals between the predicted and the high-quality reference images.
Our sinogram-based uncertainty-aware DNN framework, namely, suDNN, estimates a
standard-dose PET image using multimodal input in the form of (i) a
low-dose/low-count PET image and (ii) the corresponding multi-contrast MRI
images, leading to improved robustness of suDNN to OOD acquisitions. Results on
in vivo simultaneous PET-MRI and various forms of OOD data in PET-MRI show the
benefits of suDNN over the current state of the art, both quantitatively and
qualitatively.
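The per-voxel heteroscedasticity of the residuals corresponds to the standard heteroscedastic Gaussian negative log-likelihood, in which the network predicts both a mean image and a per-voxel variance. The paper's exact loss and network heads are not reproduced in this summary, so the following NumPy sketch is only a generic illustration of the technique; all function and variable names are ours:

```python
import numpy as np

def heteroscedastic_nll(pred_mean, pred_log_var, target):
    """Per-voxel heteroscedastic Gaussian negative log-likelihood.

    pred_mean    : network estimate of the standard-dose PET image
    pred_log_var : network estimate of log(sigma^2) per voxel; predicting
                   the log-variance keeps sigma^2 positive and is stable
    target       : high-quality reference PET image
    """
    inv_var = np.exp(-pred_log_var)
    # Residuals are down-weighted where the predicted variance is high,
    # while the log-variance term penalizes claiming high uncertainty
    # everywhere.
    per_voxel = 0.5 * (inv_var * (target - pred_mean) ** 2 + pred_log_var)
    return per_voxel.mean()

# Toy example with three voxels, sigma^2 = 1 everywhere:
mean = np.array([1.0, 2.0, 3.0])
log_var = np.zeros(3)
ref = np.array([1.0, 2.0, 4.0])
loss = heteroscedastic_nll(mean, log_var, ref)  # 0.5 * (0 + 0 + 1) / 3 = 1/6
```

A useful side effect of such a loss is that the predicted per-voxel variance can be read out at test time as an uncertainty map, which is one way OOD inputs can reveal themselves.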
Related papers
- Unifying Subsampling Pattern Variations for Compressed Sensing MRI with Neural Operators [72.79532467687427]
Compressed Sensing MRI reconstructs images of the body's internal anatomy from undersampled and compressed measurements.
Deep neural networks have shown great potential for reconstructing high-quality images from highly undersampled measurements.
We propose a unified model that is robust to different subsampling patterns and image resolutions in CS-MRI.
arXiv Detail & Related papers (2024-10-05T20:03:57Z)
- Functional Imaging Constrained Diffusion for Brain PET Synthesis from Structural MRI [5.190302448685122]
We propose FICD, a framework for 3D brain PET image synthesis conditioned on paired structural MRI, built on a new constrained diffusion model (CDM).
FICD introduces noise to PET and then progressively removes it with the CDM, ensuring high output fidelity throughout a stable training phase.
The CDM learns to predict denoised PET with a functional imaging constraint introduced to ensure voxel-wise alignment between each denoised PET and its ground truth.
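A voxel-wise alignment constraint of this kind is commonly realized as a mean-squared penalty between the denoised estimate and its reference, added to the usual diffusion objective. FICD's exact formulation is not given in this summary, so the NumPy sketch below is a generic illustration under that assumption; all names are ours:

```python
import numpy as np

def add_noise(x0, alpha_bar, eps):
    """Standard forward diffusion step: noise a clean PET image x0 at
    cumulative noise level alpha_bar with Gaussian noise eps."""
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * eps

def voxelwise_constraint(denoised, reference):
    """Generic voxel-wise alignment penalty (mean squared error) between
    a denoised PET estimate and its ground truth."""
    return np.mean((denoised - reference) ** 2)

# The constraint would be added to the usual diffusion denoising loss:
#   total_loss = diffusion_loss + voxelwise_constraint(x0_hat, x0)
rng = np.random.default_rng(0)
x0 = np.array([0.5, 1.0, 1.5])
x_t = add_noise(x0, 0.7, rng.standard_normal(3))
```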
arXiv Detail & Related papers (2024-05-03T22:33:46Z)
- Rotational Augmented Noise2Inverse for Low-dose Computed Tomography Reconstruction [83.73429628413773]
Supervised deep learning methods have shown the ability to remove noise in images but require accurate ground truth.
We propose a novel self-supervised framework for low-dose computed tomography (LDCT) in which ground truth is not required to train the convolutional neural network (CNN).
Numerical and experimental results show that the reconstruction accuracy of N2I with sparse views degrades, while the proposed rotational augmented Noise2Inverse (RAN2I) method maintains better image quality over a range of sampling angles.
arXiv Detail & Related papers (2023-12-19T22:40:51Z)
- PET Synthesis via Self-supervised Adaptive Residual Estimation Generative Adversarial Network [14.381830012670969]
Recent methods that generate high-quality PET images from low-dose counterparts have been reported to achieve state-of-the-art results for low-to-high image recovery.
To address the limitations of such methods, we developed a self-supervised adaptive residual estimation generative adversarial network (SS-AEGAN).
SS-AEGAN consistently outperformed the state-of-the-art synthesis methods with various dose reduction factors.
arXiv Detail & Related papers (2023-10-24T06:43:56Z)
- Contrastive Diffusion Model with Auxiliary Guidance for Coarse-to-Fine PET Reconstruction [62.29541106695824]
This paper presents a coarse-to-fine PET reconstruction framework that consists of a coarse prediction module (CPM) and an iterative refinement module (IRM).
By delegating most of the computational overhead to the CPM, the overall sampling speed of our method can be significantly improved.
Two additional strategies, i.e., an auxiliary guidance strategy and a contrastive diffusion strategy, are proposed and integrated into the reconstruction process.
arXiv Detail & Related papers (2023-08-20T04:10:36Z)
- On Sensitivity and Robustness of Normalization Schemes to Input Distribution Shifts in Automatic MR Image Diagnosis [58.634791552376235]
Deep Learning (DL) models have achieved state-of-the-art performance in diagnosing multiple diseases using reconstructed images as input.
DL models are sensitive to varying artifacts, as these lead to changes in the input data distribution between the training and testing phases.
We propose to use other normalization techniques, such as Group Normalization and Layer Normalization, to inject robustness into model performance against varying image artifacts.
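Group Normalization, mentioned above, computes statistics per sample over channel groups rather than across the batch, which is one reason it can be less sensitive to train/test distribution shifts. The following minimal NumPy sketch illustrates the computation only, not the paper's models; all names are ours:

```python
import numpy as np

def group_norm(x, num_groups, eps=1e-5):
    """Group Normalization for an (N, C, H, W) array.

    Unlike Batch Normalization, the mean and variance are computed per
    sample over channel groups, so the output of one sample does not
    depend on the rest of the batch.
    """
    n, c, h, w = x.shape
    assert c % num_groups == 0, "channels must divide evenly into groups"
    g = x.reshape(n, num_groups, c // num_groups, h, w)
    mean = g.mean(axis=(2, 3, 4), keepdims=True)
    var = g.var(axis=(2, 3, 4), keepdims=True)
    g = (g - mean) / np.sqrt(var + eps)
    return g.reshape(n, c, h, w)

x = np.random.default_rng(1).standard_normal((2, 4, 3, 3))
y = group_norm(x, num_groups=2)  # each group of 2 channels normalized per sample
```

Layer Normalization is the special case `num_groups=1`; in a full layer, learnable per-channel scale and shift parameters would follow the normalization.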
arXiv Detail & Related papers (2023-06-23T03:09:03Z)
- CG-3DSRGAN: A classification guided 3D generative adversarial network for image quality recovery from low-dose PET images [10.994223928445589]
High radioactivity caused by the injected tracer dose is a major concern in PET imaging.
Reducing the dose leads to inadequate image quality for diagnostic practice.
CNN-based methods have been developed for high-quality PET synthesis from low-dose counterparts.
arXiv Detail & Related papers (2023-04-03T05:39:02Z)
- Self-Supervised Pre-Training for Deep Image Prior-Based Robust PET Image Denoising [0.5999777817331317]
Deep image prior (DIP) has been successfully applied to positron emission tomography (PET) image restoration.
We propose a self-supervised pre-training model to improve the DIP-based PET image denoising performance.
arXiv Detail & Related papers (2023-02-27T06:55:00Z)
- ShuffleUNet: Super resolution of diffusion-weighted MRIs using deep learning [47.68307909984442]
Single Image Super-Resolution (SISR) is a technique aimed to obtain high-resolution (HR) details from one single low-resolution input image.
Deep learning extracts prior knowledge from big datasets and produces superior MRI images from the low-resolution counterparts.
arXiv Detail & Related papers (2021-02-25T14:52:23Z)
- FREA-Unet: Frequency-aware U-net for Modality Transfer [9.084926957557842]
We propose a new frequency-aware attention U-net for generating synthetic PET images from MRI data.
Our attention U-net computes attention scores for feature maps in low/high frequency layers and uses them to help the model focus on the most important regions.
arXiv Detail & Related papers (2020-12-31T01:58:44Z)
- Improved Slice-wise Tumour Detection in Brain MRIs by Computing Dissimilarities between Latent Representations [68.8204255655161]
Anomaly detection for Magnetic Resonance Images (MRIs) can be solved with unsupervised methods.
We have proposed a slice-wise semi-supervised method for tumour detection based on the computation of a dissimilarity function in the latent space of a Variational AutoEncoder.
We show that by training the models on higher resolution images and by improving the quality of the reconstructions, we obtain results which are comparable with different baselines.
arXiv Detail & Related papers (2020-07-24T14:02:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.