Implicit neural representations for end-to-end PET reconstruction
- URL: http://arxiv.org/abs/2503.21825v1
- Date: Wed, 26 Mar 2025 08:30:53 GMT
- Title: Implicit neural representations for end-to-end PET reconstruction
- Authors: Younès Moussaoui, Diana Mateus, Nasrin Taheri, Saïd Moussaoui, Thomas Carlier, Simon Stute
- Abstract summary: Implicit neural representations (INRs) have demonstrated strong capabilities in various medical imaging tasks. We propose an unsupervised PET image reconstruction method based on the implicit SIREN neural network architecture. Our method incorporates a forward projection model and a loss function adapted to perform PET image reconstruction directly from sinograms.
- Score: 3.7066816275267627
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Implicit neural representations (INRs) have demonstrated strong capabilities in various medical imaging tasks, such as denoising, registration, and segmentation, by representing images as continuous functions, allowing complex details to be captured. For image reconstruction problems, INRs can also reduce artifacts typically introduced by conventional reconstruction algorithms. However, to the best of our knowledge, INRs have not been studied in the context of PET reconstruction. In this paper, we propose an unsupervised PET image reconstruction method based on the implicit SIREN neural network architecture using sinusoidal activation functions. Our method incorporates a forward projection model and a loss function adapted to perform PET image reconstruction directly from sinograms, without the need for large training datasets. The performance of the proposed approach was compared with that of conventional penalized likelihood methods and deep image prior (DIP) based reconstruction using brain phantom data and realistically simulated sinograms. The results show that the INR-based approach can reconstruct high-quality images with a simpler, more efficient model, offering improvements in PET image reconstruction, particularly in terms of contrast, activity recovery, and relative bias.
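The abstract describes the method only at a high level: a SIREN network with sinusoidal activations is fitted to a single scan through a forward projection model, directly from the sinogram. As a concrete illustration, here is a minimal, assumption-laden PyTorch sketch; the `SirenINR` architecture, the precomputed system matrix `A` used as the projector, the coordinate grid `coords`, and the Poisson data term are all illustrative choices, not details taken from the paper.

```python
# Minimal sketch (not the authors' code): an INR with sinusoidal activations (SIREN)
# fitted directly to one measured sinogram through a linear forward projector.
# Assumptions: a precomputed system matrix `A` (n_bins x n_pixels) stands in for the
# PET forward projection, and a Poisson negative log-likelihood is used as the data term.
import torch
import torch.nn as nn

class SirenLayer(nn.Module):
    def __init__(self, in_dim, out_dim, w0=30.0, is_first=False):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)
        self.w0 = w0
        # SIREN-specific weight initialisation (Sitzmann et al., 2020)
        bound = 1.0 / in_dim if is_first else (6.0 / in_dim) ** 0.5 / w0
        nn.init.uniform_(self.linear.weight, -bound, bound)

    def forward(self, x):
        return torch.sin(self.w0 * self.linear(x))

class SirenINR(nn.Module):
    """Maps normalised (x, y) pixel coordinates to a non-negative activity value."""
    def __init__(self, hidden=256, depth=4):
        super().__init__()
        layers = [SirenLayer(2, hidden, is_first=True)]
        layers += [SirenLayer(hidden, hidden) for _ in range(depth - 1)]
        self.net = nn.Sequential(*layers, nn.Linear(hidden, 1), nn.Softplus())

    def forward(self, coords):
        return self.net(coords)

def reconstruct(sinogram, A, coords, n_iter=2000, lr=1e-4):
    """Unsupervised fit: only the measured sinogram of this one scan is used."""
    model = SirenINR()
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(n_iter):
        img = model(coords).squeeze(-1)   # (n_pixels,) activity image
        expected = A @ img + 1e-8         # forward-projected sinogram
        # Poisson negative log-likelihood data term (assumed; the paper only states that
        # the loss is adapted to reconstruct directly from sinograms)
        loss = (expected - sinogram * torch.log(expected)).mean()
        opt.zero_grad(); loss.backward(); opt.step()
    return model(coords).squeeze(-1).detach()
```

In this sketch the only trainable object is the coordinate network itself, so the reconstruction is unsupervised: no dataset of image/sinogram pairs is needed, which matches the motivation stated in the abstract.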
Related papers
- Re-Visible Dual-Domain Self-Supervised Deep Unfolding Network for MRI Reconstruction [48.30341580103962]
We propose a novel re-visible dual-domain self-supervised deep unfolding network to address these issues. We design a deep unfolding network based on the Chambolle and Pock Proximal Point Algorithm (DUN-CP-PPA) to achieve end-to-end reconstruction. Experiments conducted on the fastMRI and IXI datasets demonstrate that our method significantly outperforms state-of-the-art approaches in terms of reconstruction performance.
arXiv Detail & Related papers (2025-01-07T12:29:32Z)
- Multi-Subject Image Synthesis as a Generative Prior for Single-Subject PET Image Reconstruction [40.34650079545031]
We propose a novel method for synthesising diverse and realistic pseudo-PET images with improved signal-to-noise ratio. We show how our pseudo-PET images may be exploited as a generative prior for single-subject PET image reconstruction.
arXiv Detail & Related papers (2024-12-05T16:40:33Z)
- DensePANet: An improved generative adversarial network for photoacoustic tomography image reconstruction from sparse data [1.4665304971699265]
We propose an end-to-end method called DensePANet to solve the problem of PAT image reconstruction from sparse data.
The proposed model employs a novel modification of UNet in its generator, called FD-UNet++, which considerably improves the reconstruction performance.
arXiv Detail & Related papers (2024-04-19T09:52:32Z)
- Double-Flow GAN model for the reconstruction of perceived faces from brain activities [13.707575848841405]
We propose a novel reconstruction framework, which we call Double-Flow GAN. We also design a pretraining process that uses features extracted from images as conditions, making it possible to pretrain the conditional reconstruction model from fMRI. Results show that the proposed method accurately reconstructs multiple face attributes, outperforms previous reconstruction models, and exhibits state-of-the-art reconstruction ability.
arXiv Detail & Related papers (2023-12-12T18:07:57Z)
- Contrastive Diffusion Model with Auxiliary Guidance for Coarse-to-Fine PET Reconstruction [62.29541106695824]
This paper presents a coarse-to-fine PET reconstruction framework that consists of a coarse prediction module (CPM) and an iterative refinement module (IRM).
By delegating most of the computational overhead to the CPM, the overall sampling speed of our method can be significantly improved.
Two additional strategies, i.e., an auxiliary guidance strategy and a contrastive diffusion strategy, are proposed and integrated into the reconstruction process.
arXiv Detail & Related papers (2023-08-20T04:10:36Z)
- Model-Guided Multi-Contrast Deep Unfolding Network for MRI Super-resolution Reconstruction [68.80715727288514]
In this paper, we propose a novel Model-Guided interpretable Deep Unfolding Network (MGDUN) for medical image SR reconstruction.
We show how to unfold the iterative MGDUN algorithm into a model-guided deep unfolding network by taking the MRI observation matrix into account.
arXiv Detail & Related papers (2022-09-15T03:58:30Z)
- A Long Short-term Memory Based Recurrent Neural Network for Interventional MRI Reconstruction [50.1787181309337]
We propose a convolutional long short-term memory (Conv-LSTM) based recurrent neural network (RNN), or ConvLR, to reconstruct interventional images with golden-angle radial sampling.
The proposed algorithm has the potential to achieve real-time i-MRI for DBS and can be used for general purpose MR-guided intervention.
arXiv Detail & Related papers (2022-03-28T14:03:45Z)
- Is Deep Image Prior in Need of a Good Education? [57.3399060347311]
Deep image prior was introduced as an effective prior for image reconstruction.
Despite its impressive reconstructive properties, the approach is slow when compared to learned or traditional reconstruction techniques.
We develop a two-stage learning paradigm to address the computational challenge.
arXiv Detail & Related papers (2021-11-23T15:08:26Z)
- Direct PET Image Reconstruction Incorporating Deep Image Prior and a Forward Projection Model [0.0]
Convolutional neural networks (CNNs) have recently achieved remarkable performance in positron emission tomography (PET) image reconstruction.
We propose an unsupervised direct PET image reconstruction method built on a deep image prior framework.
It combines a forward projection model with a loss function to reconstruct PET images directly from sinograms, without supervision (a minimal sketch contrasting this DIP setup with the INR approach above appears after this list).
arXiv Detail & Related papers (2021-09-02T08:07:58Z)
- Adaptive Gradient Balancing for Undersampled MRI Reconstruction and Image-to-Image Translation [60.663499381212425]
We enhance the image quality by using a Wasserstein Generative Adversarial Network combined with a novel Adaptive Gradient Balancing technique.
In MRI, our method minimizes artifacts, while maintaining a high-quality reconstruction that produces sharper images than other techniques.
arXiv Detail & Related papers (2021-04-05T13:05:22Z)
- Clinically Translatable Direct Patlak Reconstruction from Dynamic PET with Motion Correction Using Convolutional Neural Network [9.949523630885261]
The Patlak model is widely used in 18F-FDG dynamic positron emission tomography (PET) imaging (the underlying Patlak relation is recalled after this list).
In this work, we propose a data-driven framework that maps dynamic PET images to high-quality, motion-corrected direct Patlak images.
arXiv Detail & Related papers (2020-09-13T02:51:25Z)
- Limited-angle tomographic reconstruction of dense layered objects by dynamical machine learning [68.9515120904028]
Limited-angle tomography of strongly scattering quasi-transparent objects is a challenging, highly ill-posed problem.
Regularizing priors are necessary to reduce artifacts by improving the condition of such problems.
We devised a recurrent neural network (RNN) architecture with a novel split-convolutional gated recurrent unit (SC-GRU) as the building block.
arXiv Detail & Related papers (2020-07-21T11:48:22Z)
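The entry "Direct PET Image Reconstruction Incorporating Deep Image Prior and a Forward Projection Model" above describes the DIP baseline against which the main paper is compared. For contrast with the SIREN sketch earlier, here is a minimal hypothetical sketch of that setup; the `TinyDIP` architecture, the fixed random input, and the reuse of the Poisson data term are illustrative assumptions, not the cited authors' implementation.

```python
# Minimal sketch (assumptions, not the cited authors' code) of a deep-image-prior
# counterpart to the SIREN sketch above: a CNN maps a fixed random tensor to the image,
# and the same forward projector `A` and Poisson data term are reused.
import torch
import torch.nn as nn

class TinyDIP(nn.Module):
    def __init__(self, ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 1, 3, padding=1), nn.Softplus(),
        )

    def forward(self, z):
        return self.net(z)

def dip_reconstruct(sinogram, A, img_shape, n_iter=2000, lr=1e-4):
    z = torch.randn(1, 1, *img_shape)   # fixed random input (the "prior" is the CNN itself)
    model = TinyDIP()
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(n_iter):
        img = model(z).flatten()         # (n_pixels,) activity image
        expected = A @ img + 1e-8        # forward-projected sinogram
        loss = (expected - sinogram * torch.log(expected)).mean()
        opt.zero_grad(); loss.backward(); opt.step()
    return model(z).detach().squeeze()
```

The structural difference from the INR sketch is the prior: DIP regularises through a convolutional architecture acting on a fixed input image-shaped tensor, whereas the INR parameterises the image as a continuous function of pixel coordinates.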
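The direct Patlak entry above presumes familiarity with Patlak graphical analysis. For reference, the standard textbook relation (not taken from the cited paper) is the following linear model, whose slope is the net influx rate that direct Patlak reconstruction estimates:

```latex
% Patlak graphical analysis for an irreversibly trapped tracer (e.g. 18F-FDG),
% valid for times t > t* after the reversible compartments have equilibrated:
%   C_T(t) -- tissue time-activity curve measured by PET
%   C_p(t) -- plasma input function
%   K_i    -- net influx rate (the Patlak slope)
%   V_0    -- intercept (blood volume plus reversible distribution volume)
\[
  \frac{C_T(t)}{C_p(t)} \;=\; K_i \,\frac{\int_0^t C_p(\tau)\,\mathrm{d}\tau}{C_p(t)} \;+\; V_0,
  \qquad t > t^*.
\]
```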
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.