Multi-Branch Generative Models for Multichannel Imaging with an Application to PET/CT Synergistic Reconstruction
- URL: http://arxiv.org/abs/2404.08748v3
- Date: Fri, 22 Nov 2024 17:42:10 GMT
- Title: Multi-Branch Generative Models for Multichannel Imaging with an Application to PET/CT Synergistic Reconstruction
- Authors: Noel Jeffrey Pinton, Alexandre Bousse, Catherine Cheze-Le-Rest, Dimitris Visvikis,
- Abstract summary: This paper presents a novel approach for learned synergistic reconstruction of medical images using multi-branch generative models.
We demonstrate the efficacy of our approach on both Modified National Institute of Standards and Technology (MNIST) and positron emission tomography (PET)/ computed tomography (CT) datasets.
- Score: 42.95604565673447
- License:
- Abstract: This paper presents a novel approach for learned synergistic reconstruction of medical images using multi-branch generative models. Leveraging variational autoencoders (VAEs), our model learns from pairs of images simultaneously, enabling effective denoising and reconstruction. Synergistic image reconstruction is achieved by incorporating the trained models in a regularizer that evaluates the distance between the images and the model. We demonstrate the efficacy of our approach on both Modified National Institute of Standards and Technology (MNIST) and positron emission tomography (PET)/computed tomography (CT) datasets, showcasing improved image quality for low-dose imaging. Despite challenges such as patch decomposition and model limitations, our results underscore the potential of generative models for enhancing medical imaging reconstruction.
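The abstract's core mechanism, using a trained generative model as a regularizer that penalizes the distance between the image estimate and the model, can be sketched in a minimal toy example. This is not the authors' code: the forward operator `A`, the linear autoencoder `G` (a stand-in for the trained multi-branch VAE), and all shapes and step sizes are hypothetical, chosen only to illustrate the objective `||Ax - y||^2 + beta * ||x - G(x)||^2`.

```python
import numpy as np

# Illustrative sketch (not the authors' implementation) of generative-model-
# regularized reconstruction:
#     minimize_x  ||A x - y||^2 + beta * ||x - G(x)||^2
# G = Decode(Encode(.)) is a toy linear autoencoder standing in for the
# trained multi-branch VAE; A, shapes, and step sizes are all hypothetical.

rng = np.random.default_rng(0)
n, m, k = 16, 12, 8                     # image size, measurements, latent dim

A = rng.normal(size=(m, n))             # toy forward operator (e.g. projection)
x_true = rng.normal(size=n)
y = A @ x_true + 0.05 * rng.normal(size=m)   # noisy, underdetermined data

# Toy "trained" autoencoder: encode/decode = orthogonal projection onto a
# k-dimensional subspace.
U, _, _ = np.linalg.svd(rng.normal(size=(n, k)), full_matrices=False)

def G(x):
    """Decode(Encode(x)) for the toy linear autoencoder."""
    return U @ (U.T @ x)

beta, step = 1.0, 1e-2
x = np.zeros(n)
for _ in range(2000):                   # plain gradient descent
    grad_fid = 2.0 * A.T @ (A @ x - y)  # data-fidelity gradient
    grad_reg = 2.0 * beta * (x - G(x))  # generative-model regularizer gradient
    x -= step * (grad_fid + grad_reg)

residual = np.linalg.norm(A @ x - y)
```

In the paper itself the regularizer is built from a VAE trained jointly on PET/CT image pairs, so the penalty couples the two modalities; the single linear branch above only illustrates the optimization mechanism.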
Related papers
- Iterative CT Reconstruction via Latent Variable Optimization of Shallow Diffusion Models [1.4019041243188557]
We propose a novel computed tomography (CT) reconstruction method by combining the denoising diffusion probabilistic model with iterative CT reconstruction.
We demonstrated the effectiveness of the proposed method through the sparse-projection CT reconstruction of 1/10 projection data.
arXiv Detail & Related papers (2024-08-06T12:55:17Z)
- DensePANet: An improved generative adversarial network for photoacoustic tomography image reconstruction from sparse data [1.4665304971699265]
We propose an end-to-end method called DensePANet to solve the problem of PAT image reconstruction from sparse data.
The proposed model employs a novel modification of UNet in its generator, called FD-UNet++, which considerably improves the reconstruction performance.
arXiv Detail & Related papers (2024-04-19T09:52:32Z)
- Paired Diffusion: Generation of related, synthetic PET-CT-Segmentation scans using Linked Denoising Diffusion Probabilistic Models [0.0]
This research introduces a novel architecture that is able to generate multiple, related PET-CT-tumour mask pairs using paired networks and conditional encoders.
Our approach includes innovative, time-step-controlled mechanisms and a 'noise-seeding' strategy to improve DDPM sampling consistency.
arXiv Detail & Related papers (2024-03-26T14:21:49Z)
- Retinal OCT Synthesis with Denoising Diffusion Probabilistic Models for Layer Segmentation [2.4113205575263708]
We propose an image synthesis method that utilizes denoising diffusion probabilistic models (DDPMs) to automatically generate retinal optical coherence tomography (OCT) images.
We observe a consistent improvement in layer segmentation accuracy, which is validated using various neural networks.
These findings demonstrate the promising potential of DDPMs in reducing the need for manual annotations of retinal OCT images.
arXiv Detail & Related papers (2023-11-09T16:09:24Z)
- On Sensitivity and Robustness of Normalization Schemes to Input Distribution Shifts in Automatic MR Image Diagnosis [58.634791552376235]
Deep Learning (DL) models have achieved state-of-the-art performance in diagnosing multiple diseases using reconstructed images as input.
DL models are sensitive to varying artifacts, as these lead to changes in the input data distribution between the training and testing phases.
We propose to use other normalization techniques, such as Group Normalization and Layer Normalization, to inject robustness into model performance against varying image artifacts.
arXiv Detail & Related papers (2023-06-23T03:09:03Z)
- Optimizing Sampling Patterns for Compressed Sensing MRI with Diffusion Generative Models [75.52575380824051]
We present a learning method to optimize sub-sampling patterns for compressed sensing multi-coil MRI.
We use a single-step reconstruction based on the posterior mean estimate given by the diffusion model and the MRI measurement process.
Our method requires as few as five training images to learn effective sampling patterns.
arXiv Detail & Related papers (2023-06-05T22:09:06Z)
- Model-Guided Multi-Contrast Deep Unfolding Network for MRI Super-resolution Reconstruction [68.80715727288514]
In this paper, we propose a novel Model-Guided interpretable Deep Unfolding Network (MGDUN) for medical image SR reconstruction.
We show how to unfold the iterative MGDUN algorithm into a novel model-guided deep unfolding network by taking the MRI observation matrix into account.
arXiv Detail & Related papers (2022-09-15T03:58:30Z)
- Multi-Channel Convolutional Analysis Operator Learning for Dual-Energy CT Reconstruction [108.06731611196291]
We develop a multi-channel convolutional analysis operator learning (MCAOL) method to exploit common spatial features within attenuation images at different energies.
We propose an optimization method which jointly reconstructs the attenuation images at low and high energies with a mixed norm regularization on the sparse features.
arXiv Detail & Related papers (2022-03-10T14:22:54Z)
- Adaptive Gradient Balancing for Undersampled MRI Reconstruction and Image-to-Image Translation [60.663499381212425]
We enhance the image quality by using a Wasserstein Generative Adversarial Network combined with a novel Adaptive Gradient Balancing technique.
In MRI, our method minimizes artifacts, while maintaining a high-quality reconstruction that produces sharper images than other techniques.
arXiv Detail & Related papers (2021-04-05T13:05:22Z)
- LEARN++: Recurrent Dual-Domain Reconstruction Network for Compressed Sensing CT [17.168584459606272]
The LEARN++ model integrates two parallel and interactive networks to perform image restoration and sinogram inpainting operations on the image and projection domains simultaneously.
Results show that the proposed LEARN++ model achieves competitive qualitative and quantitative results compared to several state-of-the-art methods in terms of both artifact reduction and detail preservation.
arXiv Detail & Related papers (2020-12-13T07:00:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.