PPFM: Image denoising in photon-counting CT using single-step posterior
sampling Poisson flow generative models
- URL: http://arxiv.org/abs/2312.09754v2
- Date: Tue, 19 Dec 2023 15:44:15 GMT
- Title: PPFM: Image denoising in photon-counting CT using single-step posterior
sampling Poisson flow generative models
- Authors: Dennis Hein, Staffan Holmin, Timothy Szczykutowicz, Jonathan S Maltz,
Mats Danielsson, Ge Wang, Mats Persson
- Abstract summary: We present posterior sampling Poisson flow generative models (PPFM), a novel image denoising technique for low-dose and photon-counting CT.
Our results shed light on the benefits of the PFGM++ framework compared to diffusion models.
- Score: 3.7080630916211152
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Diffusion and Poisson flow models have shown impressive performance in a wide
range of generative tasks, including low-dose CT image denoising. However, one
limitation in general, and for clinical applications in particular, is slow
sampling. Due to their iterative nature, the number of function evaluations
(NFE) required is usually on the order of $10-10^3$, both for conditional and
unconditional generation. In this paper, we present posterior sampling Poisson
flow generative models (PPFM), a novel image denoising technique for low-dose
and photon-counting CT that produces excellent image quality whilst keeping
NFE=1. Updating the training and sampling processes of Poisson flow generative
models (PFGM++), we learn a conditional generator that defines a trajectory
between the prior noise distribution and the posterior distribution of
interest. We additionally hijack and regularize the sampling process to achieve
NFE=1. Our results shed light on the benefits of the PFGM++ framework compared
to diffusion models. In addition, PPFM is shown to perform favorably compared
to current state-of-the-art diffusion-style models with NFE=1, consistency
models, as well as popular deep learning and non-deep learning-based image
denoising techniques, on clinical low-dose CT images and clinical images from a
prototype photon-counting CT system.
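At inference, the key property is NFE=1: one evaluation of a trained conditional network maps a noise-perturbed input and the low-dose image to a posterior sample. The sketch below is a rough, hypothetical illustration of that interface; the network `denoiser`, the value of `sigma_start`, and the choice of where to hijack the trajectory are placeholders, not the paper's actual specification.
```python
# Minimal sketch (not the authors' code) of NFE=1 conditional sampling.
# `denoiser` is assumed to be a trained conditional network D(x_t, sigma, y)
# that maps a noisy image and the low-dose condition straight to a clean estimate.
import torch

@torch.no_grad()
def sample_nfe1(denoiser, y_lowdose, sigma_start=10.0):
    """Single network evaluation: start the trajectory at the low-dose image
    perturbed to noise level sigma_start, then map it to a posterior sample."""
    x_start = y_lowdose + sigma_start * torch.randn_like(y_lowdose)
    return denoiser(x_start, torch.tensor(sigma_start), y_lowdose)
```
In this reading, the entire cost of denoising one slice is a single forward pass, which is what makes NFE=1 attractive for clinical throughput.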
Related papers
- Enhancing Low Dose Computed Tomography Images Using Consistency Training Techniques [7.694256285730863]
In this paper, we introduce the beta noise distribution, which provides flexibility in adjusting noise levels.
High Noise Improved Consistency Training (HN-iCT) is trained in a supervised fashion.
Our results indicate that unconditional image generation using HN-iCT significantly outperforms basic consistency training (CT) and improved consistency training (iCT) with NFE=1.
arXiv Detail & Related papers (2024-11-19T02:48:36Z)
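The "beta noise distribution" mentioned for HN-iCT above suggests drawing per-sample noise levels from a Beta law stretched onto a noise range. The snippet below is a hypothetical parameterization of that idea; the alpha, beta, and sigma bounds are illustrative, not taken from the paper.
```python
# Hypothetical "beta noise": draw per-sample noise levels from a Beta
# distribution mapped onto [sigma_min, sigma_max].
import numpy as np

def sample_noise_levels(batch_size, sigma_min=0.002, sigma_max=80.0,
                        alpha=2.0, beta=3.0, rng=np.random.default_rng()):
    u = rng.beta(alpha, beta, size=batch_size)        # values in (0, 1)
    return sigma_min + u * (sigma_max - sigma_min)    # one noise level per sample
```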
- Fast constrained sampling in pre-trained diffusion models [77.21486516041391]
Diffusion models have dominated the field of large, generative image models.
We propose an algorithm for fast-constrained sampling in large pre-trained diffusion models.
arXiv Detail & Related papers (2024-10-24T14:52:38Z)
- EM Distillation for One-step Diffusion Models [65.57766773137068]
We propose a maximum likelihood-based approach that distills a diffusion model to a one-step generator model with minimal loss of quality.
We develop a reparametrized sampling scheme and a noise cancellation technique that together stabilizes the distillation process.
arXiv Detail & Related papers (2024-05-27T05:55:22Z)
- Poisson flow consistency models for low-dose CT image denoising [3.6218104434936658]
We introduce a novel image denoising technique which combines the flexibility afforded by Poisson flow generative models (PFGM++) with the high-quality, single-step sampling of consistency models.
Our results indicate that the added flexibility of tuning the hyperparameter D, the dimensionality of the augmentation variables in PFGM++, allows us to outperform consistency models.
arXiv Detail & Related papers (2024-02-13T01:39:56Z)
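Both the entry above and PPFM itself build on PFGM++, where the data is augmented with D extra dimensions. The sketch below follows the perturbation-kernel sampling described for PFGM++ (radius R with R^2/(R^2 + r^2) ~ Beta(N/2, D/2)); it is meant only to make the role of the hyperparameter D concrete, and the surrounding training code is omitted.
```python
# Sketch of sampling the PFGM++ perturbation kernel for an image with N pixels.
# Small D gives heavier-tailed perturbations; D -> infinity recovers Gaussian
# (diffusion) noise with sigma = r / sqrt(D).
import numpy as np

def perturb_pfgmpp(x, r, D, rng=np.random.default_rng()):
    N = x.size
    u = rng.beta(N / 2.0, D / 2.0)             # Beta(N/2, D/2) variable
    R = r * np.sqrt(u / (1.0 - u))             # perturbation radius
    direction = rng.standard_normal(N)
    direction /= np.linalg.norm(direction)     # uniform direction on the unit sphere
    return x + (R * direction).reshape(x.shape)
```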
- Blue noise for diffusion models [50.99852321110366]
We introduce a novel and general class of diffusion models taking correlated noise within and across images into account.
Our framework allows introducing correlation across images within a single mini-batch to improve gradient flow.
We perform both qualitative and quantitative evaluations on a variety of datasets using our method.
arXiv Detail & Related papers (2024-02-07T14:59:25Z)
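As a purely illustrative sketch (not the paper's construction), correlation across the images of a mini-batch can be introduced by mixing i.i.d. Gaussian noise with the Cholesky factor of a batch-level covariance; the paper additionally structures the noise within each image, which is not shown here.
```python
# Illustrative only: correlate noise across the B images of a mini-batch by
# mixing i.i.d. Gaussian noise with the Cholesky factor of a B x B covariance.
import torch

def correlated_batch_noise(batch_shape, rho=0.5):
    B = batch_shape[0]
    cov = (1 - rho) * torch.eye(B) + rho * torch.ones(B, B)  # equi-correlated batch
    L = torch.linalg.cholesky(cov)                            # B x B mixing matrix
    eps = torch.randn(batch_shape)                            # i.i.d. noise
    return torch.einsum('ij,j...->i...', L, eps)              # mix across the batch
```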
- Diffusion Probabilistic Priors for Zero-Shot Low-Dose CT Image Denoising [10.854795474105366]
Denoising low-dose computed tomography (CT) images is a critical task in medical image computing.
Existing unsupervised deep learning-based methods often require training with a large number of low-dose CT images.
We propose a novel unsupervised method that only utilizes normal-dose CT images during training, enabling zero-shot denoising of low-dose CT images.
arXiv Detail & Related papers (2023-05-25T09:38:52Z)
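A generic way to use a diffusion prior trained only on normal-dose images for zero-shot denoising is to add a data-fidelity pull toward the low-dose observation at each reverse step. The sketch below shows that idea with an assumed `score_model(x_t, t)` interface; it is not necessarily the paper's exact algorithm.
```python
# Generic guided reverse step: prior score from a normal-dose-trained model
# plus a gradient of -0.5 * ||x_t - y||^2 pulling toward the low-dose image y.
import torch

def guided_reverse_step(x_t, t, y_lowdose, score_model, step=0.1, guidance=1.0):
    score = score_model(x_t, t)                  # prior learned from normal-dose CT only
    fidelity = -(x_t - y_lowdose)                # data-fidelity gradient
    drift = score + guidance * fidelity
    return x_t + step * drift + (2 * step) ** 0.5 * torch.randn_like(x_t)
```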
- CoreDiff: Contextual Error-Modulated Generalized Diffusion Model for Low-Dose CT Denoising and Generalization [41.64072751889151]
Low-dose computed tomography (LDCT) images suffer from noise and artifacts due to photon starvation and electronic noise.
This paper presents a novel COntextual eRror-modulated gEneralized Diffusion model for low-dose CT (LDCT) denoising, termed CoreDiff.
arXiv Detail & Related papers (2023-04-04T14:13:13Z)
- Q-Diffusion: Quantizing Diffusion Models [52.978047249670276]
Post-training quantization (PTQ) is considered a go-to compression method for other tasks.
We propose a novel PTQ method specifically tailored towards the unique multi-timestep pipeline and model architecture.
We show that our proposed method is able to quantize full-precision unconditional diffusion models into 4-bit while maintaining comparable performance.
arXiv Detail & Related papers (2023-02-08T19:38:59Z)
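For orientation, plain uniform 4-bit post-training (fake) quantization of a weight tensor looks like the sketch below; Q-Diffusion's contribution is a timestep-aware PTQ scheme tailored to diffusion models, which goes well beyond this baseline.
```python
# Bare-bones uniform 4-bit fake quantization of a weight tensor.
import torch

def fake_quant_4bit(w: torch.Tensor) -> torch.Tensor:
    qmin, qmax = -8, 7                              # signed 4-bit integer range
    scale = w.abs().max() / qmax                    # per-tensor scale
    q = torch.clamp(torch.round(w / scale), qmin, qmax)
    return q * scale                                # dequantized ("fake quant") weights
```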
- ShiftDDPMs: Exploring Conditional Diffusion Models by Shifting Diffusion Trajectories [144.03939123870416]
We propose a novel conditional diffusion model by introducing conditions into the forward process.
We use extra latent space to allocate an exclusive diffusion trajectory for each condition based on some shifting rules.
We formulate our method, which we call ShiftDDPMs, and provide a unified point of view on existing related methods.
arXiv Detail & Related papers (2023-02-05T12:48:21Z)
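One plausible form of a condition-shifted forward process is to offset the mean of q(x_t | x_0, c) by a condition embedding so that each condition follows its own trajectory. The shift schedule k_t in the sketch below is illustrative, not taken from the paper.
```python
# Illustrative condition-shifted forward sample: the forward mean is offset by
# a condition embedding, giving each condition its own diffusion trajectory.
import torch

def shifted_forward_sample(x0, cond_embedding, alpha_bar_t: float):
    k_t = 1.0 - alpha_bar_t                          # illustrative shift schedule
    noise = torch.randn_like(x0)
    mean = alpha_bar_t ** 0.5 * x0 + k_t * cond_embedding
    return mean + (1.0 - alpha_bar_t) ** 0.5 * noise
```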
- On Distillation of Guided Diffusion Models [94.95228078141626]
We propose an approach to distilling classifier-free guided diffusion models into models that are fast to sample from.
For standard diffusion models trained in pixel space, our approach generates images visually comparable to those of the original model.
For diffusion models trained in latent space (e.g., Stable Diffusion), our approach is able to generate high-fidelity images using as few as 1 to 4 denoising steps.
arXiv Detail & Related papers (2022-10-06T18:03:56Z)
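The teacher signal being distilled here is the standard classifier-free guidance combination of conditional and unconditional predictions, which a guidance-weight-conditioned student then learns to reproduce in far fewer steps. Function names and signatures below are assumptions, shown only to make the target explicit.
```python
# Classifier-free guidance teacher output and a one-line distillation loss.
import torch
import torch.nn.functional as F

def cfg_teacher_eps(teacher, x_t, t, cond, w=3.0):
    eps_cond = teacher(x_t, t, cond)
    eps_uncond = teacher(x_t, t, None)               # null condition
    return (1 + w) * eps_cond - w * eps_uncond       # guided prediction

def distill_loss(student, teacher, x_t, t, cond, w=3.0):
    # Student is conditioned on the guidance weight w and matches the teacher directly.
    return F.mse_loss(student(x_t, t, cond, w), cfg_teacher_eps(teacher, x_t, t, cond, w))
```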
- Unsupervised Denoising of Retinal OCT with Diffusion Probabilistic Model [0.2578242050187029]
We present a diffusion probabilistic model that is fully unsupervised to learn from noise instead of signal.
Our method can significantly improve the image quality with a simple working pipeline and a small amount of training data.
arXiv Detail & Related papers (2022-01-27T19:02:38Z)