Poisson flow consistency models for low-dose CT image denoising
- URL: http://arxiv.org/abs/2402.08159v1
- Date: Tue, 13 Feb 2024 01:39:56 GMT
- Title: Poisson flow consistency models for low-dose CT image denoising
- Authors: Dennis Hein, Adam Wang, and Ge Wang
- Abstract summary: We introduce a novel image denoising technique which combines the flexibility afforded by Poisson flow generative models (PFGM++) with the high-quality, single-step sampling of consistency models.
Our results indicate that the added flexibility of tuning the hyperparameter D, the dimensionality of the augmentation variables in PFGM++, allows us to outperform consistency models.
- Score: 3.6218104434936658
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Diffusion and Poisson flow models have demonstrated remarkable success for a
wide range of generative tasks. Nevertheless, their iterative nature results in
computationally expensive sampling and the number of function evaluations (NFE)
required can be orders of magnitude larger than for single-step methods.
Consistency models are a recent class of deep generative models which enable
single-step sampling of high quality data without the need for adversarial
training. In this paper, we introduce a novel image denoising technique which
combines the flexibility afforded by Poisson flow generative models (PFGM++)
with the high-quality, single-step sampling of consistency models. The
proposed method first learns a trajectory between a noise distribution and the
posterior distribution of interest by training PFGM++ in a supervised fashion.
These pre-trained PFGM++ are subsequently "distilled" into Poisson flow
consistency models (PFCM) via an updated version of consistency distillation.
We call this approach posterior sampling Poisson flow consistency models
(PS-PFCM). Our results indicate that the added flexibility of tuning the
hyperparameter D, the dimensionality of the augmentation variables in PFGM++,
allows us to outperform consistency models, a current state-of-the-art
diffusion-style model with NFE=1, on clinical low-dose CT images. Notably, PFCM
is in itself a novel family of deep generative models and we provide initial
results on the CIFAR-10 dataset.
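The two-stage recipe in the abstract — first learn a trajectory with a pre-trained teacher, then "distill" it into a single-step student that is consistent along that trajectory — can be sketched on a toy problem. Everything below (the linear teacher ODE, the scalar student parameterization, all function names) is a hypothetical illustration of consistency distillation in general, not the authors' PFCM implementation.

```python
import numpy as np

# Toy consistency-distillation step (illustrative sketch only).
# A pre-trained "teacher" defines an ODE trajectory x(t); the "student"
# f(x, t; theta) is trained so its output is the same at adjacent points
# on that trajectory: f(x_t, t) ~= f(x_{t-dt}, t-dt). In practice the
# target network uses frozen (EMA) weights, mimicked here by theta_ema.

def teacher_ode_step(x, t, dt):
    """One Euler step along the toy probability-flow ODE dx/dt = x / t.
    Its exact solutions are x(t) = c * t, so the Euler step happens to be
    exact here; this stands in for the pre-trained PFGM++ trajectory."""
    return x + dt * (x / t)

def student(x, t, theta):
    """Toy linear student f(x, t) = theta * x / t, mapping a point on the
    trajectory back toward the clean sample."""
    return theta * x / t

def distillation_loss(theta, theta_ema, x_t, t, dt):
    """Consistency loss: the student at (x_t, t) should match a frozen
    (EMA-weight) copy evaluated at the teacher's next point."""
    x_prev = teacher_ode_step(x_t, t, -dt)        # teacher step toward t - dt
    target = student(x_prev, t - dt, theta_ema)   # stop-gradient target
    return float(np.mean((student(x_t, t, theta) - target) ** 2))

rng = np.random.default_rng(0)
x_t = rng.normal(size=8) * 2.0                    # noisy samples at time t = 2

# A fully consistent student (theta == theta_ema on this linear toy)
# achieves zero loss; any mismatch between online and target weights
# produces a positive penalty that training would shrink.
loss_consistent = distillation_loss(1.0, 1.0, x_t, t=2.0, dt=0.5)
loss_mismatch = distillation_loss(1.2, 1.0, x_t, t=2.0, dt=0.5)
```

In the paper's setting the teacher step would follow the learned PFGM++ trajectory between the noise distribution and the posterior of interest, and the distilled PFCM would then denoise in a single function evaluation (NFE=1).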
Related papers
- Flow Generator Matching [35.371071097381346]
Flow Generator Matching (FGM) is designed to accelerate the sampling of flow-matching models into a one-step generation.
On the CIFAR10 unconditional generation benchmark, our one-step FGM model achieves a new record Fréchet Inception Distance (FID) score of 3.08.
MM-DiT-FGM one-step text-to-image model demonstrates outstanding industry-level performance.
arXiv Detail & Related papers (2024-10-25T05:41:28Z) - Tuning Timestep-Distilled Diffusion Model Using Pairwise Sample Optimization [97.35427957922714]
We present an algorithm named pairwise sample optimization (PSO), which enables the direct fine-tuning of an arbitrary timestep-distilled diffusion model.
PSO introduces additional reference images sampled from the current time-step distilled model, and increases the relative likelihood margin between the training images and reference images.
We show that PSO can directly adapt distilled models to human-preferred generation with both offline and online-generated pairwise preference image data.
arXiv Detail & Related papers (2024-10-04T07:05:16Z) - Provable Statistical Rates for Consistency Diffusion Models [87.28777947976573]
Despite the state-of-the-art performance, diffusion models are known for their slow sample generation due to the extensive number of steps involved.
This paper contributes towards the first statistical theory for consistency models, formulating their training as a distribution discrepancy minimization problem.
arXiv Detail & Related papers (2024-06-23T20:34:18Z) - TC-DiffRecon: Texture coordination MRI reconstruction method based on
diffusion model and modified MF-UNet method [2.626378252978696]
We propose a novel diffusion model-based MRI reconstruction method, named TC-DiffRecon, which does not rely on a specific acceleration factor for training.
We also suggest the incorporation of the MF-UNet module, designed to enhance the quality of MRI images generated by the model.
arXiv Detail & Related papers (2024-02-17T13:09:00Z) - PPFM: Image denoising in photon-counting CT using single-step posterior
sampling Poisson flow generative models [3.7080630916211152]
We present posterior sampling Poisson flow generative models (PPFM), a novel image denoising technique for low-dose and photon-counting CT.
Our results shed light on the benefits of the PFGM++ framework compared to diffusion models.
arXiv Detail & Related papers (2023-12-15T12:49:08Z) - Guided Flows for Generative Modeling and Decision Making [55.42634941614435]
We show that Guided Flows significantly improves the sample quality in conditional image generation and zero-shot text synthesis-to-speech.
Notably, we are the first to apply flow models for plan generation in the offline reinforcement learning setting, achieving a speedup compared to diffusion models.
arXiv Detail & Related papers (2023-11-22T15:07:59Z) - Latent Consistency Models: Synthesizing High-Resolution Images with
Few-Step Inference [60.32804641276217]
We propose Latent Consistency Models (LCMs), enabling swift inference with minimal steps on any pre-trained LDMs.
A high-quality 768 x 768 2~4-step LCM takes only 32 A100 GPU hours for training.
We also introduce Latent Consistency Fine-tuning (LCF), a novel method that is tailored for fine-tuning LCMs on customized image datasets.
arXiv Detail & Related papers (2023-10-06T17:11:58Z) - Hierarchical Integration Diffusion Model for Realistic Image Deblurring [71.76410266003917]
Diffusion models (DMs) have been introduced in image deblurring and exhibited promising performance.
We propose the Hierarchical Integration Diffusion Model (HI-Diff), for realistic image deblurring.
Experiments on synthetic and real-world blur datasets demonstrate that our HI-Diff outperforms state-of-the-art methods.
arXiv Detail & Related papers (2023-05-22T12:18:20Z) - Refining Deep Generative Models via Discriminator Gradient Flow [18.406499703293566]
Discriminator Gradient flow (DGflow) is a new technique that improves generated samples via the gradient flow of entropy-regularized f-divergences.
We show that DGflow leads to significant improvement in the quality of generated samples for a variety of generative models.
arXiv Detail & Related papers (2020-12-01T19:10:15Z) - Normalizing Flows with Multi-Scale Autoregressive Priors [131.895570212956]
We introduce channel-wise dependencies in their latent space through multi-scale autoregressive priors (mAR).
Our mAR prior for models with split coupling flow layers (mAR-SCF) can better capture dependencies in complex multimodal data.
We show that mAR-SCF allows for improved image generation quality, with gains in FID and Inception scores compared to state-of-the-art flow-based models.
arXiv Detail & Related papers (2020-04-08T09:07:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including all of its content) and is not responsible for any consequences of its use.