CoreDiff: Contextual Error-Modulated Generalized Diffusion Model for
Low-Dose CT Denoising and Generalization
- URL: http://arxiv.org/abs/2304.01814v2
- Date: Fri, 6 Oct 2023 12:57:53 GMT
- Title: CoreDiff: Contextual Error-Modulated Generalized Diffusion Model for
Low-Dose CT Denoising and Generalization
- Authors: Qi Gao, Zilong Li, Junping Zhang, Yi Zhang, Hongming Shan
- Abstract summary: Low-dose computed tomography (LDCT) images suffer from noise and artifacts due to photon starvation and electronic noise.
This paper presents a novel COntextual eRror-modulated gEneralized Diffusion model for low-dose CT (LDCT) denoising, termed CoreDiff.
- Score: 41.64072751889151
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Low-dose computed tomography (CT) images suffer from noise and artifacts due
to photon starvation and electronic noise. Recently, some works have attempted
to use diffusion models to address the over-smoothness and training instability
encountered by previous deep-learning-based denoising models. However,
diffusion models suffer from long inference times due to the large number of
sampling steps involved. Very recently, the cold diffusion model was proposed
to generalize classical diffusion models, offering greater flexibility.
Inspired by cold diffusion, this paper presents a novel COntextual
eRror-modulated gEneralized Diffusion model for low-dose CT (LDCT) denoising,
termed CoreDiff. First, CoreDiff uses LDCT images in place of random Gaussian
noise and employs a novel mean-preserving degradation operator to mimic the
physical process of CT degradation; because the informative LDCT image serves
as the starting point of the sampling process, the number of sampling steps is
significantly reduced. Second, to alleviate
the error accumulation problem caused by the imperfect restoration operator in
the sampling process, we propose a novel ContextuaL Error-modulAted Restoration
Network (CLEAR-Net), which leverages contextual information to constrain the
sampling process against structural distortion and modulates the time step
embedding features for better alignment with the input at the next time step.
Third, to generalize rapidly to a new, unseen dose level with as few resources
as possible, we devise a one-shot learning framework that enables CoreDiff to
generalize faster and better using only a single LDCT image (un)paired with a
normal-dose CT (NDCT) image.
Extensive experimental results on two datasets demonstrate that our CoreDiff
outperforms competing methods in denoising and generalization performance, with
a clinically acceptable inference time. Source code is made available at
https://github.com/qgao21/CoreDiff.
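To make the abstract's mechanics concrete, below is a minimal PyTorch-style
sketch of a mean-preserving degradation operator and the shortened sampling
loop it enables. The linear schedule, the function names (degrade, sample),
and the simplified restore_net interface (a stand-in for CLEAR-Net, omitting
its contextual-slice inputs and error modulation) are illustrative assumptions
rather than the authors' implementation; see the linked repository for the
actual code.

```python
import torch

T = 10  # a small number of sampling steps; feasible because sampling starts
        # from the informative LDCT image rather than pure Gaussian noise

def degrade(x0, xT, t, T=T):
    """Mean-preserving degradation (illustrative linear schedule): a convex
    combination of the clean NDCT image x0 and the LDCT image xT. The weights
    sum to 1, so the image mean is preserved at every time step t."""
    alpha = t / T
    return (1.0 - alpha) * x0 + alpha * xT

@torch.no_grad()
def sample(restore_net, x_ldct, T=T):
    """Cold-diffusion-style sampling that starts from the LDCT image. Here
    restore_net is a hypothetical stand-in for CLEAR-Net: it predicts the
    clean image from the current state x_t and the time step t."""
    x_t = x_ldct
    for t in range(T, 0, -1):
        x0_hat = restore_net(x_t, torch.tensor([t]))  # imperfect restoration
        # Re-degrade the estimate to steps t and t-1 and take the difference;
        # this cold-diffusion update cancels part of the restoration error
        # that would otherwise accumulate across steps.
        x_t = x_t - degrade(x0_hat, x_ldct, t) + degrade(x0_hat, x_ldct, t - 1)
    return x_t
```

Starting from x_ldct instead of noise means the chain only has to remove the
dose-dependent degradation, which is why a handful of steps can suffice.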
Related papers
- BlindDiff: Empowering Degradation Modelling in Diffusion Models for Blind Image Super-Resolution [52.47005445345593]
BlindDiff is a diffusion-model (DM)-based blind super-resolution method that tackles blind degradation settings in single image super-resolution (SISR).
BlindDiff seamlessly integrates the MAP-based optimization into DMs.
Experiments on both synthetic and real-world datasets show that BlindDiff achieves the state-of-the-art performance.
arXiv Detail & Related papers (2024-03-15T11:21:34Z) - TC-DiffRecon: Texture coordination MRI reconstruction method based on
diffusion model and modified MF-UNet method [2.626378252978696]
We propose a novel diffusion model-based MRI reconstruction method, named TC-DiffRecon, which does not rely on a specific acceleration factor for training.
We also suggest the incorporation of the MF-UNet module, designed to enhance the quality of MRI images generated by the model.
arXiv Detail & Related papers (2024-02-17T13:09:00Z) - Resfusion: Denoising Diffusion Probabilistic Models for Image Restoration Based on Prior Residual Noise [34.65659277870287]
Research on denoising diffusion models has extended their application to the field of image restoration.
We propose Resfusion, a framework that incorporates the residual term into the diffusion forward process.
We show that Resfusion exhibits competitive performance on ISTD dataset, LOL dataset and Raindrop dataset with only five sampling steps.
arXiv Detail & Related papers (2023-11-25T02:09:38Z) - Latent Consistency Models: Synthesizing High-Resolution Images with
Few-Step Inference [60.32804641276217]
We propose Latent Consistency Models (LCMs), enabling swift inference with minimal steps on any pre-trained LDMs.
A high-quality 768 x 768 2~4-step LCM takes only 32 A100 GPU hours for training.
We also introduce Latent Consistency Fine-tuning (LCF), a novel method that is tailored for fine-tuning LCMs on customized image datasets.
arXiv Detail & Related papers (2023-10-06T17:11:58Z) - Gradpaint: Gradient-Guided Inpainting with Diffusion Models [71.47496445507862]
Denoising Diffusion Probabilistic Models (DDPMs) have recently achieved remarkable results in conditional and unconditional image generation.
We present GradPaint, which steers the generation towards a globally coherent image.
GradPaint generalizes well to diffusion models trained on various datasets, improving upon current state-of-the-art supervised and unsupervised methods.
arXiv Detail & Related papers (2023-09-18T09:36:24Z) - PartDiff: Image Super-resolution with Partial Diffusion Models [3.8435187580887717]
Denoising diffusion probabilistic models (DDPMs) have achieved impressive performance on various image generation tasks.
DDPMs generate new data by iteratively denoising from random noise.
However, diffusion-based generative models suffer from high computational costs due to the large number of denoising steps.
This paper proposes the Partial Diffusion Model (PartDiff), which diffuses the image to an intermediate latent state instead of pure random noise.
arXiv Detail & Related papers (2023-07-21T22:11:23Z) - ACDMSR: Accelerated Conditional Diffusion Models for Single Image
Super-Resolution [84.73658185158222]
We propose a diffusion model-based super-resolution method called ACDMSR.
Our method adapts the standard diffusion model to perform super-resolution through a deterministic iterative denoising process.
Our approach generates more visually realistic counterparts for low-resolution images, emphasizing its effectiveness in practical scenarios.
arXiv Detail & Related papers (2023-07-03T06:49:04Z) - Q-Diffusion: Quantizing Diffusion Models [52.978047249670276]
Post-training quantization (PTQ) is considered a go-to compression method for other tasks.
We propose a novel PTQ method specifically tailored to the unique multi-timestep pipeline and model architecture of diffusion models.
We show that our proposed method is able to quantize full-precision unconditional diffusion models into 4-bit while maintaining comparable performance.
arXiv Detail & Related papers (2023-02-08T19:38:59Z) - DOLCE: A Model-Based Probabilistic Diffusion Framework for Limited-Angle
CT Reconstruction [42.028139152832466]
Limited-Angle Computed Tomography (LACT) is a non-destructive evaluation technique used in a variety of applications ranging from security to medicine.
We present DOLCE, a new deep model-based framework for LACT that uses a conditional diffusion model as an image prior.
arXiv Detail & Related papers (2022-11-22T15:30:38Z) - Unsupervised Denoising of Retinal OCT with Diffusion Probabilistic Model [0.2578242050187029]
We present a fully unsupervised diffusion probabilistic model that learns from noise rather than signal.
Our method can significantly improve the image quality with a simple working pipeline and a small amount of training data.
arXiv Detail & Related papers (2022-01-27T19:02:38Z)