RestoreGrad: Signal Restoration Using Conditional Denoising Diffusion Models with Jointly Learned Prior
- URL: http://arxiv.org/abs/2502.13574v1
- Date: Wed, 19 Feb 2025 09:29:46 GMT
- Authors: Ching-Hua Lee, Chouchang Yang, Jaejin Cho, Yashas Malur Saidutta, Rakshith Sharma Srinivasa, Yilin Shen, Hongxia Jin
- Abstract summary: We propose to improve conditional DDPMs for signal restoration by leveraging a more informative prior.
The proposed framework, called RestoreGrad, seamlessly integrates DDPMs into the variational autoencoder framework.
On speech and image restoration tasks, we show that RestoreGrad converges faster (5-10 times fewer training steps) while achieving better restored-signal quality.
- Abstract: Denoising diffusion probabilistic models (DDPMs) can be utilized for recovering a clean signal from its degraded observation(s) by conditioning the model on the degraded signal. The degraded signals are themselves contaminated versions of the clean signals; due to this correlation, they may encompass certain useful information about the target clean data distribution. However, the existing adoption of the standard Gaussian as the prior distribution discards such information, resulting in sub-optimal performance. In this paper, we propose to improve conditional DDPMs for signal restoration by leveraging a more informative prior that is jointly learned with the diffusion model. The proposed framework, called RestoreGrad, seamlessly integrates DDPMs into the variational autoencoder framework and exploits the correlation between the degraded and clean signals to encode a better diffusion prior. On speech and image restoration tasks, we show that RestoreGrad converges faster (5-10 times fewer training steps) to better restored-signal quality than existing DDPM baselines, and is more robust to using fewer sampling steps at inference time (2-2.5 times fewer), demonstrating the advantages of leveraging a jointly learned prior for efficiency improvements in the diffusion process.
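The core idea of diffusing toward an informative, learned prior (rather than a standard Gaussian) can be illustrated with a toy sketch. This is a minimal simplification of the concept, not the authors' implementation: the linear "encoder", the noise schedule, and the shifted forward-process formula below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy noise schedule (hypothetical values, chosen only so the process
# nearly forgets x0 by the final step).
T = 50
betas = np.linspace(1e-4, 0.2, T)
alphas_bar = np.cumprod(1.0 - betas)

def encode_prior(y, w):
    """Stand-in 'encoder' mapping the degraded signal y to a prior mean.
    A real model would use a neural network trained jointly with the DDPM."""
    return w * y

def q_sample(x0, mu, t, eps):
    """Shifted forward process: instead of diffusing x0 toward N(0, I),
    diffuse it toward a prior centered at mu:
        x_t = sqrt(a_t) * x0 + (1 - sqrt(a_t)) * mu + sqrt(1 - a_t) * eps
    where a_t is the cumulative product of (1 - beta)."""
    a = alphas_bar[t]
    return np.sqrt(a) * x0 + (1.0 - np.sqrt(a)) * mu + np.sqrt(1.0 - a) * eps

# As t -> T, x_t concentrates around mu (the encoded degraded signal)
# rather than around zero, giving the reverse (restoration) process a
# more informative starting point.
x0 = rng.standard_normal(4)              # clean signal
y = x0 + 0.3 * rng.standard_normal(4)    # degraded observation
mu = encode_prior(y, w=0.9)
eps = rng.standard_normal(4)
x_T = q_sample(x0, mu, T - 1, eps)
```

Because the prior mean is derived from the degraded signal, which is correlated with the clean target, the reverse process starts much closer to the solution than a sample from a standard Gaussian would; this is the intuition behind the reported reductions in training and sampling steps.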
Related papers
- RDPM: Solve Diffusion Probabilistic Models via Recurrent Token Prediction
Diffusion Probabilistic Models (DPMs) have emerged as the de facto approach for high-fidelity image synthesis.
We introduce a novel generative framework, the Recurrent Diffusion Probabilistic Model (RDPM), which enhances the diffusion process through a recurrent token prediction mechanism.
arXiv Detail & Related papers (2024-12-24T12:28:19Z)
- RED: Residual Estimation Diffusion for Low-Dose PET Sinogram Reconstruction
We propose a diffusion model named residual estimation diffusion (RED).
From the perspective of the diffusion mechanism, RED uses the residual between sinograms to replace Gaussian noise in the diffusion process.
Experiments show that RED effectively improves the quality of low-dose sinograms as well as the reconstruction results.
arXiv Detail & Related papers (2024-11-08T06:19:29Z)
- LoRID: Low-Rank Iterative Diffusion for Adversarial Purification
This work presents an information-theoretic examination of diffusion-based purification methods.
We introduce LoRID, a novel Low-Rank Iterative Diffusion purification method designed to remove adversarial perturbations with low intrinsic purification errors.
LoRID achieves superior robustness on the CIFAR-10/100, CelebA-HQ, and ImageNet datasets under both white-box and black-box settings.
arXiv Detail & Related papers (2024-09-12T17:51:25Z)
- Integrating Amortized Inference with Diffusion Models for Learning Clean Distribution from Corrupted Images
Diffusion models (DMs) have emerged as powerful generative models for solving inverse problems.
FlowDiff is a joint training paradigm that leverages a conditional normalizing flow model to facilitate the training of diffusion models on corrupted data sources.
Our experiments show that FlowDiff can effectively learn clean distributions across a wide range of corrupted data sources.
arXiv Detail & Related papers (2024-07-15T18:33:20Z)
- DetDiffusion: Synergizing Generative and Perceptive Models for Enhanced Data Generation and Perception
Current perceptive models heavily depend on resource-intensive datasets.
We introduce a perception-aware loss (P.A. loss) through segmentation, improving both quality and controllability.
Our method customizes data augmentation by extracting and utilizing a perception-aware attribute (P.A. Attr) during generation.
arXiv Detail & Related papers (2024-03-20T04:58:03Z)
- BlindDiff: Empowering Degradation Modelling in Diffusion Models for Blind Image Super-Resolution
BlindDiff is a DM-based blind SR method that tackles blind degradation settings in SISR.
BlindDiff seamlessly integrates MAP-based optimization into DMs.
Experiments on both synthetic and real-world datasets show that BlindDiff achieves state-of-the-art performance.
arXiv Detail & Related papers (2024-03-15T11:21:34Z)
- Conditional Denoising Diffusion Probabilistic Models for Data Reconstruction Enhancement in Wireless Communications
Conditional denoising diffusion probabilistic models (DDPMs) are proposed to enhance data transmission and reconstruction over wireless channels.
The key idea is to leverage the generative prior of diffusion models in learning a "noisy-to-clean" transformation of the information signal.
The proposed scheme is beneficial for communication scenarios in which prior knowledge of the information content is available.
arXiv Detail & Related papers (2023-10-30T11:33:01Z)
- Low-Light Image Enhancement with Wavelet-based Diffusion Models
Diffusion models have achieved promising results in image restoration tasks, yet suffer from slow inference, excessive computational resource consumption, and unstable restoration.
We propose a robust and efficient diffusion-based low-light image enhancement approach, dubbed DiffLL.
arXiv Detail & Related papers (2023-06-01T03:08:28Z)
- Exploiting Diffusion Prior for Real-World Image Super-Resolution
We present a novel approach to leverage prior knowledge encapsulated in pre-trained text-to-image diffusion models for blind super-resolution.
By employing our time-aware encoder, we can achieve promising restoration results without altering the pre-trained synthesis model.
arXiv Detail & Related papers (2023-05-11T17:55:25Z)
- Dimensionality-Varying Diffusion Process
Diffusion models learn to reverse a signal destruction process to generate new data.
We make a theoretical generalization of the forward diffusion process via signal decomposition.
We show that our strategy facilitates high-resolution image synthesis and improves the FID of a diffusion model trained on FFHQ at $1024\times1024$ resolution from 52.40 to 10.46.
arXiv Detail & Related papers (2022-11-29T09:05:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.