BlurDM: A Blur Diffusion Model for Image Deblurring
- URL: http://arxiv.org/abs/2512.03979v1
- Date: Wed, 03 Dec 2025 17:10:44 GMT
- Title: BlurDM: A Blur Diffusion Model for Image Deblurring
- Authors: Jin-Ting He, Fu-Jen Tsai, Yan-Tsung Peng, Min-Hung Chen, Chia-Wen Lin, Yen-Yu Lin,
- Abstract summary: We present a Blur Diffusion Model (BlurDM) for image deblurring. BlurDM implicitly models the blur formation process through a dual-diffusion forward scheme. During the reverse generation process, we derive a dual denoising and deblurring formulation. Experiments demonstrate that BlurDM significantly and consistently enhances existing deblurring methods.
- Score: 52.34718859688771
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Diffusion models show promise for dynamic scene deblurring; however, existing studies often fail to leverage the intrinsic nature of the blurring process within diffusion models, limiting their full potential. To address it, we present a Blur Diffusion Model (BlurDM), which seamlessly integrates the blur formation process into diffusion for image deblurring. Observing that motion blur stems from continuous exposure, BlurDM implicitly models the blur formation process through a dual-diffusion forward scheme, diffusing both noise and blur onto a sharp image. During the reverse generation process, we derive a dual denoising and deblurring formulation, enabling BlurDM to recover the sharp image by simultaneously denoising and deblurring, given pure Gaussian noise conditioned on the blurred image as input. Additionally, to efficiently integrate BlurDM into deblurring networks, we perform BlurDM in the latent space, forming a flexible prior generation network for deblurring. Extensive experiments demonstrate that BlurDM significantly and consistently enhances existing deblurring methods on four benchmark datasets. The source code is available at https://github.com/Jin-Ting-He/BlurDM.
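The dual-diffusion forward scheme described in the abstract, which diffuses both noise and blur onto a sharp image, can be sketched roughly as follows. This is an illustrative reading only: the linear blur ramp, the square-root noise schedule, and all variable names are assumptions for exposition, not the paper's exact formulation.

```python
import numpy as np

def dual_diffusion_forward(x0, blur_residual, t, T, eps=None):
    """One forward step of a dual-diffusion scheme (illustrative sketch).

    Interpolates blur onto the sharp image x0 while injecting Gaussian
    noise, so that t=0 yields the sharp image and t=T the fully blurred,
    noisy one. The schedules here are simple ramps chosen for clarity,
    not BlurDM's actual parameterization.
    """
    if eps is None:
        eps = np.random.randn(*x0.shape)
    blur_weight = t / T        # fraction of the blur applied so far
    sigma = np.sqrt(t / T)     # noise level grows with t
    return x0 + blur_weight * blur_residual + sigma * eps
```

The reverse process would then undo both terms jointly, recovering the sharp image by simultaneously denoising and deblurring, as the abstract describes.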
Related papers
- Residual-based Efficient Bidirectional Diffusion Model for Image Dehazing and Haze Generation [17.043633726365233]
Current deep dehazing methods only focus on removing haze from hazy images, lacking the capability to translate between hazy and haze-free images. We propose a residual-based efficient bidirectional diffusion model (RBDM) that can model the conditional distributions for both dehazing and haze generation. Our RBDM successfully implements size-agnostic bidirectional transitions between haze-free and hazy images with only 15 sampling steps.
arXiv Detail & Related papers (2025-08-15T01:00:15Z)
- BokehDiff: Neural Lens Blur with One-Step Diffusion [62.59018200914645]
We introduce BokehDiff, a lens blur rendering method that achieves physically accurate and visually appealing outcomes. Our method employs a physics-inspired self-attention module that aligns with the image formation process. We adapt the diffusion model to the one-step inference scheme without introducing additional noise, and achieve results of high quality and fidelity.
arXiv Detail & Related papers (2025-07-24T03:23:19Z)
- One-Step Diffusion Model for Image Motion-Deblurring [85.76149042561507]
We propose a one-step diffusion model for deblurring (OSDD), a novel framework that reduces the denoising process to a single step. To tackle fidelity loss in diffusion models, we introduce an enhanced variational autoencoder (eVAE), which improves structural restoration. Our method achieves strong performance on both full and no-reference metrics.
arXiv Detail & Related papers (2025-03-09T09:39:57Z)
- ID-Blau: Image Deblurring by Implicit Diffusion-based reBLurring AUgmentation [45.582704677784825]
Implicit Diffusion-based reBLurring AUgmentation (ID-Blau) is proposed to generate diverse blurred images by simulating motion trajectories in a continuous space.
By sampling diverse blur conditions, ID-Blau can generate various blurred images unseen in the training set.
Results demonstrate that ID-Blau can produce realistic blurred images for training and thus significantly improve performance for state-of-the-art deblurring models.
arXiv Detail & Related papers (2023-12-18T07:47:43Z)
- Gradpaint: Gradient-Guided Inpainting with Diffusion Models [71.47496445507862]
Denoising Diffusion Probabilistic Models (DDPMs) have recently achieved remarkable results in conditional and unconditional image generation.
We present GradPaint, which steers the generation towards a globally coherent image.
GradPaint generalizes well to diffusion models trained on various datasets, improving upon current state-of-the-art supervised and unsupervised methods.
arXiv Detail & Related papers (2023-09-18T09:36:24Z)
- Residual Denoising Diffusion Models [12.698791701225499]
We propose a novel dual diffusion process that decouples the traditional single denoising diffusion process into residual diffusion and noise diffusion.
This dual diffusion framework expands the denoising-based diffusion models into a unified and interpretable model for both image generation and restoration.
We provide code and pre-trained models to encourage further exploration, application, and development of our innovative framework.
arXiv Detail & Related papers (2023-08-25T23:54:15Z)
- Low-Light Image Enhancement with Wavelet-based Diffusion Models [50.632343822790006]
Diffusion models have achieved promising results in image restoration tasks, yet suffer from time-consuming inference, excessive computational resource consumption, and unstable restoration.
We propose a robust and efficient Diffusion-based Low-Light image enhancement approach, dubbed DiffLL.
arXiv Detail & Related papers (2023-06-01T03:08:28Z)
- VideoFusion: Decomposed Diffusion Models for High-Quality Video Generation [88.49030739715701]
This work presents a decomposed diffusion process via resolving the per-frame noise into a base noise that is shared among all frames and a residual noise that varies along the time axis.
Experiments on various datasets confirm that our approach, termed as VideoFusion, surpasses both GAN-based and diffusion-based alternatives in high-quality video generation.
arXiv Detail & Related papers (2023-03-15T02:16:39Z)
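The noise decomposition that VideoFusion describes, one base noise map shared by all frames plus a residual that varies along the time axis, can be sketched as below. The `base_ratio` knob and the variance-preserving square-root mix are assumptions chosen for illustration, not the paper's notation.

```python
import numpy as np

def decomposed_video_noise(num_frames, frame_shape, base_ratio=0.5, rng=None):
    """Sketch of a decomposed per-frame noise model (illustrative).

    Each frame's noise mixes one base noise map shared across all frames
    with an independent per-frame residual. The square-root weights keep
    the total variance of each frame's noise at 1.
    """
    rng = np.random.default_rng() if rng is None else rng
    base = rng.standard_normal(frame_shape)       # shared among all frames
    frames = []
    for _ in range(num_frames):
        resid = rng.standard_normal(frame_shape)  # varies along the time axis
        frames.append(np.sqrt(base_ratio) * base
                      + np.sqrt(1.0 - base_ratio) * resid)
    return np.stack(frames)
```

With `base_ratio=1.0` every frame receives identical noise; with `base_ratio=0.0` the frames are fully independent, so the ratio controls temporal correlation of the injected noise.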
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.