BUFF: Bayesian Uncertainty Guided Diffusion Probabilistic Model for Single Image Super-Resolution
- URL: http://arxiv.org/abs/2504.03490v1
- Date: Fri, 04 Apr 2025 14:43:45 GMT
- Title: BUFF: Bayesian Uncertainty Guided Diffusion Probabilistic Model for Single Image Super-Resolution
- Authors: Zihao He, Shengchuan Zhang, Runze Hu, Yunhang Shen, Yan Zhang,
- Abstract summary: We introduce the Bayesian Uncertainty Guided Diffusion Probabilistic Model (BUFF). BUFF distinguishes itself by incorporating a Bayesian network to generate high-resolution uncertainty masks. It significantly mitigates artifacts and blurring in areas characterized by complex textures and fine details.
- Score: 19.568467335629094
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Super-resolution (SR) techniques are critical for enhancing image quality, particularly in scenarios where high-resolution imagery is essential yet limited by hardware constraints. Existing diffusion models for SR have relied predominantly on Gaussian models for noise generation, which often fall short when dealing with the complex and variable texture inherent in natural scenes. To address these deficiencies, we introduce the Bayesian Uncertainty Guided Diffusion Probabilistic Model (BUFF). BUFF distinguishes itself by incorporating a Bayesian network to generate high-resolution uncertainty masks. These masks guide the diffusion process, allowing for the adjustment of noise intensity in a manner that is both context-aware and adaptive. This novel approach not only enhances the fidelity of super-resolved images to their original high-resolution counterparts but also significantly mitigates artifacts and blurring in areas characterized by complex textures and fine details. The model demonstrates exceptional robustness against complex noise patterns and showcases superior adaptability in handling textures and edges within images. Empirical evidence, supported by visual results, illustrates the model's robustness, especially in challenging scenarios, and its effectiveness in addressing common SR issues such as blurring. Experimental evaluations conducted on the DIV2K dataset reveal that BUFF achieves a notable improvement, with a +0.61 gain in SSIM over the baseline on BSD100, surpassing traditional diffusion approaches by an average additional +0.20 dB PSNR gain. These findings underscore the potential of Bayesian methods in enhancing diffusion processes for SR, paving the way for future advancements in the field.
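The core idea in the abstract, using a per-pixel uncertainty mask to modulate diffusion noise intensity, can be sketched as a forward-diffusion step with a spatially varying noise scale. The function names and the linear modulation rule below are illustrative assumptions for exposition, not BUFF's published implementation:

```python
import math
import random

def per_pixel_sigma(uncertainty, base_sigma):
    """Illustrative modulation rule (an assumption, not BUFF's exact form):
    pixels with higher Bayesian uncertainty receive proportionally more noise."""
    return [base_sigma * (1.0 + u) for u in uncertainty]

def guided_forward_step(x0, uncertainty, alpha_bar_t, base_sigma=1.0, rng=None):
    """One forward-diffusion step on a flattened image:
    x_t[i] = sqrt(alpha_bar_t) * x0[i] + sqrt(1 - alpha_bar_t) * sigma[i] * eps,
    where sigma[i] comes from the uncertainty mask rather than a single
    global Gaussian scale."""
    rng = rng or random.Random(0)
    sigmas = per_pixel_sigma(uncertainty, base_sigma)
    return [
        math.sqrt(alpha_bar_t) * x
        + math.sqrt(1.0 - alpha_bar_t) * s * rng.gauss(0.0, 1.0)
        for x, s in zip(x0, sigmas)
    ]
```

With `alpha_bar_t = 1.0` the step is the identity (no noise is injected), while smaller values let high-uncertainty pixels, e.g. textured or edge regions, receive proportionally stronger perturbations.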
Related papers
- R-Meshfusion: Reinforcement Learning Powered Sparse-View Mesh Reconstruction with Diffusion Priors [6.102039677329314]
Mesh reconstruction from multi-view images degrades significantly under sparse-view conditions.
Recent advances in diffusion models have demonstrated strong capabilities in synthesizing novel views from limited inputs.
We propose a novel framework that leverages diffusion models to enhance sparse-view mesh reconstruction in a principled and reliable manner.
arXiv Detail & Related papers (2025-04-16T10:23:59Z)
- InpDiffusion: Image Inpainting Localization via Conditional Diffusion Models [10.213390634031049]
Current IIL methods face two main challenges: a tendency towards overconfidence and difficulty in detecting subtle tampering boundaries.
We propose a new paradigm that treats IIL as a conditional mask generation task utilizing diffusion models.
Our method, InpDiffusion, utilizes the denoising process enhanced by the integration of image semantic conditions to progressively refine predictions.
arXiv Detail & Related papers (2025-01-06T07:32:12Z)
- Diffusion Prior Interpolation for Flexibility Real-World Face Super-Resolution [48.34173818491552]
Diffusion Prior Interpolation (DPI) can balance consistency and diversity and can be seamlessly integrated into pre-trained models.
In extensive experiments conducted on synthetic and real datasets, DPI demonstrates superiority over SOTA FSR methods.
arXiv Detail & Related papers (2024-12-21T09:28:44Z)
- FaithDiff: Unleashing Diffusion Priors for Faithful Image Super-resolution [48.88184541515326]
We propose a simple and effective method, named FaithDiff, to fully harness the power of latent diffusion models (LDMs) for faithful image SR.
In contrast to existing diffusion-based SR methods that freeze the diffusion model pre-trained on high-quality images, we propose to unleash the diffusion prior to identify useful information and recover faithful structures.
arXiv Detail & Related papers (2024-11-27T23:58:03Z)
- Digging into contrastive learning for robust depth estimation with diffusion models [55.62276027922499]
We propose a novel robust depth estimation method called D4RD.
It features a custom contrastive learning mode tailored for diffusion models to mitigate performance degradation in complex environments.
In experiments, D4RD surpasses existing state-of-the-art solutions on synthetic corruption datasets and real-world weather conditions.
arXiv Detail & Related papers (2024-04-15T14:29:47Z)
- Adaptive Multi-modal Fusion of Spatially Variant Kernel Refinement with Diffusion Model for Blind Image Super-Resolution [23.34647832483913]
We introduce a framework known as Adaptive Multi-modal Fusion of Spatially Variant Kernel Refinement with Diffusion Model for Blind Image Super-Resolution.
We also introduce the Adaptive Multi-Modal Fusion (AMF) module to align the information from three modalities: low-resolution images, depth maps, and blur kernels.
arXiv Detail & Related papers (2024-03-09T06:01:25Z)
- Improving the Stability and Efficiency of Diffusion Models for Content Consistent Super-Resolution [18.71638301931374]
Generative priors of pre-trained latent diffusion models (DMs) have demonstrated great potential to enhance the visual quality of image super-resolution (SR) results.
We propose to partition the generative SR process into two stages, where the DM is employed for reconstructing image structures and the GAN is employed for improving fine-grained details.
Once trained, our proposed method, namely content consistent super-resolution (CCSR), allows flexible use of different diffusion steps in the inference stage without re-training.
arXiv Detail & Related papers (2023-12-30T10:22:59Z)
- Steerable Conditional Diffusion for Out-of-Distribution Adaptation in Medical Image Reconstruction [75.91471250967703]
We introduce a novel sampling framework called Steerable Conditional Diffusion.
This framework adapts the diffusion model, concurrently with image reconstruction, based solely on the information provided by the available measurement.
We achieve substantial enhancements in out-of-distribution performance across diverse imaging modalities.
arXiv Detail & Related papers (2023-08-28T08:47:06Z)
- ACDMSR: Accelerated Conditional Diffusion Models for Single Image Super-Resolution [84.73658185158222]
We propose a diffusion model-based super-resolution method called ACDMSR.
Our method adapts the standard diffusion model to perform super-resolution through a deterministic iterative denoising process.
Our approach generates more visually realistic counterparts for low-resolution images, emphasizing its effectiveness in practical scenarios.
arXiv Detail & Related papers (2023-07-03T06:49:04Z)
- Hierarchical Integration Diffusion Model for Realistic Image Deblurring [71.76410266003917]
Diffusion models (DMs) have been introduced in image deblurring and exhibited promising performance.
We propose the Hierarchical Integration Diffusion Model (HI-Diff), for realistic image deblurring.
Experiments on synthetic and real-world blur datasets demonstrate that our HI-Diff outperforms state-of-the-art methods.
arXiv Detail & Related papers (2023-05-22T12:18:20Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.