Diffusion Posterior Sampling for Super-Resolution under Gaussian Measurement Noise
- URL: http://arxiv.org/abs/2512.21797v1
- Date: Thu, 25 Dec 2025 22:22:53 GMT
- Title: Diffusion Posterior Sampling for Super-Resolution under Gaussian Measurement Noise
- Authors: Abu Hanif Muhammad Syarubany
- Abstract summary: This report studies diffusion posterior sampling (DPS) for single-image super-resolution (SISR).
We implement a likelihood-guided sampling procedure that combines an unconditional diffusion prior with gradient-based conditioning.
We evaluate posterior sampling (PS) conditioning across guidance scales and noise levels.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This report studies diffusion posterior sampling (DPS) for single-image super-resolution (SISR) under a known degradation model. We implement a likelihood-guided sampling procedure that combines an unconditional diffusion prior with gradient-based conditioning to enforce measurement consistency for $4\times$ super-resolution with additive Gaussian noise. We evaluate posterior sampling (PS) conditioning across guidance scales and noise levels, using PSNR and SSIM as fidelity metrics and a combined selection score $(\mathrm{PSNR}/40)+\mathrm{SSIM}$. Our ablation shows that moderate guidance improves reconstruction quality, with the best configuration achieved at PS scale $0.95$ and noise standard deviation $\sigma=0.01$ (score $1.45231$). Qualitative results confirm that the selected PS setting restores sharper edges and more coherent facial details compared to the downsampled inputs, while alternative conditioning strategies (e.g., MCG and PS-annealed) exhibit different texture fidelity trade-offs. These findings highlight the importance of balancing diffusion priors and measurement-gradient strength to obtain stable, high-quality reconstructions without retraining the diffusion model for each operator.
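The abstract's two computational ingredients can be sketched in a few lines. The block below is a minimal, hedged illustration of one likelihood-guided correction for $4\times$ super-resolution and of the report's selection score. The operator names (`avg_pool4`, `upsample4`) and the identity approximation for $\partial \hat{x}_0 / \partial x_t$ are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

def avg_pool4(x: np.ndarray) -> np.ndarray:
    """Known degradation operator A: 4x average pooling of an (H, W) image."""
    h, w = x.shape
    return x.reshape(h // 4, 4, w // 4, 4).mean(axis=(1, 3))

def upsample4(x: np.ndarray) -> np.ndarray:
    """Rough adjoint of A (up to scaling): nearest-neighbor 4x upsampling."""
    return np.repeat(np.repeat(x, 4, axis=0), 4, axis=1)

def dps_correction(x_t, x0_hat, y, scale=0.95, eps=1e-8):
    """One DPS-style measurement-consistency correction.

    x_t:    current noisy sample
    x0_hat: denoised estimate of x_0 from the diffusion prior
    y:      low-resolution measurement
    scale:  guidance scale (the report's best PS scale is 0.95)
    """
    residual = y - avg_pool4(x0_hat)
    # Gradient of 0.5 * ||y - A x0_hat||^2, treating dx0_hat/dx_t as identity.
    grad = upsample4(residual)
    # DPS-style step normalized by the residual norm.
    step = scale / (np.linalg.norm(residual) + eps)
    return x_t + step * grad

def selection_score(psnr: float, ssim: float) -> float:
    """Combined selection score used in the report: (PSNR / 40) + SSIM."""
    return psnr / 40.0 + ssim
```

A single correction pulls the iterate toward measurement consistency; the full sampler interleaves such corrections with the prior's denoising updates at every timestep.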
Related papers
- Stabilizing Diffusion Posterior Sampling by Noise--Frequency Continuation [52.736416985173776]
At high noise, data-consistency gradients computed from inaccurate estimates can be geometrically incongruent with the posterior geometry.
We propose a noise--frequency continuation framework that constructs a continuous family of intermediate posteriors whose likelihood enforces measurement consistency only within a noise-dependent frequency band.
Our method achieves state-of-the-art performance and improves motion deblurring PSNR by up to 5 dB over strong baselines.
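As a rough illustration of the band-limited likelihood idea, the sketch below masks a data-consistency residual in the Fourier domain so that only frequencies inside a noise-dependent radius contribute. The function name and the circular low-pass mask are assumptions for illustration, not the paper's method.

```python
import numpy as np

def band_limited_residual(x_est: np.ndarray, y: np.ndarray, radius: float) -> np.ndarray:
    """Restrict the data-consistency residual (y - x_est) to a low-frequency band.

    A larger `radius` (used at lower noise levels) admits more frequencies;
    a small radius keeps only coarse structure.
    """
    spectrum = np.fft.fft2(y - x_est)
    fy = np.fft.fftfreq(y.shape[0])[:, None]
    fx = np.fft.fftfreq(y.shape[1])[None, :]
    mask = (fy ** 2 + fx ** 2) <= radius ** 2  # circular low-pass mask
    return np.real(np.fft.ifft2(spectrum * mask))
```

In a continuation scheme, the radius would grow as the sampler moves from high to low noise, so early steps enforce only coarse-scale consistency.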
arXiv Detail & Related papers (2026-01-30T03:14:01Z) - Blind Ultrasound Image Enhancement via Self-Supervised Physics-Guided Degradation Modeling [4.619828919345113]
Supervised enhancement methods assume access to clean targets or known degradations.
We present a blind, self-supervised enhancement framework that jointly deconvolves and denoises B-mode images.
arXiv Detail & Related papers (2026-01-29T15:28:25Z) - Rethinking Refinement: Correcting Generative Bias without Noise Injection [7.28668585578288]
Generative models, including diffusion and flow-based models, often exhibit systematic biases that degrade sample quality.
We show that effective bias correction can be achieved as a post-hoc procedure, without noise injection or multi-step resampling.
arXiv Detail & Related papers (2026-01-29T02:34:08Z) - Robust Posterior Diffusion-based Sampling via Adaptive Guidance Scale [39.27744518020771]
We propose an adaptive likelihood step-size strategy to guide the diffusion process for inverse-problem formulations.
The resulting approach, Adaptive Posterior diffusion Sampling (AdaPS), is hyperparameter-free and improves reconstruction quality across diverse imaging tasks.
arXiv Detail & Related papers (2025-11-23T14:37:59Z) - G$^2$RPO: Granular GRPO for Precise Reward in Flow Models [74.21206048155669]
We propose a novel Granular-GRPO (G$^2$RPO) framework that achieves precise and comprehensive reward assessments of sampling directions.
We introduce a Multi-Granularity Advantage Integration module that aggregates advantages computed at multiple diffusion scales.
Our G$^2$RPO significantly outperforms existing flow-based GRPO baselines.
arXiv Detail & Related papers (2025-10-02T12:57:12Z) - Automated Tuning for Diffusion Inverse Problem Solvers without Generative Prior Retraining [4.511561231517167]
Diffusion/score-based models have emerged as powerful generative priors for solving inverse problems.
We propose Zero-shot Adaptive Diffusion Sampling (ZADS), a test-time optimization method that tunes fidelity weights across arbitrary noise schedules.
ZADS consistently outperforms both traditional compressed sensing and recent diffusion-based methods.
arXiv Detail & Related papers (2025-09-11T22:22:32Z) - Diffusion Models for Solving Inverse Problems via Posterior Sampling with Piecewise Guidance [52.705112811734566]
A novel diffusion-based framework is introduced for solving inverse problems using a piecewise guidance scheme.
The proposed method is problem-agnostic and readily adaptable to a variety of inverse problems.
The framework achieves a reduction in inference time of $25\%$ for inpainting with both random and center masks, and $23\%$ and $24\%$ for $4\times$ and $8\times$ super-resolution tasks.
arXiv Detail & Related papers (2025-07-22T19:35:14Z) - DC-Solver: Improving Predictor-Corrector Diffusion Sampler via Dynamic Compensation [68.55191764622525]
Diffusion probabilistic models (DPMs) have shown remarkable performance in visual synthesis but are computationally expensive due to the need for multiple evaluations during sampling.
Recent predictor-corrector diffusion samplers have significantly reduced the required number of evaluations, but inherently suffer from a misalignment issue.
We introduce a new fast DPM sampler called DC-Solver, which leverages dynamic compensation to mitigate the misalignment.
arXiv Detail & Related papers (2024-09-05T17:59:46Z) - Diffusion Posterior Proximal Sampling for Image Restoration [27.35952624032734]
We present a refined paradigm for diffusion-based image restoration.
Specifically, we opt for a sample consistent with the measurement identity at each generative step.
The number of candidate samples used for selection is adaptively determined based on the signal-to-noise ratio of the timestep.
arXiv Detail & Related papers (2024-02-25T04:24:28Z) - Solving Diffusion ODEs with Optimal Boundary Conditions for Better Image Super-Resolution [82.50210340928173]
The randomness of diffusion models results in ineffectiveness and instability, making it challenging for users to guarantee the quality of SR results.
We propose a plug-and-play sampling method that owns the potential to benefit a series of diffusion-based SR methods.
The quality of SR results sampled by the proposed method with fewer steps outperforms the quality of results sampled by current methods with randomness from the same pre-trained diffusion-based SR model.
arXiv Detail & Related papers (2023-05-24T17:09:54Z) - Preconditioned Score-based Generative Models [45.66744783988319]
An intuitive acceleration method is to reduce the number of sampling iterations, which, however, causes severe performance degradation.
We propose a novel preconditioned diffusion sampling (PDS) method that leverages matrix preconditioning to alleviate the aforementioned problem.
PDS preserves the output distribution of the SGM, with no risk of inducing systematic bias in the original sampling process.
arXiv Detail & Related papers (2023-02-13T16:30:53Z) - Diffusion Model Based Posterior Sampling for Noisy Linear Inverse Problems [14.809545109705256]
This paper presents a fast and effective solution by proposing a simple closed-form approximation to the likelihood score.
For both diffusion and flow-based models, extensive experiments are conducted on various noisy linear inverse problems.
Our method demonstrates highly competitive or even better reconstruction performances while being significantly faster than all the baseline methods.
arXiv Detail & Related papers (2022-11-20T01:09:49Z) - Learning Energy-Based Models by Diffusion Recovery Likelihood [61.069760183331745]
We present a diffusion recovery likelihood method to tractably learn and sample from a sequence of energy-based models.
After training, synthesized images can be generated by a sampling process that initializes from a Gaussian white-noise distribution.
On unconditional CIFAR-10 our method achieves FID 9.58 and inception score 8.30, superior to the majority of GANs.
arXiv Detail & Related papers (2020-12-15T07:09:02Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.