Automated Tuning for Diffusion Inverse Problem Solvers without Generative Prior Retraining
- URL: http://arxiv.org/abs/2509.09880v1
- Date: Thu, 11 Sep 2025 22:22:32 GMT
- Title: Automated Tuning for Diffusion Inverse Problem Solvers without Generative Prior Retraining
- Authors: Yaşar Utku Alçalar, Junno Yun, Mehmet Akçakaya
- Abstract summary: Diffusion/score-based models have emerged as powerful generative priors for solving inverse problems. We propose Zero-shot Adaptive Diffusion Sampling (ZADS), a test-time optimization method that tunes fidelity weights across arbitrary noise schedules. ZADS consistently outperforms both traditional compressed sensing and recent diffusion-based methods.
- Score: 4.511561231517167
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Diffusion/score-based models have recently emerged as powerful generative priors for solving inverse problems, including accelerated MRI reconstruction. While their flexibility allows decoupling the measurement model from the learned prior, their performance heavily depends on carefully tuned data fidelity weights, especially under fast sampling schedules with few denoising steps. Existing approaches often rely on heuristics or fixed weights, which fail to generalize across varying measurement conditions and irregular timestep schedules. In this work, we propose Zero-shot Adaptive Diffusion Sampling (ZADS), a test-time optimization method that adaptively tunes fidelity weights across arbitrary noise schedules without requiring retraining of the diffusion prior. ZADS treats the denoising process as a fixed unrolled sampler and optimizes fidelity weights in a self-supervised manner using only undersampled measurements. Experiments on the fastMRI knee dataset demonstrate that ZADS consistently outperforms both traditional compressed sensing and recent diffusion-based methods, showcasing its ability to deliver high-fidelity reconstructions across varying noise schedules and acquisition settings.
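The core idea in the abstract — treat the denoising process as a fixed unrolled sampler and tune per-step fidelity weights self-supervised from undersampled measurements alone — can be illustrated with a toy sketch. This is my own construction, not the authors' code: the random linear forward model, the smoothing "denoiser", and the finite-difference weight optimizer are simplifying stand-ins for the MRI operator, the diffusion prior, and autograd.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear inverse problem: recover x from undersampled measurements y = A x + noise.
n, m = 64, 32
x_true = np.convolve(rng.standard_normal(n), np.ones(8) / 8, mode="same")  # smooth signal
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true + 0.01 * rng.standard_normal(m)

# Self-supervised split: tune weights on one subset of measurements, score on the rest.
idx = rng.permutation(m)
A_tr, y_tr = A[idx[: m // 2]], y[idx[: m // 2]]
A_va, y_va = A[idx[m // 2 :]], y[idx[m // 2 :]]

def denoise(x):
    """Stand-in prior step: local smoothing (a real solver would call a diffusion model)."""
    return np.convolve(x, np.ones(5) / 5, mode="same")

def unrolled(w, A_fid, y_fid):
    """Fixed unrolled sampler: alternate the denoiser with weighted data-fidelity steps."""
    x = np.zeros(n)
    for w_k in w:
        x = denoise(x)
        x = x - w_k * A_fid.T @ (A_fid @ x - y_fid)
    return x

def held_out_loss(w):
    """Self-supervised objective: consistency with the measurements NOT used for fidelity."""
    x = unrolled(w, A_tr, y_tr)
    return float(np.mean((A_va @ x - y_va) ** 2))

# Tune the K per-step weights by finite-difference descent with backtracking.
K, eps, lr = 10, 1e-4, 0.2
w = np.full(K, 0.5)
loss0 = held_out_loss(w)
for _ in range(100):
    grad = np.array(
        [(held_out_loss(w + eps * np.eye(K)[k]) - held_out_loss(w)) / eps for k in range(K)]
    )
    trial = w - lr * grad
    if held_out_loss(trial) < held_out_loss(w):
        w = trial
    else:
        lr *= 0.5  # only accept steps that reduce the held-out loss

x_hat = unrolled(w, A, y)  # final reconstruction with all measurements
print("held-out loss before/after tuning:", loss0, held_out_loss(w))
```

No diffusion prior is retrained here: only the scalar weights `w` change, which is the sense in which this kind of tuning is "zero-shot" with respect to the generative model.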
Related papers
- Stabilizing Diffusion Posterior Sampling by Noise-Frequency Continuation [52.736416985173776]
At high noise levels, data-consistency gradients computed from inaccurate estimates can be geometrically incongruent with the posterior geometry. We propose a noise-frequency continuation framework that constructs a continuous family of intermediate posteriors whose likelihood enforces measurement consistency only within a noise-dependent frequency band. Our method achieves state-of-the-art performance and improves motion deblurring PSNR by up to 5 dB over strong baselines.
arXiv Detail & Related papers (2026-01-30T03:14:01Z) - Robust Posterior Diffusion-based Sampling via Adaptive Guidance Scale [39.27744518020771]
We propose an adaptive likelihood step-size strategy to guide the diffusion process for inverse-problem formulations. The resulting approach, Adaptive Posterior diffusion Sampling (AdaPS), is hyperparameter-free and improves reconstruction quality across diverse imaging tasks.
arXiv Detail & Related papers (2025-11-23T14:37:59Z) - Adaptive Multimodal Protein Plug-and-Play with Diffusion-Based Priors [5.809784853115825]
In an inverse problem, the goal is to recover an unknown parameter that has typically undergone some lossy or noisy transformation during measurement. Recently, deep generative models, particularly diffusion models, have emerged as powerful priors for protein structure generation. We introduce Adam-, a Plug-and-Play framework that guides a pre-trained protein diffusion model using gradients from multiple, heterogeneous experimental sources.
arXiv Detail & Related papers (2025-07-28T18:28:03Z) - Noise Conditional Variational Score Distillation [60.38982038894823]
Noise Conditional Variational Score Distillation (NCVSD) is a novel method for distilling pretrained diffusion models into generative denoisers. By integrating this insight into the Variational Score Distillation framework, we enable scalable learning of generative denoisers.
arXiv Detail & Related papers (2025-06-11T06:01:39Z) - A Minimalist Method for Fine-tuning Text-to-Image Diffusion Models [3.8623569699070357]
Noise PPO is a minimalist reinforcement learning algorithm that learns a prompt-conditioned initial noise generator. Experiments show that Noise PPO consistently improves alignment and sample quality over the original model. These findings reinforce the practical value of minimalist RL fine-tuning for diffusion models.
arXiv Detail & Related papers (2025-05-23T00:01:52Z) - Zero-Shot Adaptation for Approximate Posterior Sampling of Diffusion Models in Inverse Problems [2.8237889121096034]
We propose zero-shot approximate posterior sampling (ZAPS) to solve inverse problems in imaging.
ZAPS fixes the number of sampling steps, and uses zero-shot training with a physics-guided loss function to learn log-likelihood weights at each irregular timestep.
Our results show ZAPS reduces inference time, provides robustness to irregular noise schedules, and improves reconstruction quality.
arXiv Detail & Related papers (2024-07-16T00:09:37Z) - One More Step: A Versatile Plug-and-Play Module for Rectifying Diffusion Schedule Flaws and Enhancing Low-Frequency Controls [77.42510898755037]
One More Step (OMS) is a compact network that incorporates an additional simple yet effective step during inference.
OMS elevates image fidelity and harmonizes the dichotomy between training and inference, while preserving original model parameters.
Once trained, various pre-trained diffusion models with the same latent domain can share the same OMS module.
arXiv Detail & Related papers (2023-11-27T12:02:42Z) - Unmasking Bias in Diffusion Model Training [40.90066994983719]
Denoising diffusion models have emerged as a dominant approach for image generation.
They still suffer from slow convergence in training and color shift issues in sampling.
In this paper, we identify that these obstacles can be largely attributed to bias and suboptimality inherent in the default training paradigm.
arXiv Detail & Related papers (2023-10-12T16:04:41Z) - SMRD: SURE-based Robust MRI Reconstruction with Diffusion Models [76.43625653814911]
Diffusion models have gained popularity for accelerated MRI reconstruction due to their high sample quality.
They can effectively serve as rich data priors while incorporating the forward model flexibly at inference time.
We introduce SURE-based MRI Reconstruction with Diffusion models (SMRD) to enhance robustness during testing.
arXiv Detail & Related papers (2023-10-03T05:05:35Z) - DiffusionAD: Norm-guided One-step Denoising Diffusion for Anomaly Detection [80.20339155618612]
DiffusionAD is a novel anomaly detection pipeline comprising a reconstruction sub-network and a segmentation sub-network. A rapid one-step denoising paradigm achieves hundreds of times acceleration while preserving comparable reconstruction quality. Considering the diversity in the manifestation of anomalies, we propose a norm-guided paradigm to integrate the benefits of multiple noise scales.
arXiv Detail & Related papers (2023-03-15T16:14:06Z) - Diffusion Model Based Posterior Sampling for Noisy Linear Inverse Problems [14.809545109705256]
This paper presents a fast and effective solution by proposing a simple closed-form approximation to the likelihood score.
For both diffusion and flow-based models, extensive experiments are conducted on various noisy linear inverse problems.
Our method demonstrates highly competitive or even better reconstruction performances while being significantly faster than all the baseline methods.
arXiv Detail & Related papers (2022-11-20T01:09:49Z) - Diffusion Posterior Sampling for General Noisy Inverse Problems [50.873313752797124]
We extend diffusion solvers to handle noisy (non)linear inverse problems via approximation of the posterior sampling.
Our method demonstrates that diffusion models can incorporate various measurement noise statistics.
arXiv Detail & Related papers (2022-09-29T11:12:27Z)
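The posterior-sampling approximation used by this family of methods can be summarized compactly. The notation below is the standard DDPM convention (score network $s_\theta$, cumulative noise schedule $\bar{\alpha}_t$, forward operator $\mathcal{A}$, tunable step size $\zeta_t$) and is my paraphrase under a Gaussian measurement-noise assumption, not a quotation from the paper:

```latex
% Tweedie estimate of the clean image from the noisy iterate x_t:
\hat{x}_0(x_t) \;=\; \frac{1}{\sqrt{\bar{\alpha}_t}}\Big(x_t + (1-\bar{\alpha}_t)\, s_\theta(x_t, t)\Big)

% Guided reverse step: the unconditional reverse-diffusion sample x'_{t-1}
% is corrected toward consistency with the measurements y:
x_{t-1} \;=\; x'_{t-1} \;-\; \zeta_t\, \nabla_{x_t} \big\| y - \mathcal{A}\big(\hat{x}_0(x_t)\big) \big\|_2^2
```

The fidelity-weight tuning discussed in the main abstract above can be read as choosing the $\zeta_t$ (one per timestep) automatically rather than by hand.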
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.