Self-diffusion for Solving Inverse Problems
- URL: http://arxiv.org/abs/2510.21417v1
- Date: Fri, 24 Oct 2025 12:57:22 GMT
- Title: Self-diffusion for Solving Inverse Problems
- Authors: Guanxiong Luo, Shoujin Huang, Yanlong Yang,
- Abstract summary: We propose self-diffusion, a novel framework for solving inverse problems without relying on pretrained generative models. Self-diffusion exploits the spectral bias of neural networks and modulates it through a scheduled noise process. We demonstrate the effectiveness of our approach on a variety of linear inverse problems, showing that self-diffusion achieves competitive or superior performance compared to other methods.
- Score: 3.8870795921263728
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We propose self-diffusion, a novel framework for solving inverse problems without relying on pretrained generative models. Traditional diffusion-based approaches require training a model on a clean dataset to learn to reverse the forward noising process. This model is then used to sample clean solutions -- corresponding to posterior sampling from a Bayesian perspective -- that are consistent with the observed data under a specific task. In contrast, self-diffusion introduces a self-contained iterative process that alternates between noising and denoising steps to progressively refine its estimate of the solution. At each step of self-diffusion, noise is added to the current estimate, and a self-denoiser -- a single untrained convolutional network randomly initialized from scratch -- is continually trained for a set number of iterations via a data-fidelity loss to predict the solution from the noisy estimate. Essentially, self-diffusion exploits the spectral bias of neural networks and modulates it through a scheduled noise process. Without relying on pretrained score functions or external denoisers, the approach remains adaptive to arbitrary forward operators and noisy observations, making it highly flexible and broadly applicable. We demonstrate the effectiveness of our approach on a variety of linear inverse problems, showing that self-diffusion achieves competitive or superior performance compared to other methods.
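To make the iterative procedure concrete, here is a minimal PyTorch sketch of the loop the abstract describes. The network architecture, noise schedule, hyperparameters, and names (SelfDenoiser, self_diffusion, forward_op) are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the self-diffusion loop described in the abstract.
# Architecture, schedule, and iteration counts are assumptions, not the paper's.
import torch
import torch.nn as nn

class SelfDenoiser(nn.Module):
    """A single small CNN, randomly initialized from scratch (untrained)."""
    def __init__(self, channels: int = 1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, channels, 3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

def self_diffusion(y, forward_op, shape, n_steps=100, inner_iters=10, lr=1e-3):
    """Recover x from an observation y = A(x) + noise, where forward_op is A."""
    denoiser = SelfDenoiser(channels=shape[1])
    opt = torch.optim.Adam(denoiser.parameters(), lr=lr)
    x = torch.zeros(shape)                           # current estimate of the solution
    sigmas = torch.linspace(1.0, 0.01, n_steps)      # assumed decaying noise schedule
    for sigma in sigmas:
        x_noisy = x + sigma * torch.randn_like(x)    # noising step
        for _ in range(inner_iters):                 # keep training the self-denoiser
            opt.zero_grad()
            x_hat = denoiser(x_noisy)
            loss = ((forward_op(x_hat) - y) ** 2).mean()  # data-fidelity loss
            loss.backward()
            opt.step()
        with torch.no_grad():
            x = denoiser(x_noisy)                    # denoising step: refined estimate
    return x
```

For inpainting, for instance, forward_op could simply apply a binary mask elementwise; any differentiable linear operator fits the same loop, which is what makes the method adaptive to arbitrary forward operators.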
Related papers
- Why Self-Training Helps and Hurts: Denoising vs. Signal Forgetting [6.369253528507392]
Iterative self-training repeatedly refits a model on pseudo-labels generated by its own predictions. We derive deterministic-equivalent recursions for the prediction risk and effective noise across iterations.
arXiv Detail & Related papers (2026-02-15T07:28:12Z) - Search-Augmented Masked Diffusion Models for Constrained Generation [47.47350674873298]
Search-Augmented Masked Diffusion (SearchDiff) is a training-free neurosymbolic inference framework that integrates informed search directly into the reverse denoising process. SearchDiff substantially improves constraint satisfaction and property adherence, while consistently outperforming discrete diffusion and autoregressive baselines.
arXiv Detail & Related papers (2026-02-02T19:43:25Z) - Combating Noisy Labels through Fostering Self- and Neighbor-Consistency [120.4394402099635]
Label noise is pervasive in various real-world scenarios, posing challenges in supervised deep learning. We propose a noise-robust method named Jo-SNC (Joint sample selection and model regularization based on Self- and Neighbor-Consistency). We design a self-adaptive, data-driven thresholding scheme to adjust per-class selection thresholds.
arXiv Detail & Related papers (2026-01-19T07:55:29Z) - Beyond Fixed Horizons: A Theoretical Framework for Adaptive Denoising Diffusions [1.9116784879310031]
We introduce a new class of generative diffusion models that achieve a time-homogeneous structure for both the noising and denoising processes. A key feature of the model is its adaptability to the target data, enabling a variety of downstream tasks using a pre-trained unconditional generative model.
arXiv Detail & Related papers (2025-01-31T18:23:27Z) - Amortized Posterior Sampling with Diffusion Prior Distillation [55.03585818289934]
Amortized Posterior Sampling is a novel variational inference approach for efficient posterior sampling in inverse problems. Our method trains a conditional flow model to minimize the divergence between the variational distribution and the posterior distribution implicitly defined by the diffusion model. Unlike existing methods, our approach is unsupervised, requires no paired training data, and is applicable to both Euclidean and non-Euclidean domains.
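In standard variational-inference notation (assumed here, not quoted from that paper), the amortized objective it describes can be sketched as:

```latex
% Fit an amortized conditional model q_\theta(x \mid y) to the posterior
% implicitly defined by the diffusion prior; D is a divergence such as KL.
\min_{\theta}\; \mathbb{E}_{y}\!\left[\, D\big(\, q_{\theta}(x \mid y) \;\big\|\; p(x \mid y) \,\big) \right]
```

Once trained, q_\theta can produce posterior samples in a single forward pass, which is what makes the approach amortized.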
arXiv Detail & Related papers (2024-07-25T09:53:12Z) - ConsistencyDet: A Few-step Denoising Framework for Object Detection Using the Consistency Model [22.34776007498307]
We introduce a novel framework designed to articulate object detection as a denoising diffusion process. The framework, termed ConsistencyDet, leverages an innovative denoising concept known as the Consistency Model.
arXiv Detail & Related papers (2024-04-11T14:08:45Z) - Noisy Correspondence Learning with Self-Reinforcing Errors Mitigation [63.180725016463974]
Cross-modal retrieval relies on well-matched large-scale datasets that are laborious in practice.
We introduce a novel noisy correspondence learning framework, namely Self-Reinforcing Errors Mitigation (SREM).
arXiv Detail & Related papers (2023-12-27T09:03:43Z) - A Variational Perspective on Solving Inverse Problems with Diffusion Models [101.831766524264]
Inverse tasks can be formulated as inferring a posterior distribution over data.
This is, however, challenging in diffusion models, since the nonlinear and iterative nature of the diffusion process renders the posterior intractable.
We propose a variational approach that by design seeks to approximate the true posterior distribution.
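For context, the posterior referred to here is the standard Bayesian formulation of a (linear) inverse problem, written in generic notation rather than the paper's own:

```latex
% Observation model: recover x from a noisy linear measurement y
y = A x + n, \qquad n \sim \mathcal{N}(0, \sigma^{2} I)

% Posterior over data; a diffusion model supplies the prior p(x)
p(x \mid y) \;\propto\; p(y \mid x)\, p(x)
            \;=\; \mathcal{N}\!\big(y;\, A x,\; \sigma^{2} I\big)\, p(x)
```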
arXiv Detail & Related papers (2023-05-07T23:00:47Z) - GibbsDDRM: A Partially Collapsed Gibbs Sampler for Solving Blind Inverse Problems with Denoising Diffusion Restoration [64.8770356696056]
We propose GibbsDDRM, an extension of Denoising Diffusion Restoration Models (DDRM) to a blind setting in which the linear measurement operator is unknown.
The proposed method is problem-agnostic, meaning that a pre-trained diffusion model can be applied to various inverse problems without fine-tuning.
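In the blind setting described above, the measurement operator itself has unknown parameters, so the target becomes a joint posterior. The following is a generic Gibbs-style sketch in assumed notation; the paper's sampler is a partially collapsed variant of this plain alternation:

```latex
% Blind linear inverse problem: both x and the operator parameters \varphi are unknown
y = A_{\varphi}\, x + n

% Alternate between the conditionals to sample the joint posterior p(x, \varphi \mid y)
x \sim p(x \mid y, \varphi), \qquad \varphi \sim p(\varphi \mid y, x)
```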
arXiv Detail & Related papers (2023-01-30T06:27:48Z) - Composing Normalizing Flows for Inverse Problems [89.06155049265641]
We propose a framework for approximate inference that estimates the target conditional as a composition of two flow models.
Our method is evaluated on a variety of inverse problems and is shown to produce high-quality samples with uncertainty.
arXiv Detail & Related papers (2020-02-26T19:01:11Z)