Learning Single Index Models with Diffusion Priors
- URL: http://arxiv.org/abs/2505.21135v1
- Date: Tue, 27 May 2025 12:50:04 GMT
- Title: Learning Single Index Models with Diffusion Priors
- Authors: Anqi Tang, Youming Chen, Shuchen Xue, Zhaoqiang Liu
- Abstract summary: Diffusion models (DMs) have demonstrated remarkable ability to generate diverse and high-quality images. We propose an efficient reconstruction method that only requires one round of unconditional sampling and (partial) inversion of DMs.
- Score: 11.399577852929502
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Diffusion models (DMs) have demonstrated remarkable ability to generate diverse and high-quality images by efficiently modeling complex data distributions. They have also been explored as powerful generative priors for signal recovery, resulting in a substantial improvement in the quality of reconstructed signals. However, existing research on signal recovery with diffusion models either focuses on specific reconstruction problems or is unable to handle nonlinear measurement models with discontinuous or unknown link functions. In this work, we focus on using DMs to achieve accurate recovery from semi-parametric single index models, which encompass a variety of popular nonlinear models that may have *discontinuous* and *unknown* link functions. We propose an efficient reconstruction method that only requires one round of unconditional sampling and (partial) inversion of DMs. Theoretical analysis of the effectiveness of the proposed method has been established under appropriate conditions. We perform numerical experiments on image datasets for different nonlinear measurement models. We observe that, compared to competing methods, our approach can yield more accurate reconstructions while utilizing significantly fewer neural function evaluations.
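To make the measurement setting concrete, the sketch below simulates a single index model y = f(a^T x) with a discontinuous, effectively unknown link function (here 1-bit quantization, f = sign) and recovers the signal direction with the classical correlation estimator. This is an illustrative baseline, not the paper's diffusion-prior method: for Gaussian sensing vectors, (1/m) A^T y is proportional to the true signal in expectation regardless of the exact link, which is the standard starting point for single-index-model recovery.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 200, 2000  # signal dimension, number of measurements

# Ground-truth signal, normalized to the unit sphere (SIMs identify
# the direction only, since an unknown link absorbs the scale).
x_true = rng.standard_normal(n)
x_true /= np.linalg.norm(x_true)

# Gaussian sensing matrix and a discontinuous link: y_i = sign(a_i^T x).
A = rng.standard_normal((m, n))
y = np.sign(A @ x_true)

# Correlation estimator: E[(1/m) A^T y] = mu * x_true with
# mu = E[f(g) g] for standard Gaussian g, independent of which f is used.
x_hat = A.T @ y / m
x_hat /= np.linalg.norm(x_hat)

cosine = abs(x_hat @ x_true)
print(f"cosine similarity: {cosine:.3f}")
```

With m well above n the cosine similarity approaches 1; the paper's contribution is to replace this unstructured estimate with one constrained by a diffusion prior, improving accuracy at far fewer measurements.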
Related papers
- Solving Inverse Problems with FLAIR [59.02385492199431]
Flow-based latent generative models are able to generate images with remarkable quality, even enabling text-to-image generation. We present FLAIR, a novel training-free variational framework that leverages flow-based generative models as a prior for inverse problems. Results on standard imaging benchmarks demonstrate that FLAIR consistently outperforms existing diffusion- and flow-based methods in terms of reconstruction quality and sample diversity.
arXiv Detail & Related papers (2025-06-03T09:29:47Z) - Bayesian Model Parameter Learning in Linear Inverse Problems: Application in EEG Focal Source Imaging [49.1574468325115]
Inverse problems can be described as limited-data problems in which the signal of interest cannot be observed directly. We studied a linear inverse problem that included an unknown non-linear model parameter. We utilized a Bayesian model-based learning approach that allowed signal recovery and subsequently estimation of the model parameter.
arXiv Detail & Related papers (2025-01-07T18:14:24Z) - NeurAM: nonlinear dimensionality reduction for uncertainty quantification through neural active manifolds [0.6990493129893112]
We leverage autoencoders to discover a one-dimensional neural active manifold (NeurAM) capturing the model output variability.
We show how NeurAM can be used to obtain multifidelity sampling estimators with reduced variance.
arXiv Detail & Related papers (2024-08-07T04:27:58Z) - Amortized Posterior Sampling with Diffusion Prior Distillation [55.03585818289934]
Amortized Posterior Sampling is a novel variational inference approach for efficient posterior sampling in inverse problems. Our method trains a conditional flow model to minimize the divergence between the variational distribution and the posterior distribution implicitly defined by the diffusion model. Unlike existing methods, our approach is unsupervised, requires no paired training data, and is applicable to both Euclidean and non-Euclidean domains.
arXiv Detail & Related papers (2024-07-25T09:53:12Z) - Highly Accelerated MRI via Implicit Neural Representation Guided Posterior Sampling of Diffusion Models [2.5412006057370893]
Implicit neural representation (INR) has emerged as a powerful paradigm for solving inverse problems.
Our proposed framework can be a generalizable framework to solve inverse problems in other medical imaging tasks.
arXiv Detail & Related papers (2024-07-03T01:37:56Z) - Entropic Regression DMD (ERDMD) Discovers Informative Sparse and Nonuniformly Time Delayed Models [0.0]
We present a method which determines optimal multi-step dynamic mode decomposition models via entropic regression.
We develop a method that produces high fidelity time-delay DMD models that allow for nonuniform time space.
These models are shown to be highly efficient and robust.
arXiv Detail & Related papers (2024-06-17T20:02:43Z) - Solving Inverse Problems with Model Mismatch using Untrained Neural Networks within Model-based Architectures [14.551812310439004]
We introduce an untrained forward model residual block within the model-based architecture to match the data consistency in the measurement domain for each instance.
Our approach offers a unified solution that is less parameter-sensitive, requires no additional data, and enables simultaneous fitting of the forward model and reconstruction in a single pass.
arXiv Detail & Related papers (2024-03-07T19:02:13Z) - Hierarchical Integration Diffusion Model for Realistic Image Deblurring [71.76410266003917]
Diffusion models (DMs) have been introduced in image deblurring and exhibited promising performance.
We propose the Hierarchical Integration Diffusion Model (HI-Diff), for realistic image deblurring.
Experiments on synthetic and real-world blur datasets demonstrate that our HI-Diff outperforms state-of-the-art methods.
arXiv Detail & Related papers (2023-05-22T12:18:20Z) - Denoising Diffusion Restoration Models [110.1244240726802]
Denoising Diffusion Restoration Models (DDRM) is an efficient, unsupervised posterior sampling method.
We demonstrate DDRM's versatility on several image datasets for super-resolution, deblurring, inpainting, and colorization.
arXiv Detail & Related papers (2022-01-27T20:19:07Z) - Improving the Reconstruction of Disentangled Representation Learners via Multi-Stage Modeling [54.94763543386523]
Current autoencoder-based disentangled representation learning methods achieve disentanglement by penalizing the (aggregate) posterior to encourage statistical independence of the latent factors.
We present a novel multi-stage modeling approach where the disentangled factors are first learned using a penalty-based disentangled representation learning method.
Then, the low-quality reconstruction is improved with another deep generative model that is trained to model the missing correlated latent variables.
arXiv Detail & Related papers (2020-10-25T18:51:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.