Posterior samples of source galaxies in strong gravitational lenses with
score-based priors
- URL: http://arxiv.org/abs/2211.03812v1
- Date: Mon, 7 Nov 2022 19:00:42 GMT
- Title: Posterior samples of source galaxies in strong gravitational lenses with
score-based priors
- Authors: Alexandre Adam, Adam Coogan, Nikolay Malkin, Ronan Legin, Laurence
Perreault-Levasseur, Yashar Hezaveh and Yoshua Bengio
- Abstract summary: We use a score-based model to encode the prior for the inference of undistorted images of background galaxies.
We show how the balance between the likelihood and the prior meets our expectations in an experiment with out-of-distribution data.
- Score: 107.52670032376555
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Inferring accurate posteriors for high-dimensional representations of the
brightness of gravitationally-lensed sources is a major challenge, in part due
to the difficulties of accurately quantifying the priors. Here, we report the
use of a score-based model to encode the prior for the inference of undistorted
images of background galaxies. This model is trained on a set of
high-resolution images of undistorted galaxies. By adding the likelihood score
to the prior score and using a reverse-time stochastic differential equation
solver, we obtain samples from the posterior. Our method produces independent
posterior samples and models the data almost down to the noise level. We show
how the balance between the likelihood and the prior meets our expectations in
an experiment with out-of-distribution data.
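The procedure the abstract describes (summing the prior score and the likelihood score, then integrating a reverse-time stochastic differential equation) can be sketched schematically. The sketch below is a minimal toy, not the paper's method: `prior_score` stands in for a trained score network, the likelihood is a simple linear-Gaussian model rather than a lensing forward operator, and all parameters are illustrative assumptions.

```python
import numpy as np

# Illustrative stand-ins: in the paper, the prior score comes from a trained
# score-based model and the likelihood score from the lensing forward model.
def prior_score(x, t):
    # Score of a standard-normal toy prior: grad_x log p(x) = -x
    return -x

def likelihood_score(x, t, y, A, sigma=0.1):
    # Gradient of log N(y | A x, sigma^2 I) with respect to x
    return A.T @ (y - A @ x) / sigma**2

def posterior_sample(y, A, dim, n_steps=1000, rng=None):
    """Euler-Maruyama integration of a reverse-time SDE whose drift uses the
    combined score (prior + likelihood), sketched for a toy SDE with a
    constant diffusion coefficient g."""
    rng = np.random.default_rng(rng)
    x = rng.standard_normal(dim)  # start from the terminal noise distribution
    dt = 1.0 / n_steps
    g = 1.0                       # toy constant diffusion coefficient
    for i in range(n_steps, 0, -1):
        t = i * dt
        score = prior_score(x, t) + likelihood_score(x, t, y, A)
        x = x + g**2 * score * dt + g * np.sqrt(dt) * rng.standard_normal(dim)
    return x

# Toy inverse problem: recover x from noisy observations y = A x + noise
A = np.eye(4)
x_true = np.array([1.0, -0.5, 0.3, 0.0])
y = A @ x_true + 0.1 * np.random.default_rng(0).standard_normal(4)
sample = posterior_sample(y, A, dim=4, n_steps=2000, rng=1)
```

Repeated calls with different seeds yield independent posterior samples, mirroring the property highlighted in the abstract; in this toy linear-Gaussian setting the samples concentrate near the observation.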
Related papers
- Diffusion Prior-Based Amortized Variational Inference for Noisy Inverse Problems [12.482127049881026]
We propose a novel approach to solve inverse problems with a diffusion prior from an amortized variational inference perspective.
Our amortized inference learns a function that directly maps measurements to the implicit posterior distributions of corresponding clean data, enabling a single-step posterior sampling even for unseen measurements.
arXiv Detail & Related papers (2024-07-23T02:14:18Z)
- Posterior Sampling with Denoising Oracles via Tilted Transport [37.14320147233444]
We introduce the tilted transport technique, which leverages the quadratic structure of the log-likelihood in linear inverse problems.
We quantify the conditions under which this boosted posterior is strongly log-concave, highlighting the dependencies on the condition number of the measurement matrix.
The resulting posterior sampling scheme is shown to reach the computational threshold predicted for sampling Ising models.
arXiv Detail & Related papers (2024-06-30T16:11:42Z)
- Amortizing intractable inference in diffusion models for vision, language, and control [89.65631572949702]
This paper studies amortized sampling of the posterior over data, $\mathbf{x} \sim p^{\rm post}(\mathbf{x}) \propto p(\mathbf{x}) r(\mathbf{x})$, in a model that consists of a diffusion generative model prior $p(\mathbf{x})$ and a black-box constraint or function $r(\mathbf{x})$.
We prove the correctness of a data-free learning objective, relative trajectory balance, for training a diffusion model that samples from this posterior.
arXiv Detail & Related papers (2024-05-31T16:18:46Z)
- Uncertainty Visualization via Low-Dimensional Posterior Projections [23.371244861123827]
We introduce a new approach for estimating and visualizing posteriors by employing energy-based models (EBMs) over low-dimensional subspaces.
We demonstrate the effectiveness of our method across a diverse range of datasets and image restoration problems.
arXiv Detail & Related papers (2023-12-12T23:51:07Z)
- A Variational Perspective on Solving Inverse Problems with Diffusion Models [101.831766524264]
Inverse tasks can be formulated as inferring a posterior distribution over data.
This is however challenging in diffusion models since the nonlinear and iterative nature of the diffusion process renders the posterior intractable.
We propose a variational approach that by design seeks to approximate the true posterior distribution.
arXiv Detail & Related papers (2023-05-07T23:00:47Z)
- Score-Based Diffusion Models as Principled Priors for Inverse Imaging [46.19536250098105]
We propose turning score-based diffusion models into principled image priors.
We show how to sample from resulting posteriors by using this probability function for variational inference.
arXiv Detail & Related papers (2023-04-23T21:05:59Z)
- Instance-Optimal Compressed Sensing via Posterior Sampling [101.43899352984774]
We show that, for Gaussian measurements and any prior distribution on the signal, the posterior sampling estimator achieves near-optimal recovery guarantees.
We implement the posterior sampling estimator for deep generative priors using Langevin dynamics, and empirically find that it produces accurate estimates with more diversity than MAP.
arXiv Detail & Related papers (2021-06-21T22:51:56Z)
- A Contrastive Learning Approach for Training Variational Autoencoder Priors [137.62674958536712]
Variational autoencoders (VAEs) are among the most powerful likelihood-based generative models, with applications in many domains.
One explanation for VAEs' poor generative quality is the prior hole problem: the prior distribution fails to match the aggregate approximate posterior.
We propose an energy-based prior defined by the product of a base prior distribution and a reweighting factor, designed to bring the base closer to the aggregate posterior.
arXiv Detail & Related papers (2020-10-06T17:59:02Z)
- A deep-learning based Bayesian approach to seismic imaging and uncertainty quantification [0.4588028371034407]
Uncertainty is essential when dealing with ill-conditioned inverse problems.
It is often not possible to formulate a prior distribution that precisely encodes our prior knowledge about the unknown.
We propose to use the functional form of a randomly initialized convolutional neural network as an implicit structured prior.
arXiv Detail & Related papers (2020-01-13T23:46:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.