Instance-Optimal Compressed Sensing via Posterior Sampling
- URL: http://arxiv.org/abs/2106.11438v1
- Date: Mon, 21 Jun 2021 22:51:56 GMT
- Title: Instance-Optimal Compressed Sensing via Posterior Sampling
- Authors: Ajil Jalal and Sushrut Karmalkar and Alexandros G. Dimakis and Eric Price
- Abstract summary: We show, for Gaussian measurements and any prior distribution on the signal, that the posterior sampling estimator achieves near-optimal recovery guarantees.
We implement the posterior sampling estimator for deep generative priors using Langevin dynamics, and empirically find that it produces accurate estimates with more diversity than MAP.
- Score: 101.43899352984774
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We characterize the measurement complexity of compressed sensing of signals
drawn from a known prior distribution, even when the support of the prior is
the entire space (rather than, say, sparse vectors). We show for Gaussian
measurements and \emph{any} prior distribution on the signal, that the
posterior sampling estimator achieves near-optimal recovery guarantees.
Moreover, this result is robust to model mismatch, as long as the distribution
estimate (e.g., from an invertible generative model) is close to the true
distribution in Wasserstein distance. We implement the posterior sampling
estimator for deep generative priors using Langevin dynamics, and empirically
find that it produces accurate estimates with more diversity than MAP.
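The estimator is simple to state: draw a sample from the posterior p(x | y) instead of maximizing it. As a rough illustration of the Langevin implementation, here is a minimal numpy sketch for measurements y = Ax + noise; the standard-Gaussian prior score, step size, and iteration count are illustrative assumptions, not the authors' deep-generative setup.

```python
import numpy as np

rng = np.random.default_rng(0)

n, m, sigma = 20, 10, 0.1                  # signal dim, measurements, noise level
A = rng.normal(size=(m, n)) / np.sqrt(m)   # Gaussian measurement matrix
x_true = rng.normal(size=n)                # toy signal drawn from the prior
y = A @ x_true + sigma * rng.normal(size=m)

def prior_score(x):
    # Placeholder: score of a standard Gaussian prior. With a deep
    # generative prior this would be the model's learned score.
    return -x

def likelihood_score(x):
    # Gradient of log p(y | x) for y = Ax + N(0, sigma^2 I) noise.
    return A.T @ (y - A @ x) / sigma**2

# Unadjusted Langevin dynamics on the posterior log-density.
eta = 1e-4                                 # step size (needs tuning in practice)
x = np.zeros(n)
for _ in range(20000):
    grad = likelihood_score(x) + prior_score(x)
    x = x + eta * grad + np.sqrt(2 * eta) * rng.normal(size=n)

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```

Because the output is a posterior draw rather than the posterior mode, repeated runs yield distinct plausible reconstructions, which is the source of the diversity advantage over MAP noted in the abstract.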
Related papers
- Quasi-Bayes meets Vines [2.3124143670964448]
We propose a different way to extend Quasi-Bayesian prediction to high dimensions through the use of Sklar's theorem.
We show that our proposed Quasi-Bayesian Vine (QB-Vine) is a fully non-parametric density estimator with an analytical form.
arXiv Detail & Related papers (2024-06-18T16:31:02Z)
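For context, Sklar's theorem is the factorization the QB-Vine relies on: any joint distribution splits into its marginals and a copula carrying the dependence. In density form:

```latex
% Sklar's theorem (density form): for a joint density f with marginal
% CDFs F_i and marginal densities f_i, there is a copula density c with
\[
  f(x_1,\dots,x_d) = c\bigl(F_1(x_1),\dots,F_d(x_d)\bigr)\prod_{i=1}^{d} f_i(x_i).
\]
% Vine constructions further decompose c into a cascade of bivariate copulas.
```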
- On diffusion-based generative models and their error bounds: The log-concave case with full convergence estimates [5.13323375365494]
We provide theoretical guarantees for the convergence behaviour of diffusion-based generative models under strongly log-concave data.
Our class of functions used for score estimation consists of Lipschitz continuous functions, avoiding any Lipschitzness assumption on the score function itself.
This approach yields the best known convergence rate for our sampling algorithm.
arXiv Detail & Related papers (2023-11-22T18:40:45Z)
- Asymptotics of Bayesian Uncertainty Estimation in Random Features Regression [1.170951597793276]
We focus on the variance of the posterior predictive distribution (Bayesian model average) and compare it to the risk of the MAP estimator.
The two also agree with each other when the number of samples grows faster than any constant multiple of the model dimension.
arXiv Detail & Related papers (2023-06-06T15:36:15Z)
- Posterior samples of source galaxies in strong gravitational lenses with score-based priors [107.52670032376555]
We use a score-based model to encode the prior for the inference of undistorted images of background galaxies.
We show how the balance between the likelihood and the prior meets our expectations in an experiment with out-of-distribution data.
arXiv Detail & Related papers (2022-11-07T19:00:42Z)
- Robust Estimation for Nonparametric Families via Generative Adversarial Networks [92.64483100338724]
We provide a framework for designing Generative Adversarial Networks (GANs) to solve high dimensional robust statistics problems.
Our work extends these to robust mean estimation, second moment estimation, and robust linear regression.
In terms of techniques, our proposed GAN losses can be viewed as a smoothed and generalized Kolmogorov-Smirnov distance.
arXiv Detail & Related papers (2022-02-02T20:11:33Z)
- Sensing Cox Processes via Posterior Sampling and Positive Bases [56.82162768921196]
We study adaptive sensing of point processes, a widely used model from spatial statistics.
We model the intensity function as a sample from a truncated Gaussian process, represented in a specially constructed positive basis.
Our adaptive sensing algorithms use Langevin dynamics and are based on posterior sampling (Cox-Thompson) and top-two posterior sampling (Top2) principles.
arXiv Detail & Related papers (2021-10-21T14:47:06Z)
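A toy version of the posterior-sampling principle behind Cox-Thompson: draw one intensity function from the current posterior and sense where that draw is largest. Here independent Gamma-Poisson cells stand in for the paper's truncated-GP model; the discretization and conjugate updates are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

K = 20                                      # discretized sensing cells
true_rate = rng.gamma(2.0, 1.0, size=K)     # hidden Poisson intensity per cell
alpha = np.full(K, 2.0)                     # Gamma posterior shape parameters
beta = np.full(K, 1.0)                      # Gamma posterior rate parameters

for _ in range(50):
    sampled = rng.gamma(alpha, 1.0 / beta)  # Thompson step: one posterior draw
    i = int(np.argmax(sampled))             # sense where the draw is largest
    count = rng.poisson(true_rate[i])       # observe a point count in that cell
    alpha[i] += count                       # conjugate Gamma-Poisson update
    beta[i] += 1.0

print("most-sensed cell:", int(np.argmax(beta)),
      "| true argmax:", int(np.argmax(true_rate)))
```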
- Fast Approximation of the Sliced-Wasserstein Distance Using Concentration of Random Projections [19.987683989865708]
The Sliced-Wasserstein distance (SW) is being increasingly used in machine learning applications.
We propose a new perspective to approximate SW by making use of the concentration of measure phenomenon.
Our method does not require sampling a number of random projections, and is therefore both accurate and easy to use compared to the usual Monte Carlo approximation.
arXiv Detail & Related papers (2021-06-29T13:56:19Z)
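For contrast, the usual Monte Carlo approximation that the method above avoids: average one-dimensional Wasserstein distances over random projection directions. A minimal numpy sketch for empirical measures with equal sample counts, where the 1D distance reduces to matching sorted projections:

```python
import numpy as np

def sliced_wasserstein_mc(X, Y, n_proj=100, rng=None):
    # Monte Carlo estimate of the sliced 2-Wasserstein distance between
    # two empirical measures with equal numbers of samples.
    rng = rng or np.random.default_rng()
    total = 0.0
    for _ in range(n_proj):
        theta = rng.normal(size=X.shape[1])
        theta /= np.linalg.norm(theta)       # uniform direction on the sphere
        # 1D W2^2 between projections: match sorted values.
        px, py = np.sort(X @ theta), np.sort(Y @ theta)
        total += np.mean((px - py) ** 2)
    return np.sqrt(total / n_proj)

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
Y = rng.normal(size=(500, 10)) + 1.0         # mean-shifted Gaussian samples
print(sliced_wasserstein_mc(X, Y, rng=rng))
```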
- Variance Reduction for Better Sampling in Continuous Domains [5.675136204504504]
We show that the optimal search distribution might be more peaked around the center of the distribution than the prior distribution.
We provide explicit values for this reshaping of the search distribution depending on the population size.
arXiv Detail & Related papers (2020-04-24T12:25:48Z)
- Efficiently Sampling Functions from Gaussian Process Posteriors [76.94808614373609]
We propose an easy-to-use and general-purpose approach for fast posterior sampling.
We demonstrate how decoupled sample paths accurately represent Gaussian process posteriors at a fraction of the usual cost.
arXiv Detail & Related papers (2020-02-21T14:03:16Z)
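The decoupled-paths idea rests on Matheron's rule: a posterior function draw equals a prior draw plus a data-dependent correction, f_post(.) = f(.) + K(., X)(K(X, X) + sigma^2 I)^{-1}(y - f(X) - eps). A dense-matrix numpy sketch of that identity; the paper's contribution is making the prior-path term cheap, whereas exact joint draws are used here for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def k(a, b, ls=0.5):
    # Squared-exponential kernel, evaluated densely on 1D inputs.
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)

X = rng.uniform(-2, 2, size=8)                  # training inputs
y = np.sin(2 * X) + 0.05 * rng.normal(size=8)   # noisy observations
Xs = np.linspace(-3, 3, 100)                    # test inputs
noise = 0.05 ** 2

# One joint prior draw over (test, train) locations.
Z = np.concatenate([Xs, X])
L = np.linalg.cholesky(k(Z, Z) + 1e-8 * np.eye(Z.size))
f = L @ rng.normal(size=Z.size)
f_star, f_train = f[:Xs.size], f[Xs.size:]

# Matheron's rule: posterior sample = prior sample + data-driven update.
eps = np.sqrt(noise) * rng.normal(size=X.size)
update = k(Xs, X) @ np.linalg.solve(k(X, X) + noise * np.eye(X.size),
                                    y - f_train - eps)
posterior_sample = f_star + update              # one function drawn from the posterior
```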
- Sample Complexity Bounds for 1-bit Compressive Sensing and Binary Stable Embeddings with Generative Priors [52.06292503723978]
Motivated by advances in compressive sensing with generative models, we study the problem of 1-bit compressive sensing with generative models.
We first consider noiseless 1-bit measurements, and provide sample complexity bounds for approximate recovery under i.i.d. Gaussian measurements.
We demonstrate that the Binary $\epsilon$-Stable Embedding property, which characterizes the robustness of the reconstruction to measurement errors and noise, also holds for 1-bit compressive sensing with Lipschitz continuous generative models.
arXiv Detail & Related papers (2020-02-05T09:44:10Z)
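To make the measurement model concrete: 1-bit compressive sensing keeps only the signs of linear measurements, so at best the signal's direction is recoverable. A small numpy sketch with i.i.d. Gaussian measurements and the simple correlation estimator (a generic baseline, not the paper's generative-model algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 50, 2000                 # ambient dimension, number of 1-bit measurements
x = rng.normal(size=n)
x /= np.linalg.norm(x)          # only the direction is identifiable from signs

A = rng.normal(size=(m, n))     # i.i.d. Gaussian measurement matrix
y = np.sign(A @ x)              # noiseless 1-bit measurements

x_hat = A.T @ y                 # correlation estimator: E[A^T sign(Ax)] is parallel to x
x_hat /= np.linalg.norm(x_hat)
print("cosine similarity:", float(x_hat @ x))   # approaches 1 as m grows
```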
This list is automatically generated from the titles and abstracts of the papers in this site.