The Generalized Lasso with Nonlinear Observations and Generative Priors
- URL: http://arxiv.org/abs/2006.12415v3
- Date: Thu, 8 Oct 2020 05:14:05 GMT
- Title: The Generalized Lasso with Nonlinear Observations and Generative Priors
- Authors: Zhaoqiang Liu, Jonathan Scarlett
- Score: 63.541900026673055
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we study the problem of signal estimation from noisy
non-linear measurements when the unknown $n$-dimensional signal is in the range
of an $L$-Lipschitz continuous generative model with bounded $k$-dimensional
inputs. We make the assumption of sub-Gaussian measurements, which is satisfied
by a wide range of measurement models, such as linear, logistic, 1-bit, and
other quantized models. In addition, we consider the impact of adversarial
corruptions on these measurements. Our analysis is based on a generalized Lasso
approach (Plan and Vershynin, 2016). We first provide a non-uniform recovery
guarantee, which states that under i.i.d.~Gaussian measurements, roughly
$O\left(\frac{k}{\epsilon^2}\log L\right)$ samples suffice for recovery with an
$\ell_2$-error of $\epsilon$, and that this scheme is robust to adversarial
noise. Then, we apply this result to neural network generative models, and
discuss various extensions to other models and non-i.i.d.~measurements.
Moreover, we show that our result can be extended to the uniform recovery
guarantee under the assumption of a so-called local embedding property, which
is satisfied by the 1-bit and censored Tobit models.
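The generalized Lasso approach described above minimizes $\|y - Ax\|_2$ over signals $x$ in the range of the generative model. The following sketch illustrates this for 1-bit measurements, using a *linear* generative model $G(z) = Bz$ purely for illustration (the paper allows any $L$-Lipschitz $G$, such as a neural network; linearity keeps the estimator closed-form). All dimensions and the matrix `B` are illustrative assumptions, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
k, n, m = 5, 100, 400          # latent dim, ambient dim, number of measurements

# Illustrative linear generative model G(z) = Bz; the paper's setting allows
# any L-Lipschitz G with bounded k-dimensional inputs.
B = rng.normal(size=(n, k)) / np.sqrt(n)

# Ground-truth signal in the range of G, normalized to unit norm
# (1-bit observations discard the scale anyway).
z_true = rng.normal(size=k)
x_true = B @ z_true
x_true /= np.linalg.norm(x_true)

# Nonlinear sub-Gaussian observations: 1-bit measurements y_i = sign(<a_i, x>).
A = rng.normal(size=(m, n))
y = np.sign(A @ x_true)

# Generalized Lasso (in the spirit of Plan and Vershynin, 2016):
# minimize ||y - A G(z)||_2 over z, which for linear G is ordinary
# least squares in the latent variable.
z_hat, *_ = np.linalg.lstsq(A @ B, y, rcond=None)
x_hat = B @ z_hat

# 1-bit measurements determine x only up to scale, so compare directions.
cos = x_hat @ x_true / (np.linalg.norm(x_hat) * np.linalg.norm(x_true))
print(f"cosine similarity with the true signal: {cos:.3f}")
```

With $m = 400$ measurements and $k = 5$ latent dimensions, the recovered direction aligns closely with the true signal, consistent with the $O(k/\epsilon^2 \cdot \log L)$ sample-complexity scaling (for a nonlinear $G$, the inner minimization would instead be solved iteratively, e.g. by projected gradient descent over $z$).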
Related papers
- von Mises Quasi-Processes for Bayesian Circular Regression [57.88921637944379]
We explore a family of expressive and interpretable distributions over circle-valued random functions.
The resulting probability model has connections with continuous spin models in statistical physics.
For posterior inference, we introduce a new Stratonovich-like augmentation that lends itself to fast Markov Chain Monte Carlo sampling.
arXiv Detail & Related papers (2024-06-19T01:57:21Z)
- Learning with Norm Constrained, Over-parameterized, Two-layer Neural Networks [54.177130905659155]
Recent studies show that a reproducing kernel Hilbert space (RKHS) is not a suitable space to model functions by neural networks.
In this paper, we study a suitable function space for over-parameterized two-layer neural networks with bounded norms.
arXiv Detail & Related papers (2024-04-29T15:04:07Z)
- Analysis of Bootstrap and Subsampling in High-dimensional Regularized Regression [29.57766164934947]
We investigate popular resampling methods for estimating the uncertainty of statistical models.
We provide a tight description of the biases and variances estimated by these methods in the context of generalized linear models.
arXiv Detail & Related papers (2024-02-21T08:50:33Z)
- Theory of free fermions under random projective measurements [43.04146484262759]
We develop an analytical approach to the study of one-dimensional free fermions subject to random projective measurements of local site occupation numbers.
We derive a non-linear sigma model (NLSM) as an effective field theory of the problem.
arXiv Detail & Related papers (2023-04-06T15:19:33Z)
- Off-the-grid prediction and testing for linear combination of translated features [2.774897240515734]
We consider a model where a signal (discrete or continuous) is observed with an additive Gaussian noise process.
We extend previous prediction results for off-the-grid estimators by taking into account that the scale parameter may vary.
We propose a procedure to test whether the features of the observed signal belong to a given finite collection.
arXiv Detail & Related papers (2022-12-02T13:48:45Z)
- On the Identifiability and Estimation of Causal Location-Scale Noise Models [122.65417012597754]
We study the class of location-scale or heteroscedastic noise models (LSNMs).
We show the causal direction is identifiable up to some pathological cases.
We propose two estimators for LSNMs: an estimator based on (non-linear) feature maps, and one based on neural networks.
arXiv Detail & Related papers (2022-10-13T17:18:59Z)
- Non-Iterative Recovery from Nonlinear Observations using Generative Models [14.772379476524407]
We assume that the signal lies in the range of an $L$-Lipschitz continuous generative model with bounded $k$-dimensional inputs.
Our reconstruction method is non-iterative (though approximating the projection step may use an iterative procedure) and highly efficient.
arXiv Detail & Related papers (2022-05-31T12:34:40Z)
- Inverting brain grey matter models with likelihood-free inference: a tool for trustable cytoarchitecture measurements [62.997667081978825]
The characterisation of the brain grey matter cytoarchitecture with quantitative sensitivity to soma density and volume remains an unsolved challenge in dMRI.
We propose a new forward model, specifically a new system of equations, requiring a few relatively sparse b-shells.
We then apply modern tools from Bayesian analysis known as likelihood-free inference (LFI) to invert our proposed model.
arXiv Detail & Related papers (2021-11-15T09:08:27Z)
- Sample Complexity Bounds for 1-bit Compressive Sensing and Binary Stable Embeddings with Generative Priors [52.06292503723978]
Motivated by advances in compressive sensing with generative models, we study the problem of 1-bit compressive sensing with generative models.
We first consider noiseless 1-bit measurements, and provide sample complexity bounds for approximate recovery under i.i.d. Gaussian measurements.
We demonstrate that the Binary $\epsilon$-Stable Embedding property, which characterizes the robustness of the reconstruction to measurement errors and noise, also holds for 1-bit compressive sensing with Lipschitz continuous generative models.
arXiv Detail & Related papers (2020-02-05T09:44:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.