A coherence parameter characterizing generative compressed sensing with
Fourier measurements
- URL: http://arxiv.org/abs/2207.09340v1
- Date: Tue, 19 Jul 2022 15:49:41 GMT
- Title: A coherence parameter characterizing generative compressed sensing with
Fourier measurements
- Authors: Aaron Berk, Simone Brugiapaglia, Babhru Joshi, Yaniv Plan, Matthew Scott, Özgür Yilmaz
- Abstract summary: We prove the first known restricted isometry guarantee for generative compressed sensing with subsampled isometries.
Recovery efficacy is characterized by the coherence, a new parameter, which measures the interplay between the range of the network and the measurement matrix.
- Score: 8.106641866299379
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In Bora et al. (2017), a mathematical framework was developed for compressed
sensing guarantees in the setting where the measurement matrix is Gaussian and
the signal structure is the range of a generative neural network (GNN). The
problem of compressed sensing with GNNs has since been extensively analyzed
when the measurement matrix and/or network weights follow a subgaussian
distribution. We move beyond the subgaussian assumption, to measurement
matrices that are derived by sampling uniformly at random rows of a unitary
matrix (including subsampled Fourier measurements as a special case).
Specifically, we prove the first known restricted isometry guarantee for
generative compressed sensing with subsampled isometries, and provide recovery
bounds with nearly order-optimal sample complexity, addressing an open problem
of Scarlett et al. (2022, p. 10). Recovery efficacy is characterized by the
coherence, a new parameter, which measures the interplay between the range of
the network and the measurement matrix. Our approach relies on subspace
counting arguments and ideas central to high-dimensional probability.
Furthermore, we propose a regularization strategy for training GNNs to have
favourable coherence with the measurement operator. We provide compelling
numerical simulations that support this regularized training strategy: our
strategy yields low coherence networks that require fewer measurements for
signal recovery. This, together with our theoretical results, supports
coherence as a natural quantity for characterizing generative compressed
sensing with subsampled isometries.
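To make the measurement model concrete, the following is a minimal, hypothetical sketch of generative compressed sensing with subsampled Fourier measurements, using a linear map as a stand-in for a trained generative network. The "coherence proxy" at the end is a simplified illustration of the kind of alignment the paper's coherence parameter measures, not its exact definition.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, m = 256, 10, 40          # ambient dim, latent dim, number of measurements

# Unitary DFT matrix; m rows are subsampled uniformly at random and rescaled.
F = np.fft.fft(np.eye(n)) / np.sqrt(n)
rows = rng.choice(n, size=m, replace=False)
A = np.sqrt(n / m) * F[rows]

# Linear "generator" standing in for a trained network, and a signal in its range.
W = rng.standard_normal((n, k)) / np.sqrt(n)
z_true = rng.standard_normal(k)
x_true = W @ z_true
y = A @ x_true                 # noiseless subsampled Fourier measurements

# Recovery reduces to least squares over the latent code in this linear toy.
z_hat, *_ = np.linalg.lstsq(A @ W, y, rcond=None)
x_hat = W @ z_hat
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))

# A simplified coherence proxy: worst-case alignment between the Fourier rows
# and an orthonormal basis Q of the generator's range (smaller is better).
Q, _ = np.linalg.qr(W)
print("coherence proxy:", np.sqrt(n) * np.abs(F @ Q).max())
```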
Related papers
- Weighted Riesz Particles [0.0]
We consider the target distribution as a mapping where the infinite-dimensional space of the parameters consists of a number of deterministic submanifolds.
We study the properties of these points, called Riesz particles, and embed them into a sequential MCMC scheme.
We find that this yields higher acceptance rates with fewer evaluations.
arXiv Detail & Related papers (2023-12-01T14:36:46Z)
- Multi-Irreducible Spectral Synchronization for Robust Rotation Averaging [1.2289361708127877]
We show how to estimate a set of unknown orientations $R_1, \dots, R_N \in \mathrm{SO}(3)$, given noisy measurements.
Results indicate how to devise measurement networks that are guaranteed to achieve accurate estimation.
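For background on the entry above, here is a minimal sketch of classical spectral synchronization over SO(3) in the noiseless case; the paper's multi-irreducible method is more sophisticated, and everything below is hypothetical and illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 8

def random_rotation():
    # Sample a rotation in SO(3) via QR decomposition.
    Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
    if np.linalg.det(Q) < 0:
        Q[:, 0] *= -1
    return Q

R_true = [random_rotation() for _ in range(N)]

# Block matrix of noiseless relative measurements R_i R_j^T.
M = np.zeros((3 * N, 3 * N))
for i in range(N):
    for j in range(N):
        M[3*i:3*i+3, 3*j:3*j+3] = R_true[i] @ R_true[j].T

# The top three eigenvectors span the stacked rotations up to a global
# orthogonal ambiguity; fix the reflection, then project blocks onto SO(3).
_, vecs = np.linalg.eigh(M)
V = vecs[:, -3:] * np.sqrt(N)
if np.linalg.det(V[:3]) < 0:
    V[:, -1] *= -1
R_est = []
for i in range(N):
    U, _, Vt = np.linalg.svd(V[3*i:3*i+3])
    R_est.append(U @ Vt)

# Relative rotations are invariant to the remaining global rotation.
err = max(np.linalg.norm(R_est[i] @ R_est[j].T - R_true[i] @ R_true[j].T)
          for i in range(N) for j in range(N))
print("max relative-rotation error:", err)
```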
arXiv Detail & Related papers (2023-11-28T06:25:26Z)
- Adaptive Log-Euclidean Metrics for SPD Matrix Learning [73.12655932115881]
We propose Adaptive Log-Euclidean Metrics (ALEMs), which extend the widely used Log-Euclidean Metric (LEM).
The experimental and theoretical results demonstrate the merit of the proposed metrics in improving the performance of SPD neural networks.
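For reference, a minimal sketch of the standard Log-Euclidean distance that ALEMs extend, using an eigendecomposition-based matrix logarithm; this is textbook material, not the paper's adaptive variant.

```python
import numpy as np

def spd_log(A):
    # Matrix logarithm of an SPD matrix via its eigendecomposition.
    w, U = np.linalg.eigh(A)
    return (U * np.log(w)) @ U.T

def lem_distance(A, B):
    # Log-Euclidean distance: Frobenius norm of log(A) - log(B).
    return np.linalg.norm(spd_log(A) - spd_log(B))

rng = np.random.default_rng(2)

def random_spd(d):
    X = rng.standard_normal((d, d))
    return X @ X.T + d * np.eye(d)   # SPD by construction

A, B = random_spd(4), random_spd(4)
print("LEM distance:", lem_distance(A, B))
```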
arXiv Detail & Related papers (2023-03-26T18:31:52Z)
- Scalable Stochastic Gradient Riemannian Langevin Dynamics in Non-Diagonal Metrics [3.8811062755861956]
We propose two non-diagonal metrics that can be used in stochastic-gradient samplers to improve convergence and exploration.
We show that for fully connected neural networks (NNs) with sparsity-inducing priors and convolutional NNs with correlated priors, using these metrics can provide improvements.
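As background, a minimal sketch of a single Langevin step preconditioned by a fixed non-diagonal metric; this toy assumes the metric is constant, whereas the paper's Riemannian samplers are considerably more involved.

```python
import numpy as np

rng = np.random.default_rng(3)

def langevin_step(theta, grad_log_post, G_inv, eps):
    """One step: theta + (eps/2) * G^{-1} grad + N(0, eps * G^{-1}) noise."""
    L = np.linalg.cholesky(G_inv)
    noise = np.sqrt(eps) * (L @ rng.standard_normal(theta.shape))
    return theta + 0.5 * eps * (G_inv @ grad_log_post(theta)) + noise

# Toy target: standard Gaussian posterior, so grad log p(theta) = -theta.
d = 5
G_inv = np.linalg.inv(np.eye(d) + 0.3 * np.ones((d, d)))   # non-diagonal metric
theta = rng.standard_normal(d)
for _ in range(1000):
    theta = langevin_step(theta, lambda t: -t, G_inv, eps=0.05)
print("sample after 1000 steps:", theta)
```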
arXiv Detail & Related papers (2023-03-09T08:20:28Z)
- On the Effective Number of Linear Regions in Shallow Univariate ReLU Networks: Convergence Guarantees and Implicit Bias [50.84569563188485]
We show that gradient flow converges in direction when labels are determined by the sign of a target network with $r$ neurons.
Our result may already hold for mild over-parameterization, where the width is $\tilde{\mathcal{O}}(r)$ and independent of the sample size.
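To make "effective number of linear regions" concrete, a minimal, hypothetical helper that counts the regions of a shallow univariate ReLU network by probing the slope between breakpoints.

```python
import numpy as np

def count_linear_regions(w, v, b, tol=1e-9):
    """Regions of f(x) = sum_i v_i * relu(w_i * x + b_i) where slope changes."""
    # Each unit contributes a potential breakpoint at x = -b_i / w_i.
    mask = np.abs(w) > tol
    knots = np.unique(-b[mask] / w[mask])
    if knots.size == 0:
        return 1
    # Probe the slope on each interval between consecutive breakpoints.
    probes = np.concatenate(([knots[0] - 1.0],
                             (knots[:-1] + knots[1:]) / 2,
                             [knots[-1] + 1.0]))
    slopes = [np.sum(v * w * ((w * x + b) > 0)) for x in probes]
    # A region is "effective" only where the slope actually changes.
    return 1 + int(np.sum(np.abs(np.diff(slopes)) > tol))

rng = np.random.default_rng(4)
r = 6
w, v, b = (rng.standard_normal(r) for _ in range(3))
print("effective linear regions:", count_linear_regions(w, v, b))
```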
arXiv Detail & Related papers (2022-05-18T16:57:10Z)
- Entanglement estimation in tensor network states via sampling [0.688204255655161]
We introduce a method for extracting meaningful entanglement measures of tensor network states in general dimensions.
We test our method on the one-dimensional critical XX chain and the two-dimensional toric code in a checkerboard geometry.
arXiv Detail & Related papers (2022-02-08T19:00:03Z)
- Robust Estimation for Nonparametric Families via Generative Adversarial Networks [92.64483100338724]
We provide a framework for designing Generative Adversarial Networks (GANs) to solve high dimensional robust statistics problems.
Our work extends these to robust mean estimation, second moment estimation, and robust linear regression.
In terms of techniques, our proposed GAN losses can be viewed as a smoothed and generalized Kolmogorov-Smirnov distance.
arXiv Detail & Related papers (2022-02-02T20:11:33Z)
- Learning a Compressive Sensing Matrix with Structural Constraints via Maximum Mean Discrepancy Optimization [17.104994036477308]
We introduce a learning-based algorithm to obtain a measurement matrix for compressive sensing recovery problems.
The recent success of metrics such as the maximum mean discrepancy (MMD) in neural network applications motivates a machine-learning-based solution to this problem.
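For context, a minimal sketch of a biased empirical MMD estimate with a Gaussian kernel, the kind of discrepancy such learning-based designs optimize; it is not the paper's training pipeline.

```python
import numpy as np

def mmd2(X, Y, sigma=1.0):
    """Biased MMD^2 estimate between samples X (n, d) and Y (m, d)."""
    def k(A, B):
        sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-sq / (2 * sigma ** 2))
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

rng = np.random.default_rng(5)
X = rng.standard_normal((200, 3))
Y = rng.standard_normal((200, 3)) + 0.5     # shifted distribution
print("MMD^2 (same dist):", mmd2(X, rng.standard_normal((200, 3))))
print("MMD^2 (shifted):  ", mmd2(X, Y))
```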
arXiv Detail & Related papers (2021-10-14T08:35:54Z)
- Understanding Implicit Regularization in Over-Parameterized Single Index Model [55.41685740015095]
We design regularization-free algorithms for the high-dimensional single index model.
We provide theoretical guarantees for the induced implicit regularization phenomenon.
arXiv Detail & Related papers (2020-07-16T13:27:47Z)
- Robust Compressed Sensing using Generative Models [98.64228459705859]
In this paper we propose an algorithm inspired by the Median-of-Means (MOM) estimator.
Our algorithm guarantees recovery for heavy-tailed data, even in the presence of outliers.
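For reference, a minimal sketch of the Median-of-Means idea that inspires the algorithm: split the data into blocks, average within each block, and return the median of the block means, which tolerates heavy tails and a few gross outliers.

```python
import numpy as np

def median_of_means(x, num_blocks, seed=6):
    # Shuffle, split into blocks, average each block, take the median.
    perm = np.random.default_rng(seed).permutation(x)
    blocks = np.array_split(perm, num_blocks)
    return np.median([b.mean() for b in blocks])

rng = np.random.default_rng(7)
x = rng.standard_normal(1000)
x[:10] = 1e6                       # gross outliers
print("plain mean:      ", x.mean())
print("median of means: ", median_of_means(x, num_blocks=20))
```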
arXiv Detail & Related papers (2020-06-16T19:07:41Z)
- Sample Complexity Bounds for 1-bit Compressive Sensing and Binary Stable Embeddings with Generative Priors [52.06292503723978]
Motivated by advances in compressive sensing with generative models, we study the problem of 1-bit compressive sensing with generative models.
We first consider noiseless 1-bit measurements, and provide sample complexity bounds for approximate recovery under i.i.d. Gaussian measurements.
We demonstrate that the Binary $\epsilon$-Stable Embedding property, which characterizes the robustness of the reconstruction to measurement errors and noise, also holds for 1-bit compressive sensing with Lipschitz continuous generative models.
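To make the 1-bit model concrete, a minimal sketch with a linear map standing in for the generative model and a crude back-projection estimate; the paper's guarantees concern Lipschitz generators and are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(8)
n, k, m = 100, 5, 400
G = rng.standard_normal((n, k)) / np.sqrt(n)   # stand-in "generator"
A = rng.standard_normal((m, n))                # i.i.d. Gaussian measurements

x = G @ rng.standard_normal(k)
x /= np.linalg.norm(x)                         # 1-bit measurements lose the scale
y = np.sign(A @ x)                             # noiseless 1-bit observations

# Crude baseline: back-project the signs onto the range of the generator.
Q, _ = np.linalg.qr(G)
x_hat = Q @ (Q.T @ (A.T @ y))
x_hat /= np.linalg.norm(x_hat)
print("cosine similarity:", float(x @ x_hat))
```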
arXiv Detail & Related papers (2020-02-05T09:44:10Z)