Signal Recovery with Non-Expansive Generative Network Priors
- URL: http://arxiv.org/abs/2204.13599v1
- Date: Sun, 24 Apr 2022 18:47:32 GMT
- Title: Signal Recovery with Non-Expansive Generative Network Priors
- Authors: Jorio Cocola
- Abstract summary: We study compressive sensing with a deep generative network prior.
We prove that a signal in the range of a Gaussian generative network can be recovered from a few linear measurements.
- Score: 1.52292571922932
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study compressive sensing with a deep generative network prior. Initial
theoretical guarantees for efficient recovery from compressed linear
measurements have been developed for signals in the range of a ReLU network
with Gaussian weights and logarithmic expansivity: that is, when each layer is
larger than the previous one by a logarithmic factor. It was later shown that
constant expansivity is sufficient for recovery. It has remained open whether
the expansivity can be relaxed, allowing for networks with contractive layers,
as is often the case for real generators. In this work we answer this question,
proving that a signal in the range of a Gaussian generative network can be
recovered from a few linear measurements provided that the width of the layers
is proportional to the input layer size (up to log factors). This condition
allows the generative network to have contractive layers. Our result is based
on showing that Gaussian matrices satisfy a matrix concentration inequality,
which we term the Range Restricted Weight Distribution Condition (R2WDC) and which
weakens the Weight Distribution Condition (WDC) upon which previous theoretical
guarantees were based. The WDC has also been used to analyze other signal
recovery problems with generative network priors. By replacing the WDC with the
R2WDC, we are able to extend previous results for signal recovery with
expansive generative network priors to non-expansive ones. We discuss these
extensions for phase retrieval, denoising, and spiked matrix recovery.
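To make the setting concrete, below is a minimal Python sketch (not the authors' code) of recovery with a generative prior: a two-layer ReLU network with Gaussian weights and a contractive second layer plays the role of G, and the latent code is estimated by plain gradient descent on the measurement loss ||A G(z) - y||^2. The layer sizes, step size, and optimizer are illustrative assumptions; the paper's guarantees rest on a more careful analysis than this vanilla descent.

```python
# Minimal sketch: recover x* = G(z*) from compressed linear measurements y = A x*
# by gradient descent on f(z) = ||A G(z) - y||^2 over the latent code z.
# Widths are illustrative; note n2 < n1, i.e. the second layer is contractive.
import numpy as np

rng = np.random.default_rng(0)
k, n1, n2, m = 10, 60, 40, 30            # latent dim, hidden widths, measurements

# Gaussian generative network G(z) = relu(W2 @ relu(W1 @ z))
W1 = rng.normal(0, 1 / np.sqrt(n1), (n1, k))
W2 = rng.normal(0, 1 / np.sqrt(n2), (n2, n1))
A  = rng.normal(0, 1 / np.sqrt(m),  (m, n2))   # Gaussian measurement matrix

def G(z):
    return np.maximum(W2 @ np.maximum(W1 @ z, 0), 0)

z_true = rng.normal(size=k)
y = A @ G(z_true)                        # compressed measurements of x* = G(z_true)

z = 0.1 * rng.normal(size=k)             # random initialization (illustrative only)
lr = 0.05
for _ in range(2000):
    h1 = W1 @ z; a1 = np.maximum(h1, 0)
    h2 = W2 @ a1; x = np.maximum(h2, 0)
    r = A @ x - y                        # measurement residual
    g2 = (A.T @ r) * (h2 > 0)            # backprop through the two ReLU layers
    g1 = (W2.T @ g2) * (h1 > 0)
    z -= lr * 2 * (W1.T @ g1)

print("relative recovery error:",
      np.linalg.norm(G(z) - G(z_true)) / np.linalg.norm(G(z_true)))
```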
Related papers
- Feature Learning and Generalization in Deep Networks with Orthogonal Weights [1.7956122940209063]
Deep neural networks with weights drawn from independent Gaussian distributions can be tuned to criticality.
These networks still exhibit fluctuations that grow linearly with the depth of the network.
We show analytically that rectangular networks with tanh activations and weights drawn from the ensemble of orthogonal matrices have preactivation fluctuations that do not grow with depth.
arXiv Detail & Related papers (2023-10-11T18:00:02Z) - Optimization Guarantees of Unfolded ISTA and ADMM Networks With Smooth
Soft-Thresholding [57.71603937699949]
We study optimization guarantees, i.e., achieving near-zero training loss as the number of learning epochs increases.
We show that the threshold on the number of training samples increases with the increase in the network width.
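For context, the sketch below shows the classical ISTA iteration that unfolded ISTA networks imitate layer by layer; the step size, regularization weight, and use of the standard (non-smooth) soft-thresholding operator are illustrative assumptions, since the paper analyzes a smooth variant.

```python
# Classical ISTA for sparse recovery: each iteration (one "layer" of an unfolded
# network) is x <- soft_threshold(x - t * A^T (A x - y), t * lam).
import numpy as np

def soft_threshold(v, tau):
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def ista(A, y, lam=0.1, n_iters=200):
    t = 1.0 / np.linalg.norm(A, 2) ** 2   # step size 1/L, L = ||A||_2^2
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        x = soft_threshold(x - t * A.T @ (A @ x - y), t * lam)
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 100)) / np.sqrt(50)
x_true = np.zeros(100)
x_true[rng.choice(100, 5, replace=False)] = 1.0
x_hat = ista(A, A @ x_true)
print("recovered support:", np.nonzero(np.abs(x_hat) > 0.1)[0])
```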
arXiv Detail & Related papers (2023-09-12T13:03:47Z) - Bayesian Interpolation with Deep Linear Networks [92.1721532941863]
Characterizing how neural network depth, width, and dataset size jointly impact model quality is a central problem in deep learning theory.
We show that linear networks make provably optimal predictions at infinite depth.
We also show that with data-agnostic priors, Bayesian model evidence in wide linear networks is maximized at infinite depth.
arXiv Detail & Related papers (2022-12-29T20:57:46Z) - On the Effective Number of Linear Regions in Shallow Univariate ReLU
Networks: Convergence Guarantees and Implicit Bias [50.84569563188485]
We show that gradient flow converges in direction when labels are determined by the sign of a target network with $r$ neurons.
Our result may already hold for mild over-parameterization, where the width is $\tilde{\mathcal{O}}(r)$ and independent of the sample size.
arXiv Detail & Related papers (2022-05-18T16:57:10Z) - Mean-field Analysis of Piecewise Linear Solutions for Wide ReLU Networks [83.58049517083138]
We consider a two-layer ReLU network trained via gradient descent.
We show that SGD is biased towards a simple solution.
We also provide empirical evidence that knots at locations distinct from the data points might occur.
arXiv Detail & Related papers (2021-11-03T15:14:20Z) - Solving Inverse Problems with Conditional-GAN Prior via Fast
Network-Projected Gradient Descent [11.247580943940918]
In this work we investigate a network-based projected gradient descent (NPGD) algorithm for measurement-conditional generative models.
We show that combining a measurement-conditional model with NPGD recovers the compressed signal well, achieving similar or in some cases better performance with a much faster reconstruction.
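The sketch below illustrates the general shape of network-projected gradient descent under simplifying assumptions (a toy one-layer ReLU generator, and an inner gradient-descent projection in place of the paper's fast measurement-conditional projector); it is not the authors' implementation.

```python
# NPGD-style loop: alternate a gradient step on ||A x - y||^2 in signal space
# with a projection of x back onto the range of a generator G.
import numpy as np

rng = np.random.default_rng(1)
k, n, m = 8, 50, 25
W = rng.normal(0, 1 / np.sqrt(n), (n, k))
A = rng.normal(0, 1 / np.sqrt(m), (m, n))

def G(z):                                  # toy one-layer ReLU generator
    return np.maximum(W @ z, 0)

def project_to_range(x, z, steps=20, lr=0.1):
    """Approximate argmin_z ||G(z) - x||^2 by gradient descent; return (G(z), z)."""
    for _ in range(steps):
        h = W @ z
        g = W.T @ ((np.maximum(h, 0) - x) * (h > 0))
        z = z - lr * g
    return np.maximum(W @ z, 0), z

y = A @ G(rng.normal(size=k))              # measurements of a signal in range(G)
x, z = np.zeros(n), np.zeros(k)
for _ in range(100):
    x = x - 0.5 * A.T @ (A @ x - y)        # gradient step in signal space
    x, z = project_to_range(x, z)          # project back onto range(G)
print("measurement residual:", np.linalg.norm(A @ x - y))
```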
arXiv Detail & Related papers (2021-09-02T17:28:05Z) - The Future is Log-Gaussian: ResNets and Their Infinite-Depth-and-Width
Limit at Initialization [18.613475245655806]
We study ReLU ResNets in the infinite-depth-and-width limit, where both depth and width tend to infinity while their ratio, $d/n$, remains constant.
Using Monte Carlo simulations, we demonstrate that even basic properties of standard ResNet architectures are poorly captured by the Gaussian limit.
arXiv Detail & Related papers (2021-06-07T23:47:37Z) - Phase Retrieval using Expectation Consistent Signal Recovery Algorithm
based on Hypernetwork [73.94896986868146]
Phase retrieval is an important component in modern computational imaging systems.
Recent advances in deep learning have opened up a new possibility for robust and fast PR.
We develop a novel framework for deep unfolding to overcome the existing limitations.
arXiv Detail & Related papers (2021-01-12T08:36:23Z) - Constant-Expansion Suffices for Compressed Sensing with Generative
Priors [26.41623833920794]
We prove a novel uniform concentration theorem for random functions that might not be Lipschitz but satisfy a relaxed notion of Lipschitz-ness.
Since the WDC is a fundamental matrix concentration inequality at the heart of all existing theoretical guarantees on this problem, our tighter bound yields improvements in all known results on signal recovery with generative priors, including one-bit recovery, low-rank matrix recovery, and more.
arXiv Detail & Related papers (2020-06-07T19:14:41Z) - Revisiting Initialization of Neural Networks [72.24615341588846]
We propose a rigorous estimation of the global curvature of weights across layers by approximating and controlling the norm of their Hessian matrix.
Our experiments on Word2Vec and the MNIST/CIFAR image classification tasks confirm that tracking the Hessian norm is a useful diagnostic tool.
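As a generic illustration of tracking a Hessian norm (not the estimator proposed in the paper), the sketch below uses power iteration on finite-difference Hessian-vector products, demonstrated on a toy quadratic loss.

```python
# Estimate the spectral norm of the Hessian of a loss at w via power iteration,
# with Hessian-vector products approximated by finite differences of the gradient.
import numpy as np

def hessian_norm(grad_fn, w, n_iters=50, eps=1e-4):
    v = np.random.default_rng(0).normal(size=w.shape)
    v /= np.linalg.norm(v)
    g0 = grad_fn(w)
    lam = 0.0
    for _ in range(n_iters):
        hv = (grad_fn(w + eps * v) - g0) / eps   # finite-difference H @ v
        lam = np.linalg.norm(hv)
        v = hv / (lam + 1e-12)
    return lam

# Toy quadratic loss f(w) = 0.5 * w^T Q w, whose Hessian norm is max eig of Q.
Q = np.diag([1.0, 3.0, 10.0])
print(hessian_norm(lambda w: Q @ w, np.zeros(3)))   # ~10.0
```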
arXiv Detail & Related papers (2020-04-20T18:12:56Z) - A Neural Network Based on First Principles [13.554038901140949]
A neural network is derived from first principles, assuming only that each layer begins with a linear dimension-reducing transformation.
The approach appeals to the principle of Maximum Entropy (MaxEnt) to find the posterior distribution of the input data of each layer.
arXiv Detail & Related papers (2020-02-18T10:16:59Z)
This list is automatically generated from the titles and abstracts of the papers on this site.