Towards Sample-Optimal Compressive Phase Retrieval with Sparse and
Generative Priors
- URL: http://arxiv.org/abs/2106.15358v1
- Date: Tue, 29 Jun 2021 12:49:54 GMT
- Title: Towards Sample-Optimal Compressive Phase Retrieval with Sparse and
Generative Priors
- Authors: Zhaoqiang Liu, Subhroshekhar Ghosh, Jonathan Scarlett
- Abstract summary: We show that $O(k \log L)$ samples suffice to guarantee that the signal is close to any vector that minimizes an amplitude-based empirical loss function.
We adapt this result to sparse phase retrieval, and show that $O(s \log n)$ samples are sufficient for a similar guarantee when the underlying signal is $s$-sparse and $n$-dimensional.
- Score: 59.33977545294148
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Compressive phase retrieval is a popular variant of the standard compressive
sensing problem, in which the measurements only contain magnitude information.
In this paper, motivated by recent advances in deep generative models, we
provide recovery guarantees with order-optimal sample complexity bounds for
phase retrieval with generative priors. We first show that when using i.i.d.
Gaussian measurements and an $L$-Lipschitz continuous generative model with
bounded $k$-dimensional inputs, roughly $O(k \log L)$ samples suffice to
guarantee that the signal is close to any vector that minimizes an
amplitude-based empirical loss function. Attaining this sample complexity with
a practical algorithm remains a difficult challenge, and a popular spectral
initialization method has been observed to pose a major bottleneck. To
partially address this, we further show that roughly $O(k \log L)$ samples
ensure sufficient closeness between the signal and any {\em globally optimal}
solution to an optimization problem designed for spectral initialization
(though finding such a solution may still be challenging). We adapt this result
to sparse phase retrieval, and show that $O(s \log n)$ samples are sufficient
for a similar guarantee when the underlying signal is $s$-sparse and
$n$-dimensional, matching an information-theoretic lower bound. While our
guarantees do not directly correspond to a practical algorithm, we propose a
practical spectral initialization method motivated by our findings, and
experimentally observe significant performance gains over various existing
spectral initialization methods for sparse phase retrieval.
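To make the two stages above concrete: a common amplitude-based empirical loss is $\frac{1}{m} \sum_{i=1}^{m} \big( |\langle a_i, x \rangle| - y_i \big)^2$, and spectral initialization estimates the signal from the leading eigenvector of a weighted covariance of the measurement vectors. The Python sketch below shows a generic spectral initializer for $s$-sparse phase retrieval in this spirit; the function name, the diagonal-score support estimate, and the norm estimate are standard ingredients from the sparse phase retrieval literature and should not be read as the authors' exact procedure.

```python
import numpy as np

def sparse_spectral_init(A, y, s):
    """Generic spectral initializer for s-sparse phase retrieval (a sketch,
    not the paper's exact method).

    A : (m, n) i.i.d. Gaussian measurement matrix.
    y : (m,)   magnitude-only measurements y_i = |<a_i, x*>|.
    s : assumed sparsity level of the signal.
    """
    m, n = A.shape
    # Coordinate scores (1/m) * sum_i y_i^2 * A_ij^2 concentrate around
    # ||x*||^2 + 2 * x*_j^2 for Gaussian a_i, so the s largest scores
    # give a support estimate.
    scores = (y ** 2) @ (A ** 2) / m
    support = np.sort(np.argsort(scores)[-s:])
    # Leading eigenvector of the weighted covariance restricted to the
    # estimated support: D = (1/m) * sum_i y_i^2 * a_{i,S} a_{i,S}^T.
    A_S = A[:, support]
    D = (A_S * (y ** 2)[:, None]).T @ A_S / m
    _, eigvecs = np.linalg.eigh(D)  # eigh returns ascending eigenvalues
    x0 = np.zeros(n)
    x0[support] = eigvecs[:, -1]
    # Scale by the norm estimate sqrt(mean(y^2)) ~ ||x*|| (Gaussian a_i).
    return x0 * np.sqrt(np.mean(y ** 2))
```

The two-step structure (support estimation followed by a cheap $s \times s$ eigenproblem) mirrors the kind of spectral-initialization optimization problem analyzed in the abstract, though the guarantees there concern globally optimal solutions rather than this particular heuristic.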
Related papers
- A Sample Efficient Alternating Minimization-based Algorithm For Robust Phase Retrieval [56.67706781191521]
In this work, we present a robust phase retrieval problem where the task is to recover an unknown signal.
The proposed oracle avoids the need for computationally expensive spectral initialization, using a simple gradient step that is robust to outliers.
arXiv Detail & Related papers (2024-09-07T06:37:23Z) - Model-adapted Fourier sampling for generative compressed sensing [7.130302992490975]
We study generative compressed sensing when the measurement matrix is randomly subsampled from a unitary matrix.
We construct a model-adapted sampling strategy with an improved sample complexity of $\textit{O}(kd\|\boldsymbol{\alpha}\|_2^2)$ measurements.
arXiv Detail & Related papers (2023-10-08T03:13:16Z) - Average case analysis of Lasso under ultra-sparse conditions [4.568911586155097]
We analyze the performance of the least absolute shrinkage and selection operator (Lasso) for the linear model when the number of regressors grows large.
The obtained bound for perfect support recovery is a generalization of that given in previous literature.
arXiv Detail & Related papers (2023-02-25T14:50:32Z) - Adaptive Sketches for Robust Regression with Importance Sampling [64.75899469557272]
We introduce data structures for solving robust regression through stochastic gradient descent (SGD).
Our algorithm effectively runs $T$ steps of SGD with importance sampling while using sublinear space and just making a single pass over the data.
arXiv Detail & Related papers (2022-07-16T03:09:30Z) - Breaking the Sample Complexity Barrier to Regret-Optimal Model-Free
Reinforcement Learning [52.76230802067506]
A novel model-free algorithm is proposed to minimize regret in episodic reinforcement learning.
The proposed algorithm employs an early-settled reference update rule, with the aid of two Q-learning sequences.
The design principle of our early-settled variance reduction method might be of independent interest to other RL settings.
arXiv Detail & Related papers (2021-10-09T21:13:48Z) - Faster Differentially Private Samplers via Rényi Divergence Analysis
of Discretized Langevin MCMC [35.050135428062795]
Langevin dynamics-based algorithms offer much faster alternatives under some distance measures such as statistical distance.
Our techniques are simple and generic, and they apply to underdamped Langevin dynamics.
arXiv Detail & Related papers (2020-10-27T22:52:45Z) - Adaptive Sampling for Best Policy Identification in Markov Decision
Processes [79.4957965474334]
We investigate the problem of best-policy identification in discounted Markov decision processes (MDPs) when the learner has access to a generative model.
The advantages of state-of-the-art algorithms are discussed and illustrated.
arXiv Detail & Related papers (2020-09-28T15:22:24Z) - Hadamard Wirtinger Flow for Sparse Phase Retrieval [24.17778927729799]
We consider the problem of reconstructing an $n$-dimensional $k$-sparse signal from a set of noiseless magnitude-only measurements.
Formulating the problem as an unregularized empirical risk minimization task, we study the sample complexity performance of gradient descent with Hadamard parametrization.
We numerically investigate the performance of HWF at convergence and show that, while not requiring any explicit form of regularization nor knowledge of $k$, HWF adapts to the signal sparsity and reconstructs sparse signals with fewer measurements than existing gradient-based methods (a minimal sketch of the Hadamard parametrization appears after this list).
arXiv Detail & Related papers (2020-06-01T16:41:27Z) - Breaking the Sample Size Barrier in Model-Based Reinforcement Learning
with a Generative Model [50.38446482252857]
This paper is concerned with the sample efficiency of reinforcement learning, assuming access to a generative model (or simulator).
We first consider $\gamma$-discounted infinite-horizon Markov decision processes (MDPs) with state space $\mathcal{S}$ and action space $\mathcal{A}$.
We prove that a plain model-based planning algorithm suffices to achieve minimax-optimal sample complexity given any target accuracy level.
arXiv Detail & Related papers (2020-05-26T17:53:18Z)
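As referenced in the Hadamard Wirtinger Flow entry above, the following is a minimal sketch of one HWF-style update: plain gradient descent on an intensity loss under the Hadamard parametrization $x = u \odot u - v \odot v$. Both the loss and the update rule are assumptions based on the standard Wirtinger-flow template rather than a verbatim transcription of the paper's algorithm.

```python
import numpy as np

def hwf_step(u, v, A, y2, lr):
    """One gradient step under the Hadamard parametrization
    x = u * u - v * v (an assumed but standard form).

    A  : (m, n) measurement matrix.
    y2 : (m,)   squared magnitude measurements |<a_i, x*>|^2.
    lr : step size.
    """
    m = A.shape[0]
    x = u * u - v * v
    Ax = A @ x
    r = Ax ** 2 - y2             # residuals of the intensity loss
    grad_x = A.T @ (r * Ax) / m  # gradient of the loss w.r.t. x
    # Chain rule through the parametrization: dx/du = 2u, dx/dv = -2v.
    u = u - lr * 2.0 * u * grad_x
    v = v + lr * 2.0 * v * grad_x
    return u, v
```

Initializing $u$ and $v$ near a small multiple of the all-ones vector keeps the coordinates of $x$ near zero unless their gradients are consistently large, which is one way to understand how HWF adapts to the signal sparsity without an explicit regularizer or knowledge of $k$.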
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.