Optimal Sample Complexity of Subgradient Descent for Amplitude Flow via
Non-Lipschitz Matrix Concentration
- URL: http://arxiv.org/abs/2011.00288v2
- Date: Fri, 27 Aug 2021 20:59:06 GMT
- Title: Optimal Sample Complexity of Subgradient Descent for Amplitude Flow via
Non-Lipschitz Matrix Concentration
- Authors: Paul Hand, Oscar Leong, Vladislav Voroninski
- Abstract summary: We consider the problem of recovering a real-valued $n$-dimensional signal from $m$ phaseless, linear measurements.
We establish local convergence of subgradient descent with optimal sample complexity based on the uniform concentration of a random, discontinuous matrix-valued operator.
- Score: 12.989855325491163
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider the problem of recovering a real-valued $n$-dimensional signal
from $m$ phaseless, linear measurements and analyze the amplitude-based
non-smooth least squares objective. We establish local convergence of
subgradient descent with optimal sample complexity based on the uniform
concentration of a random, discontinuous matrix-valued operator arising from
the objective's gradient dynamics. While common techniques to establish uniform
concentration of random functions exploit Lipschitz continuity, we prove that
the discontinuous matrix-valued operator satisfies a uniform matrix
concentration inequality when the measurement vectors are Gaussian as soon as
$m = \Omega(n)$ with high probability. We then show that satisfaction of this
inequality is sufficient for subgradient descent with proper initialization to
converge linearly to the true solution up to the global sign ambiguity. As a
consequence, this guarantees local convergence for Gaussian measurements at
optimal sample complexity. The concentration methods in the present work have
previously been used to establish recovery guarantees for a variety of inverse
problems under generative neural network priors. This paper demonstrates the
applicability of these techniques to more traditional inverse problems and
serves as a pedagogical introduction to those results.
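As a concrete illustration of the amplitude-based objective and its subgradient dynamics, here is a minimal numerical sketch (not the authors' code: the step size, iteration budget, and the use of a small perturbation of the truth in place of a proper spectral-type initialization are all illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 50, 500                      # m = Omega(n) Gaussian measurements
x_true = rng.standard_normal(n)
x_true /= np.linalg.norm(x_true)
A = rng.standard_normal((m, n))
y = np.abs(A @ x_true)              # phaseless measurements |<a_i, x>|

def amplitude_loss(x):
    """Amplitude-based non-smooth least squares objective."""
    return 0.5 / m * np.sum((np.abs(A @ x) - y) ** 2)

def subgradient(x):
    """A subgradient of the objective (sign(0) = 0 selects one element)."""
    r = A @ x
    return A.T @ ((np.abs(r) - y) * np.sign(r)) / m

# Local convergence: initialize near the truth to emulate proper initialization.
x = x_true + 0.1 * rng.standard_normal(n)
for _ in range(200):
    x -= 0.5 * subgradient(x)       # constant step size (illustrative choice)

# Error up to the global sign ambiguity x ~ -x.
dist = min(np.linalg.norm(x - x_true), np.linalg.norm(x + x_true))
```

Under this setup the iterates contract toward ±x_true, mirroring the local linear convergence guarantee (observed empirically here, not proved by the snippet).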
Related papers
- A Sample Efficient Alternating Minimization-based Algorithm For Robust Phase Retrieval [56.67706781191521]
In this work, we study a robust phase retrieval problem where the task is to recover an unknown signal from phaseless measurements, some of which may be corrupted by outliers.
Our proposed algorithm avoids the need for computationally expensive spectral initialization, using simple gradient steps instead.
arXiv Detail & Related papers (2024-09-07T06:37:23Z)
- Symmetric Matrix Completion with ReLU Sampling [15.095194065320987]
We study the problem of symmetric positive semi-definite low-rank matrix completion (MC) with entry-dependent sampling.
In particular, we consider rectified linear unit (ReLU) sampling, where only stationary points are observed.
arXiv Detail & Related papers (2024-06-09T15:14:53Z)
- Learning a Gaussian Mixture for Sparsity Regularization in Inverse Problems [2.375943263571389]
In inverse problems, the incorporation of a sparsity prior yields a regularization effect on the solution.
We propose a probabilistic sparsity prior formulated as a mixture of Gaussians, capable of modeling sparsity with respect to a generic basis.
We put forth both a supervised and an unsupervised training strategy to estimate the parameters of this network.
arXiv Detail & Related papers (2024-01-29T22:52:57Z)
- Robust Low-Rank Matrix Completion via a New Sparsity-Inducing Regularizer [30.920908325825668]
This paper presents a novel loss function, termed hybrid ordinary-Welsch (HOW), and a new sparsity-inducing regularizer for robust low-rank matrix completion.
arXiv Detail & Related papers (2023-10-07T09:47:55Z)
- Stochastic Mirror Descent for Large-Scale Sparse Recovery [13.500750042707407]
We discuss an application of stochastic mirror descent to statistical estimation of high-dimensional sparse parameters.
We show that the proposed algorithm attains the optimal convergence of the estimation error under weak assumptions on the regressor distribution.
arXiv Detail & Related papers (2022-10-23T23:23:23Z)
- On the Convergence of Stochastic Extragradient for Bilinear Games with Restarted Iteration Averaging [96.13485146617322]
We present an analysis of the stochastic ExtraGradient (SEG) method with constant step size, and present variations of the method that yield favorable convergence.
We prove that when augmented with averaging, SEG provably converges to the Nash equilibrium, and such a rate is provably accelerated by incorporating a scheduled restarting procedure.
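As a minimal sketch of the extragradient idea underlying SEG (a deterministic toy, not the paper's stochastic analysis; the payoff matrix, step size, and iteration count are arbitrary illustrative choices), consider the bilinear game min_x max_y x^T B y, whose unique Nash equilibrium is the origin:

```python
import numpy as np

B = np.diag([2.0, 1.0])        # bilinear game: min_x max_y x^T B y
x = np.ones(2)
y = np.ones(2)
eta = 0.2                      # constant step size (illustrative)

for _ in range(2000):
    # Extrapolation step: look ahead using the current gradients.
    x_half = x - eta * (B @ y)
    y_half = y + eta * (B.T @ x)
    # Update step: apply the gradients evaluated at the look-ahead point.
    x = x - eta * (B @ y_half)
    y = y + eta * (B.T @ x_half)

gap = np.linalg.norm(x) + np.linalg.norm(y)  # distance to the equilibrium (0, 0)
```

Plain simultaneous gradient descent-ascent diverges on bilinear games, while the extragradient look-ahead makes the iterates contract to the equilibrium.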
arXiv Detail & Related papers (2021-06-30T17:51:36Z)
- Spectral clustering under degree heterogeneity: a case for the random walk Laplacian [83.79286663107845]
This paper shows that graph spectral embedding using the random walk Laplacian produces vector representations which are completely corrected for node degree.
In the special case of a degree-corrected block model, the embedding concentrates about K distinct points, representing communities.
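To make the random walk Laplacian embedding concrete, here is a minimal sketch on a hypothetical toy graph (not the paper's experiments), using the standard identity that the eigenvectors of L_rw = I - D^{-1}A are the symmetric normalized Laplacian's eigenvectors rescaled by D^{-1/2}:

```python
import numpy as np

# Toy graph (an assumption for illustration): two 4-node cliques
# joined by a single bridge edge between nodes 3 and 4.
n = 8
A = np.zeros((n, n))
for block in (range(0, 4), range(4, 8)):
    for i in block:
        for j in block:
            if i != j:
                A[i, j] = 1.0
A[3, 4] = A[4, 3] = 1.0

d = A.sum(axis=1)                     # node degrees
# Symmetric normalized Laplacian L_sym = I - D^{-1/2} A D^{-1/2}.
L_sym = np.eye(n) - A / np.sqrt(np.outer(d, d))
eigvals, V = np.linalg.eigh(L_sym)    # eigenvalues in ascending order
embedding = V[:, 1] / np.sqrt(d)      # Fiedler-type eigenvector of L_rw
labels = embedding > 0                # sign split recovers the two cliques
```

The sign pattern of the second eigenvector separates the two communities, up to the arbitrary global sign of the eigenvector.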
arXiv Detail & Related papers (2021-05-03T16:36:27Z)
- Benign Overfitting of Constant-Stepsize SGD for Linear Regression [122.70478935214128]
Inductive biases are central in preventing overfitting empirically.
This work considers this issue in arguably the most basic setting: constant-stepsize SGD for linear regression.
We reflect on a number of notable differences between the algorithmic regularization afforded by (unregularized) SGD in comparison to ordinary least squares.
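A minimal simulation of this setting (the synthetic data, step size, and averaging scheme are illustrative assumptions, not the paper's experiments): single-pass SGD with a constant step size on a well-specified linear model, with iterate averaging:

```python
import numpy as np

rng = np.random.default_rng(2)
n_samples, dim = 5000, 10
w_true = rng.standard_normal(dim)
X = rng.standard_normal((n_samples, dim))
y = X @ w_true + 0.1 * rng.standard_normal(n_samples)  # well-specified model

w = np.zeros(dim)
eta = 0.01                      # constant step size (illustrative choice)
w_avg = np.zeros(dim)
for i in range(n_samples):      # a single pass over the data stream
    xi, yi = X[i], y[i]
    w -= eta * (xi @ w - yi) * xi   # SGD step on the squared loss
    w_avg += w
w_avg /= n_samples

err_last = np.linalg.norm(w - w_true)
err_avg = np.linalg.norm(w_avg - w_true)
```

With a constant step size the last iterate fluctuates in a noise ball around w_true, while averaging smooths those fluctuations at the cost of including the initial transient.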
arXiv Detail & Related papers (2021-03-23T17:15:53Z)
- Exact Linear Convergence Rate Analysis for Low-Rank Symmetric Matrix Completion via Gradient Descent [22.851500417035947]
Factorization-based gradient descent is a scalable and efficient algorithm for solving low-rank matrix completion.
We show that gradient descent enjoys fast linear convergence to the global solution of the problem.
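The factorization-based approach can be sketched as follows (a toy local-convergence demo; the problem sizes, sampling rate, and step size are arbitrary, and initialization is taken near the ground truth rather than via a spectral method):

```python
import numpy as np

rng = np.random.default_rng(3)
n, r = 20, 2
U_star = rng.standard_normal((n, r))
M = U_star @ U_star.T                    # ground-truth rank-r PSD matrix
mask = rng.random((n, n)) < 0.5
mask = mask | mask.T                     # symmetric observation pattern
np.fill_diagonal(mask, True)

def loss(U):
    """0.25 * || P_Omega(U U^T - M) ||_F^2 over the observed entries."""
    return 0.25 * np.sum(((U @ U.T - M) * mask) ** 2)

def grad(U):
    """Gradient of the factorized objective with respect to U."""
    return ((U @ U.T - M) * mask) @ U

U = U_star + 0.1 * rng.standard_normal((n, r))   # local initialization
eta = 0.02                                       # step size (illustrative)
for _ in range(1000):
    U -= eta * grad(U)

rel_err = np.linalg.norm(U @ U.T - M) / np.linalg.norm(M)
```

Gradient descent on the factor U avoids ever forming or projecting an n-by-n decision variable, which is what makes the approach scalable.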
arXiv Detail & Related papers (2021-02-04T03:41:54Z)
- Approximation Schemes for ReLU Regression [80.33702497406632]
We consider the fundamental problem of ReLU regression.
The goal is to output the best-fitting ReLU with respect to square loss given draws from some unknown distribution.
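A sketch of the simplest gradient method for this problem (realizable Gaussian data and initialization near the truth are assumptions made for illustration; the paper's actual approximation schemes are more sophisticated):

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

rng = np.random.default_rng(4)
dim, m = 5, 2000
w_true = rng.standard_normal(dim)
X = rng.standard_normal((m, dim))
y = relu(X @ w_true)                 # realizable, noiseless ReLU labels

w = w_true + 0.3 * rng.standard_normal(dim)  # local initialization (assumption)
eta = 0.5                                    # step size (illustrative)
for _ in range(300):
    z = X @ w
    # (Sub)gradient of the empirical square loss 0.5/m * sum (relu(z) - y)^2,
    # using the indicator (z > 0) as the derivative of relu.
    g = ((relu(z) - y) * (z > 0)) @ X / m
    w -= eta * g

err = np.linalg.norm(w - w_true)
```

In the realizable noiseless case the empirical loss vanishes at w_true, so locally initialized gradient descent recovers it; the hard regime studied in such papers is the agnostic setting, where no ReLU fits the data exactly.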
arXiv Detail & Related papers (2020-05-26T16:26:17Z)
- On Linear Stochastic Approximation: Fine-grained Polyak-Ruppert and Non-Asymptotic Concentration [115.1954841020189]
We study the asymptotic and non-asymptotic properties of linear stochastic approximation procedures with Polyak-Ruppert averaging.
We prove a central limit theorem (CLT) for the averaged iterates with fixed step size and number of iterations going to infinity.
arXiv Detail & Related papers (2020-04-09T17:54:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.