On the Sample Complexity and Optimization Landscape for Quadratic
Feasibility Problems
- URL: http://arxiv.org/abs/2002.01066v2
- Date: Mon, 14 Dec 2020 19:10:15 GMT
- Title: On the Sample Complexity and Optimization Landscape for Quadratic
Feasibility Problems
- Authors: Parth Thaker, Gautam Dasarathy, and Angelia Nedi\'c
- Abstract summary: We consider the problem of recovering a complex vector $\mathbf{x}\in \mathbb{C}^n$ from $m$ quadratic measurements $\{\langle A_i\mathbf{x}, \mathbf{x}\rangle\}_{i=1}^m$ for arbitrary matrices $A_i$.
In general, not only is the quadratic problem NP-hard to solve, but it may in fact be unidentifiable.
- Score: 7.722592882475735
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider the problem of recovering a complex vector $\mathbf{x}\in
\mathbb{C}^n$ from $m$ quadratic measurements $\{\langle A_i\mathbf{x},
\mathbf{x}\rangle\}_{i=1}^m$. This problem, known as quadratic feasibility,
encompasses the well known phase retrieval problem and has applications in a
wide range of important areas including power system state estimation and x-ray
crystallography. In general, not only is the quadratic feasibility problem
NP-hard to solve, but it may in fact be unidentifiable. In this paper, we
establish conditions under which this problem becomes {identifiable}, and
further prove isometry properties in the case when the matrices
$\{A_i\}_{i=1}^m$ are Hermitian matrices sampled from a complex Gaussian
distribution. Moreover, we explore a nonconvex {optimization} formulation of
this problem, and establish salient features of the associated optimization
landscape that enables gradient algorithms with an arbitrary initialization to
converge to a \emph{globally optimal} point with a high probability. Our
results also reveal sample complexity requirements for successfully identifying
a feasible solution in these contexts.
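To make the measurement model concrete, here is a minimal NumPy sketch of the quadratic feasibility setup from the abstract; the dimensions and the Gaussian ensemble normalization are illustrative assumptions, not taken from the paper. It also checks the well-known global-phase ambiguity: any unit-modulus rescaling of $\mathbf{x}$ yields identical measurements, so $\mathbf{x}$ is at best identifiable up to a global phase.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 8, 40  # illustrative dimensions (not from the paper)

# Ground-truth complex signal x in C^n.
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)

def random_hermitian(n, rng):
    """Sample a Hermitian matrix from a complex Gaussian ensemble."""
    G = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (G + G.conj().T) / 2

A = [random_hermitian(n, rng) for _ in range(m)]

# Quadratic measurements <A_i x, x> = x^* A_i x; Hermitian A_i makes them real.
b = np.array([np.vdot(x, Ai @ x).real for Ai in A])

# A global phase rotation of x produces identical measurements.
x_rot = np.exp(1j * 0.7) * x
b_rot = np.array([np.vdot(x_rot, Ai @ x_rot).real for Ai in A])
assert np.allclose(b, b_rot)
```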
Related papers
- A Sample Efficient Alternating Minimization-based Algorithm For Robust Phase Retrieval [56.67706781191521]
In this work, we present a robust phase retrieval problem where the task is to recover an unknown signal.
Our proposed method avoids the need for computationally expensive spectral initialization, using simple gradient steps that are robust to outliers.
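The phase retrieval connection can be sketched concretely: choosing rank-one matrices $A_i = a_i a_i^*$ in the quadratic feasibility model reduces the quadratic measurements to intensities $|a_i^* \mathbf{x}|^2$. A minimal check, with illustrative dimensions (not from either paper):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 6, 30  # illustrative dimensions
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)

# Phase retrieval is the rank-one special case A_i = a_i a_i^*,
# for which <A_i x, x> = |a_i^* x|^2 (an intensity measurement).
a = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
quad = np.array([np.vdot(x, np.outer(ai, ai.conj()) @ x).real for ai in a])
intensity = np.abs(a.conj() @ x) ** 2
assert np.allclose(quad, intensity)
```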
arXiv Detail & Related papers (2024-09-07T06:37:23Z) - Stochastic First-Order Methods with Non-smooth and Non-Euclidean Proximal Terms for Nonconvex High-Dimensional Stochastic Optimization [2.0657831823662574]
For such nonconvex problems, the sample complexity of first-order methods may depend linearly on the problem dimension, which is undesirable for high-dimensional problems.
We prove that DISFOM can sharpen the variance, achieving a sample complexity of $\mathcal{O}(\log d)/\epsilon^4$.
arXiv Detail & Related papers (2024-06-27T18:38:42Z) - Checking the Sufficiently Scattered Condition using a Global Non-Convex
Optimization Software [16.556378176193032]
The sufficiently scattered condition (SSC) is a key condition in the identifiability of various matrix factorization problems.
We show that it can be checked in a reasonable amount of time in realistic scenarios.
arXiv Detail & Related papers (2024-02-08T19:41:38Z) - Sample Complexity for Quadratic Bandits: Hessian Dependent Bounds and
Optimal Algorithms [64.10576998630981]
We show the first tight characterization of the optimal Hessian-dependent sample complexity.
A Hessian-independent algorithm universally achieves the optimal sample complexities for all Hessian instances.
The optimal sample complexities achieved by our algorithm remain valid for heavy-tailed noise distributions.
arXiv Detail & Related papers (2023-06-21T17:03:22Z) - Randomized Coordinate Subgradient Method for Nonsmooth Composite
Optimization [11.017632675093628]
Coordinate-type subgradient methods for addressing nonsmooth problems are relatively underexplored due to the restrictive nature of the Lipschitz-type assumptions they typically require.
arXiv Detail & Related papers (2022-06-30T02:17:11Z) - Efficient Algorithms for High-Dimensional Convex Subspace Optimization
via Strict Complementarity [19.24470467199451]
We consider optimization problems in which the goal is to find a $k$-dimensional subspace of $\mathbb{R}^n$, $k \ll n$, which minimizes a convex and smooth loss.
While this problem is highly relevant in high-dimensional regimes, it is difficult to find a globally optimal solution.
In this paper we present a natural relaxation for which convergence to the global optimum is straightforward.
arXiv Detail & Related papers (2022-02-08T17:36:43Z) - Learning Linear Models Using Distributed Iterative Hessian Sketching [4.567810220723372]
We consider the problem of learning the Markov parameters of a linear system from observed data.
We show that a randomized and distributed Newton algorithm based on Hessian-sketching can produce $epsilon$-optimal solutions.
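As a sketch of the Hessian-sketching idea, the following is a generic iterative Hessian sketch for least squares, not the paper's distributed algorithm for Markov parameters; all problem sizes, the sketch dimension, and the iteration count are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
n_samples, n_params, sketch_size = 500, 20, 200  # illustrative sizes

A = rng.standard_normal((n_samples, n_params))
x_true = rng.standard_normal(n_params)
b = A @ x_true

# Iterative Hessian sketch for min ||Ax - b||^2: use the exact gradient
# but replace the Hessian A^T A with the sketched (S A)^T (S A).
x = np.zeros(n_params)
for _ in range(10):
    S = rng.standard_normal((sketch_size, n_samples)) / np.sqrt(sketch_size)
    SA = S @ A
    grad = A.T @ (A @ x - b)
    x = x - np.linalg.solve(SA.T @ SA, grad)

err = np.linalg.norm(x - x_true)  # contracts toward the exact solution
```

Each iteration solves a much smaller `n_params`-by-`n_params` system built from the sketched matrix, trading a little per-step accuracy for cheaper linear algebra.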
arXiv Detail & Related papers (2021-12-08T04:07:23Z) - Global Convergence of Gradient Descent for Asymmetric Low-Rank Matrix
Factorization [49.090785356633695]
We study the asymmetric low-rank factorization problem: $\min_{\mathbf{U} \in \mathbb{R}^{m \times d},\, \mathbf{V} \in \mathbb{R}^{n \times d}} \frac{1}{2}\|\mathbf{U}\mathbf{V}^\top - \mathbf{\Sigma}\|_F^2$, minimized jointly over $\mathbf{U}$ and $\mathbf{V}$.
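A toy gradient-descent sketch for this factorization objective, starting from a small random initialization; the step size, iteration count, and problem sizes are illustrative choices, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, d = 10, 8, 3  # illustrative sizes

# Rank-d target matrix Sigma.
Sigma = rng.standard_normal((m, d)) @ rng.standard_normal((d, n))

# Small random initialization; the cited paper studies when plain
# gradient descent on this nonconvex objective converges globally.
U = 0.01 * rng.standard_normal((m, d))
V = 0.01 * rng.standard_normal((n, d))

eta = 0.02  # step size (illustrative)
for _ in range(5000):
    R = U @ V.T - Sigma                          # residual
    U, V = U - eta * R @ V, V - eta * R.T @ U    # grads of 0.5*||R||_F^2

loss = 0.5 * np.linalg.norm(U @ V.T - Sigma, "fro") ** 2
```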
arXiv Detail & Related papers (2021-06-27T17:25:24Z) - The limits of min-max optimization algorithms: convergence to spurious
non-critical sets [82.74514886461257]
Min-max optimization algorithms encounter difficulties far greater than in minimization because of the existence of periodic cycles and similar phenomena.
We show that such algorithms may converge to spurious sets that contain no critical points of the problem.
We illustrate such challenges in simple, almost bilinear problems.
arXiv Detail & Related papers (2020-06-16T10:49:27Z) - Stochastic Saddle-Point Optimization for Wasserstein Barycenters [69.68068088508505]
We consider the population Wasserstein barycenter problem for random probability measures supported on a finite set of points and generated by an online stream of data.
We employ the structure of the problem and obtain a convex-concave saddle-point reformulation of this problem.
In the setting when the distribution of random probability measures is discrete, we propose an optimization algorithm and estimate its complexity.
arXiv Detail & Related papers (2020-06-11T19:40:38Z) - Convex Geometry and Duality of Over-parameterized Neural Networks [70.15611146583068]
We develop a convex analytic approach to analyze finite width two-layer ReLU networks.
We show that an optimal solution to the regularized training problem can be characterized as extreme points of a convex set.
In higher dimensions, we show that the training problem can be cast as a finite dimensional convex problem with infinitely many constraints.
arXiv Detail & Related papers (2020-02-25T23:05:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.