How Over-Parameterization Slows Down Gradient Descent in Matrix Sensing:
The Curses of Symmetry and Initialization
- URL: http://arxiv.org/abs/2310.01769v3
- Date: Fri, 24 Nov 2023 18:08:25 GMT
- Title: How Over-Parameterization Slows Down Gradient Descent in Matrix Sensing:
The Curses of Symmetry and Initialization
- Authors: Nuoya Xiong, Lijun Ding, Simon S. Du
- Abstract summary: We show how over-parameterization changes the convergence behavior of gradient descent (GD).
The goal is to recover an unknown low-rank ground-truth matrix from near-isotropic linear measurements.
We propose a novel method that only modifies one step of GD and obtains a convergence rate independent of $\alpha$.
- Score: 46.55524654398093
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper rigorously shows how over-parameterization changes the convergence
behaviors of gradient descent (GD) for the matrix sensing problem, where the
goal is to recover an unknown low-rank ground-truth matrix from near-isotropic
linear measurements. First, we consider the symmetric setting with the
symmetric parameterization where $M^* \in \mathbb{R}^{n \times n}$ is a
positive semi-definite unknown matrix of rank $r \ll n$, and one uses a
symmetric parameterization $XX^\top$ to learn $M^*$. Here $X \in \mathbb{R}^{n
\times k}$ with $k > r$ is the factor matrix. We give a novel $\Omega (1/T^2)$
lower bound of randomly initialized GD for the over-parameterized case ($k >r$)
where $T$ is the number of iterations. This is in stark contrast to the
exact-parameterization scenario ($k=r$) where the convergence rate is $\exp
(-\Omega (T))$. Next, we study the asymmetric setting where $M^* \in
\mathbb{R}^{n_1 \times n_2}$ is the unknown matrix of rank $r \ll
\min\{n_1,n_2\}$, and one uses an asymmetric parameterization $FG^\top$ to
learn $M^*$ where $F \in \mathbb{R}^{n_1 \times k}$ and $G \in \mathbb{R}^{n_2
\times k}$. Building on prior work, we give a global exact convergence result
of randomly initialized GD for the exact-parameterization case ($k=r$) with an
$\exp (-\Omega(T))$ rate. Furthermore, we give the first global exact
convergence result for the over-parameterization case ($k>r$) with an
$\exp(-\Omega(\alpha^2 T))$ rate where $\alpha$ is the initialization scale.
This linear convergence result in the over-parameterization case is especially
significant because one can apply the asymmetric parameterization to the
symmetric setting to speed up from $\Omega (1/T^2)$ to linear convergence. On
the other hand, we propose a novel method that only modifies one step of GD and
obtains a convergence rate independent of $\alpha$, recovering the rate in the
exact-parameterization case.
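To make the two parameterizations in the abstract concrete, below is a minimal NumPy sketch (not the authors' code): it builds near-isotropic Gaussian measurements of a rank-$r$ PSD matrix $M^*$ and runs plain GD on both the symmetric $XX^\top$ and asymmetric $FG^\top$ factorizations from a small random initialization of scale $\alpha$. The dimensions, number of measurements, step size, and iteration budget are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r, k = 20, 2, 5                      # ambient dimension, true rank r, over-parameterized rank k > r
m = 600                                 # number of measurements (assumed; enough for near-isotropy here)
eta, T, alpha = 0.1, 3000, 0.1          # step size, iterations, initialization scale alpha (assumed)

# Rank-r PSD ground truth M*, normalized to unit spectral norm.
U0 = rng.standard_normal((n, r))
M_star = U0 @ U0.T
M_star /= np.linalg.norm(M_star, 2)

# Near-isotropic linear measurements y_i = <A_i, M*> with symmetric Gaussian A_i.
A = rng.standard_normal((m, n, n)) / np.sqrt(m)
A = (A + A.transpose(0, 2, 1)) / 2      # symmetrize so the symmetric-case gradient below is exact
y = np.einsum('mij,ij->m', A, M_star)

def grad_M(M):
    """Gradient of the loss 0.5 * sum_i (<A_i, M> - y_i)^2 with respect to M."""
    res = np.einsum('mij,ij->m', A, M) - y
    return np.einsum('m,mij->ij', res, A)

# (1) Symmetric over-parameterization M = X X^T with k > r: the abstract's lower
#     bound says randomly initialized GD here is slow (an Omega(1/T^2)-type rate).
X = alpha * rng.standard_normal((n, k))
for _ in range(T):
    X = X - eta * 2 * grad_M(X @ X.T) @ X

# (2) Asymmetric parameterization M = F G^T applied to the same symmetric target:
#     the abstract reports an exp(-Omega(alpha^2 T)) linear rate from scale-alpha init.
F = alpha * rng.standard_normal((n, k))
G = alpha * rng.standard_normal((n, k))
for _ in range(T):
    R = grad_M(F @ G.T)
    F, G = F - eta * R @ G, G - eta * R.T @ F

print("symmetric  ||X X^T - M*||_F =", np.linalg.norm(X @ X.T - M_star))
print("asymmetric ||F G^T - M*||_F =", np.linalg.norm(F @ G.T - M_star))
```

Printing both recovery errors lets one compare the two parameterizations under the same measurements and initialization scale; the one-step modification of GD mentioned at the end of the abstract is not reproduced here, since the abstract does not specify it.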
Related papers
- The Communication Complexity of Approximating Matrix Rank [50.6867896228563]
We show that this problem has randomized communication complexity $\Omega(\frac{1}{k}\cdot n^2\log|\mathbb{F}|)$.
As an application, we obtain an $\Omega(\frac{1}{k}\cdot n^2\log|\mathbb{F}|)$ space lower bound for any streaming algorithm with $k$ passes.
arXiv Detail & Related papers (2024-10-26T06:21:42Z)
- Optimal Sketching for Residual Error Estimation for Matrix and Vector Norms [50.15964512954274]
We study the problem of residual error estimation for matrix and vector norms using a linear sketch.
We demonstrate that this gives a substantial advantage empirically, for roughly the same sketch size and accuracy as in previous work.
We also show an $\Omega(k^{2/p} n^{1-2/p})$ lower bound for the sparse recovery problem, which is tight up to a $\mathrm{poly}(\log n)$ factor.
arXiv Detail & Related papers (2024-08-16T02:33:07Z)
- Convergence of Gradient Descent with Small Initialization for Unregularized Matrix Completion [21.846732043706318]
We show that vanilla gradient descent provably converges to the ground truth $\mathrm{X}^\star$ without requiring any explicit regularization.
Surprisingly, neither the convergence rate nor the final accuracy depends on the over-parameterized search rank $r'$, and they are only governed by the true rank $r$.
arXiv Detail & Related papers (2024-02-09T19:39:23Z)
- On the $O(\frac{\sqrt{d}}{T^{1/4}})$ Convergence Rate of RMSProp and Its Momentum Extension Measured by $\ell_1$ Norm [59.65871549878937]
This paper considers RMSProp and its momentum extension and establishes a convergence rate of $O(\frac{\sqrt{d}}{T^{1/4}})$ for the averaged gradient $\ell_1$ norm $\frac{1}{T}\sum_{k=1}^{T}\|\nabla f(x_k)\|_1$.
Our convergence rate matches the lower bound with respect to all the coefficients except the dimension $d$.
Our convergence rate can be considered analogous to the corresponding $\frac{1}{T}\sum_{k=1}^{T}\|\nabla f(x_k)\|_2$ bound measured by the $\ell_2$ norm.
arXiv Detail & Related papers (2024-02-01T07:21:32Z)
- Solving Dense Linear Systems Faster Than via Preconditioning [1.8854491183340518]
We show that our algorithm has an $\tilde{O}(n^2)$ complexity when $k=O(n^{0.729})$.
Our main algorithm can be viewed as a randomized block coordinate descent method.
arXiv Detail & Related papers (2023-12-14T12:53:34Z)
- Convergence of Alternating Gradient Descent for Matrix Factorization [5.439020425819001]
We consider alternating gradient descent (AGD) with fixed step size applied to the asymmetric matrix factorization objective.
We show that, for a rank-$r$ matrix $\mathbf{A} \in \mathbb{R}^{m \times n}$, the smoothness constant $C$ in the iteration complexity $T$ is an absolute constant (a minimal AGD sketch appears after this list).
arXiv Detail & Related papers (2023-05-11T16:07:47Z)
- Spectral properties of sample covariance matrices arising from random matrices with independent non identically distributed columns [50.053491972003656]
It was previously shown that the functionals $\operatorname{tr}(AR(z))$, for $R(z) = (\frac{1}{n}XX^T - zI_p)^{-1}$ and $A \in \mathcal{M}_p$ deterministic, have a standard deviation of order $O(\|A\|_* / \sqrt{n})$.
Here, we bound $\|\mathbb{E}[R(z)] - \tilde{R}(z)\|_F$.
arXiv Detail & Related papers (2021-09-06T14:21:43Z)
- Global Convergence of Gradient Descent for Asymmetric Low-Rank Matrix Factorization [49.090785356633695]
We study the asymmetric low-rank factorization problem over factor matrices $\mathbf{U} \in \mathbb{R}^{m \times d}$ and $\mathbf{V} \in \mathbb{R}^{n \times d}$.
arXiv Detail & Related papers (2021-06-27T17:25:24Z)
- Convergence Rate of the (1+1)-Evolution Strategy with Success-Based Step-Size Adaptation on Convex Quadratic Functions [20.666734673282498]
The (1+1)-evolution strategy (ES) with success-based step-size adaptation is analyzed on a general convex quadratic function, and its convergence rate is derived explicitly and rigorously.
arXiv Detail & Related papers (2021-03-02T09:03:44Z)
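Below is the minimal sketch referenced in the alternating-gradient-descent entry above: plain AGD with a fixed step size on the objective $\frac{1}{2}\|\mathbf{U}\mathbf{V}^\top - \mathbf{A}\|_F^2$. The dimensions, step size, and initialization are illustrative assumptions, not values taken from that paper.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, r = 30, 40, 3                                   # illustrative sizes
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))   # rank-r target
A /= np.linalg.norm(A, 2)                             # unit spectral norm so a fixed step size is safe

U = 1e-2 * rng.standard_normal((m, r))                # small random initialization
V = 1e-2 * rng.standard_normal((n, r))
eta = 0.1                                             # fixed step size (assumed)

for _ in range(3000):
    U = U - eta * (U @ V.T - A) @ V                   # gradient step in U with V held fixed
    V = V - eta * (U @ V.T - A).T @ U                 # then a step in V using the freshly updated U

print("||UV^T - A||_F =", np.linalg.norm(U @ V.T - A))
```

Updating $\mathbf{V}$ with the freshly updated $\mathbf{U}$ is what makes the scheme alternating rather than simultaneous gradient descent.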
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.