On the Optimality of Nuclear-norm-based Matrix Completion for Problems
with Smooth Non-linear Structure
- URL: http://arxiv.org/abs/2105.01874v1
- Date: Wed, 5 May 2021 05:34:32 GMT
- Title: On the Optimality of Nuclear-norm-based Matrix Completion for Problems
with Smooth Non-linear Structure
- Authors: Yunhua Xiang, Tianyu Zhang, Xu Wang, Ali Shojaie, Noah Simon
- Abstract summary: Matrix completion has proven widely effective in many problems where there is no reason to assume low-dimensional linear structure in the underlying matrix.
We show that nuclear-norm penalization is still effective for recovering these matrices when observations are missing completely at random.
- Score: 19.069068837749885
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Originally developed for imputing missing entries in low rank, or
approximately low rank matrices, matrix completion has proven widely effective
in many problems where there is no reason to assume low-dimensional linear
structure in the underlying matrix, as would be imposed by rank constraints. In
this manuscript, we build some theoretical intuition for this behavior. We
consider matrices which are not necessarily low-rank, but lie in a
low-dimensional non-linear manifold. We show that nuclear-norm penalization is
still effective for recovering these matrices when observations are missing
completely at random. In particular, we give upper bounds on the rate of
convergence as a function of the number of rows, columns, and observed entries
in the matrix, as well as the smoothness and dimension of the non-linear
embedding. We additionally give a minimax lower bound: This lower bound agrees
with our upper bound (up to a logarithmic factor), which shows that
nuclear-norm penalization is (up to log terms) minimax rate optimal for these
problems.
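To make the estimator concrete, here is a minimal sketch (not the authors' code; the matrix sizes, penalty level, and iteration count are illustrative choices) of nuclear-norm penalized matrix completion solved by proximal gradient descent, i.e. repeated singular-value soft-thresholding, on entries observed completely at random:

    import numpy as np

    def svt(A, tau):
        # Singular-value soft-thresholding: the prox operator of tau * (nuclear norm).
        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

    def complete(M_obs, mask, lam, step=1.0, iters=300):
        # Minimize 0.5 * ||P_Omega(X - M)||_F^2 + lam * ||X||_* by proximal gradient.
        X = np.zeros_like(M_obs)
        for _ in range(iters):
            grad = mask * (X - M_obs)             # gradient of the observed-entry loss
            X = svt(X - step * grad, step * lam)  # prox step on the nuclear norm
        return X

    # Toy instance of the paper's setting: a smooth non-linear (not exactly
    # low-rank) matrix with entries missing completely at random.
    rng = np.random.default_rng(0)
    u, v = rng.uniform(size=60), rng.uniform(size=60)
    M = np.exp(np.sin(2 * np.pi * (u[:, None] + v[None, :])))
    mask = (rng.uniform(size=M.shape) < 0.5).astype(float)
    X_hat = complete(mask * M, mask, lam=0.5)
    print(np.linalg.norm(X_hat - M) / np.linalg.norm(M))

The paper's point is that this estimator remains minimax rate optimal (up to log factors) even though M above is not exactly low rank; the smooth low-dimensional embedding is what matters.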
Related papers
- Spiky Rank and Its Applications to Rigidity and Circuits [0.0]
Spiky rank is a new matrix parameter that enhances blocky rank by combining the structure of the latter with linear-algebraic flexibility.
We show that large spiky rank implies high matrix rigidity, and that spiky rank lower bounds yield lower bounds for depth-2 ReLU circuits.
arXiv Detail & Related papers (2026-02-26T21:20:00Z)
- Computational Efficient Informative Nonignorable Matrix Completion: A Row- and Column-Wise Matrix U-Statistic Pseudo-Likelihood Approach [2.2306682526405868]
We establish a unified framework to deal with the high dimensional matrix completion problem.
We derive a row- and column-wise matrix U-statistics type loss function, with the nuclear norm for regularization.
A singular value proximal gradient algorithm is developed to solve the proposed optimization problem.
arXiv Detail & Related papers (2025-04-05T01:41:53Z)
- The Structural Complexity of Matrix-Vector Multiplication [0.10485739694839669]
We show that the matrix-vector multiplication problem can be solved with $\tilde{O}(n^2)$ preprocessing and $\tilde{O}(n^{2-1/d})$ query time.
Our results yield the first non-trivial upper bounds for many applications.
arXiv Detail & Related papers (2025-02-28T17:11:36Z)
- Non-convex matrix sensing: Breaking the quadratic rank barrier in the sample complexity [11.412228884390784]
We study the problem of reconstructing a low-rank matrix from a few measurements.
We show that factorized gradient descent with spectral initialization converges to the ground truth with a number of samples that breaks the quadratic dependence on the rank.
arXiv Detail & Related papers (2024-08-20T14:09:28Z)
- Entrywise error bounds for low-rank approximations of kernel matrices [55.524284152242096]
We derive entrywise error bounds for low-rank approximations of kernel matrices obtained using the truncated eigen-decomposition.
A key technical innovation is a delocalisation result for the eigenvectors of the kernel matrix corresponding to small eigenvalues.
We validate our theory with an empirical study of a collection of synthetic and real-world datasets; a toy numerical illustration follows below.
arXiv Detail & Related papers (2024-05-23T12:26:25Z)
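A minimal sketch of the objects in the entry above (the kernel, bandwidth, and rank are arbitrary choices, not the paper's experiments): form a kernel matrix, truncate its eigen-decomposition, and report the entrywise error alongside the classical spectral-norm error.

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(size=200)
    K = np.exp(-(x[:, None] - x[None, :]) ** 2)  # Gaussian kernel matrix (PSD)

    vals, vecs = np.linalg.eigh(K)               # eigenvalues in ascending order
    r = 10
    Kr = (vecs[:, -r:] * vals[-r:]) @ vecs[:, -r:].T  # truncated eigen-decomposition

    print("entrywise error:", np.max(np.abs(K - Kr)))    # the quantity bounded in the paper
    print("spectral error:", np.linalg.norm(K - Kr, 2))  # the usual comparison point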
- The Inductive Bias of Flatness Regularization for Deep Matrix Factorization [58.851514333119255]
This work takes the first step toward understanding the inductive bias of minimum-trace-of-Hessian solutions in deep linear networks.
We show that for all depths greater than one, under the standard Restricted Isometry Property (RIP) on the measurements, minimizing the trace of the Hessian is approximately equivalent to minimizing the Schatten 1-norm of the corresponding end-to-end matrix parameters.
arXiv Detail & Related papers (2023-06-22T23:14:57Z)
- Optimal Low-Rank Matrix Completion: Semidefinite Relaxations and
Eigenvector Disjunctions [6.537257913467247]
Low-rank matrix completion consists of finding a matrix of minimal complexity that recovers a given set of observations as accurately as possible.
New convex relaxations decrease the optimality gap by orders of magnitude compared to existing methods.
arXiv Detail & Related papers (2023-05-20T22:04:34Z)
- Semi-Supervised Subspace Clustering via Tensor Low-Rank Representation [64.49871502193477]
We propose a novel semi-supervised subspace clustering method, which is able to simultaneously augment the initial supervisory information and construct a discriminative affinity matrix.
Comprehensive experimental results on six commonly-used benchmark datasets demonstrate the superiority of our method over state-of-the-art methods.
arXiv Detail & Related papers (2022-05-21T01:47:17Z)
- Non-PSD Matrix Sketching with Applications to Regression and
Optimization [56.730993511802865]
We present dimensionality reduction methods for non-PSD and "square-root" matrices.
We show how these techniques can be used for multiple downstream tasks.
arXiv Detail & Related papers (2021-06-16T04:07:48Z)
- Robust Low-rank Matrix Completion via an Alternating Manifold Proximal
Gradient Continuation Method [47.80060761046752]
Robust low-rank matrix completion (RMC) has been studied extensively for computer vision, signal processing and machine learning applications.
This problem aims to decompose a partially observed matrix into the superposition of a low-rank matrix and a sparse matrix, where the sparse matrix captures the grossly corrupted entries of the matrix.
A widely used approach to tackle RMC is to consider a convex formulation, which minimizes the nuclear norm of the low-rank matrix (to promote low-rankness) and the $\ell_1$ norm of the sparse matrix (to promote sparsity); this program is written out below.
In this paper, motivated by some recent works on low-rank...
arXiv Detail & Related papers (2020-08-18T04:46:22Z)
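The convex RMC program described in the entry above can be written compactly as follows (a standard form; the notation is ours, with $P_\Omega$ the projection onto the observed entries of $M$ and $\lambda > 0$ a trade-off parameter):

    $$\min_{L,\,S} \; \|L\|_* + \lambda \|S\|_1 \quad \text{subject to} \quad P_\Omega(L + S) = P_\Omega(M),$$

where the nuclear norm $\|L\|_*$ promotes low rank in $L$ and the $\ell_1$ norm promotes sparsity in $S$.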
- Compressed sensing of low-rank plus sparse matrices [3.8073142980733]
This manuscript develops similar guarantees, showing that $m \times n$ matrices that can be expressed as the sum of a rank-$r$ matrix and an $s$-sparse matrix can be recovered by computationally tractable methods.
Results are shown for synthetic problems, dynamic-foreground/static-background separation, multispectral imaging, and Robust PCA.
arXiv Detail & Related papers (2020-07-18T15:36:11Z)
- Accelerating Ill-Conditioned Low-Rank Matrix Estimation via Scaled
Gradient Descent [34.0533596121548]
Low-rank matrix estimation is a canonical problem that finds numerous applications in signal processing, machine learning and imaging science.
We show that ScaledGD achieves the best of both worlds: it converges at a rate independent of the condition number of the low-rank matrix, while retaining the low per-iteration cost of gradient descent; the update is sketched below.
Our analysis is also applicable to general loss functions that are restricted strongly convex and smooth over low-rank matrices.
arXiv Detail & Related papers (2020-05-18T17:17:16Z)
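For concreteness, here is a minimal sketch of the ScaledGD update on the factorized least-squares objective $f(L,R) = \frac{1}{2}\|LR^\top - M\|_F^2$ with full observations. The preconditioners $(R^\top R)^{-1}$ and $(L^\top L)^{-1}$ are the "scaling"; the step size, problem sizes, and the random initialization (used here instead of the paper's spectral initialization) are illustrative choices:

    import numpy as np

    def scaled_gd(M, r, eta=0.5, iters=300, seed=0):
        # ScaledGD for min 0.5 * ||L R^T - M||_F^2; random initialization for
        # brevity (the paper's analysis uses spectral initialization).
        rng = np.random.default_rng(seed)
        L = rng.normal(size=(M.shape[0], r)) / np.sqrt(r)
        R = rng.normal(size=(M.shape[1], r)) / np.sqrt(r)
        for _ in range(iters):
            E = L @ R.T - M  # residual
            # Scaled (preconditioned) gradient steps: the scaling makes the
            # local convergence rate independent of the condition number of M.
            L_new = L - eta * E @ R @ np.linalg.inv(R.T @ R)
            R = R - eta * E.T @ L @ np.linalg.inv(L.T @ L)
            L = L_new
        return L, R

    # Ill-conditioned rank-2 test matrix (condition number 100).
    rng = np.random.default_rng(1)
    U = np.linalg.qr(rng.normal(size=(50, 2)))[0]
    V = np.linalg.qr(rng.normal(size=(40, 2)))[0]
    M = U @ np.diag([100.0, 1.0]) @ V.T
    L, R = scaled_gd(M, r=2)
    print(np.linalg.norm(L @ R.T - M) / np.linalg.norm(M))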
- Relative Error Bound Analysis for Nuclear Norm Regularized Matrix Completion [101.83262280224729]
We develop a relative error bound for nuclear norm regularized matrix completion.
We derive a relative upper bound for recovering the best low-rank approximation of the unknown matrix.
arXiv Detail & Related papers (2015-04-26T13:12:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.