Optimal Low-Rank Matrix Completion: Semidefinite Relaxations and
Eigenvector Disjunctions
- URL: http://arxiv.org/abs/2305.12292v2
- Date: Fri, 26 Jan 2024 17:34:25 GMT
- Title: Optimal Low-Rank Matrix Completion: Semidefinite Relaxations and
Eigenvector Disjunctions
- Authors: Dimitris Bertsimas, Ryan Cory-Wright, Sean Lo, and Jean Pauphilet
- Abstract summary: Low-rank matrix completion consists of computing a matrix of minimal complexity that recovers a given set of observations as accurately as possible.
New convex relaxations decrease the optimality gap by two orders of magnitude compared to existing methods.
- Score: 6.537257913467247
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Low-rank matrix completion consists of computing a matrix of minimal
complexity that recovers a given set of observations as accurately as possible.
Unfortunately, existing methods for matrix completion are heuristics that,
while highly scalable and often identifying high-quality solutions, do not
possess any optimality guarantees. We reexamine matrix completion with an
optimality-oriented eye. We reformulate these low-rank problems as convex
problems over the non-convex set of projection matrices and implement a
disjunctive branch-and-bound scheme that solves them to certifiable optimality.
Further, we derive a novel and often tight class of convex relaxations by
decomposing a low-rank matrix as a sum of rank-one matrices and incentivizing
that two-by-two minors in each rank-one matrix have determinant zero. In
numerical experiments, our new convex relaxations decrease the optimality gap
by two orders of magnitude compared to existing attempts, and our disjunctive
branch-and-bound scheme solves n x n, rank-r matrix completion problems to
certifiable optimality in hours for n<=150 and r<=5.
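The relaxation above builds on a structural fact: every two-by-two minor of a rank-one matrix has determinant zero. The following numpy sketch (ours, not the authors' code) verifies this identity numerically for a random rank-one matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
u = rng.standard_normal(5)
v = rng.standard_normal(4)
X = np.outer(u, v)  # rank-one matrix X = u v^T

# Every 2x2 minor of a rank-one matrix vanishes:
# X[i,j]*X[k,l] - X[i,l]*X[k,j] = (u_i v_j)(u_k v_l) - (u_i v_l)(u_k v_j) = 0
max_minor = 0.0
for i in range(5):
    for k in range(i + 1, 5):
        for j in range(4):
            for l in range(j + 1, 4):
                minor = X[i, j] * X[k, l] - X[i, l] * X[k, j]
                max_minor = max(max_minor, abs(minor))
print(max_minor)  # numerically zero, up to floating-point roundoff
```

Incentivizing these minors to vanish in each summand is what lets the relaxation approximate a sum of rank-one matrices.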
Related papers
- One-sided Matrix Completion from Two Observations Per Row [95.87811229292056]
We propose a natural algorithm that involves imputing the missing values of the matrix $X^TX$.
We evaluate our algorithm on one-sided recovery of synthetic data and low-coverage genome sequencing.
arXiv Detail & Related papers (2023-06-06T22:35:16Z) - Low-Rank Mirror-Prox for Nonsmooth and Low-Rank Matrix Optimization
Problems [19.930021245647705]
Low-rank and nonsmooth matrix optimization problems capture many fundamental tasks in statistics and machine learning.
In this paper we consider standard convex relaxations for such problems.
We prove that, under a strict complementarity condition and under the relatively mild assumption that the nonsmooth objective can be written as a maximum of smooth functions, approximated variants of two popular mirror-prox methods converge.
arXiv Detail & Related papers (2022-06-23T08:10:54Z) - Semi-Supervised Subspace Clustering via Tensor Low-Rank Representation [64.49871502193477]
We propose a novel semi-supervised subspace clustering method, which is able to simultaneously augment the initial supervisory information and construct a discriminative affinity matrix.
Comprehensive experimental results on six commonly-used benchmark datasets demonstrate the superiority of our method over state-of-the-art methods.
arXiv Detail & Related papers (2022-05-21T01:47:17Z) - Sparse Quadratic Optimisation over the Stiefel Manifold with Application
to Permutation Synchronisation [71.27989298860481]
We address the non-convex optimisation problem of finding a matrix on the Stiefel manifold that maximises a quadratic objective function.
We propose a simple yet effective sparsity-promoting algorithm for finding the dominant eigenspace matrix.
arXiv Detail & Related papers (2021-09-30T19:17:35Z) - Nonlinear matrix recovery using optimization on the Grassmann manifold [18.655422834567577]
We investigate the problem of recovering a partially observed high-rank clustering matrix whose columns obey a nonlinear structure such as a union of subspaces.
We show that the alternating limit converges to a unique point using the Kurdyka-Łojasiewicz property.
arXiv Detail & Related papers (2021-09-13T16:13:13Z) - Mixed-Projection Conic Optimization: A New Paradigm for Modeling Rank
Constraints [3.179831861897336]
We provide a framework for solving low-rank optimization problems to certifiable optimality.
Our framework also provides near-optimal solutions through rounding and local search techniques.
arXiv Detail & Related papers (2020-09-22T08:59:06Z) - Robust Low-rank Matrix Completion via an Alternating Manifold Proximal
Gradient Continuation Method [47.80060761046752]
Robust low-rank matrix completion (RMC) has been studied extensively for computer vision, signal processing and machine learning applications.
This problem aims to decompose a partially observed matrix into the superposition of a low-rank matrix and a sparse matrix, where the sparse matrix captures the grossly corrupted entries of the matrix.
A widely used approach to tackle RMC is to consider a convex formulation, which minimizes the nuclear norm of the low-rank matrix (to promote low-rankness) and the l1 norm of the sparse matrix (to promote sparsity).
In this paper, motivated by some recent works on low-
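The convex formulation just described, minimizing the nuclear norm of the low-rank part plus the l1 norm of the sparse part, can be illustrated with simple alternating proximal steps. The sketch below is a minimal toy in numpy, not the paper's alternating manifold proximal gradient method; all function names and parameters are our own.

```python
import numpy as np

def soft_threshold(A, tau):
    """Entrywise soft-thresholding: prox of tau * ||.||_1."""
    return np.sign(A) * np.maximum(np.abs(A) - tau, 0.0)

def svt(A, tau):
    """Singular value thresholding: prox of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def robust_complete(M_obs, mask, lam=0.1, tau=1.0, iters=200):
    """Illustrative alternating prox steps for
    min ||L||_* + lam * ||S||_1 s.t. L + S matches M_obs on observed
    entries (mask = 1 where observed). Not the paper's algorithm."""
    L = np.zeros_like(M_obs)
    S = np.zeros_like(M_obs)
    for _ in range(iters):
        # step L toward the observed residual, then shrink singular values
        L = svt(L + mask * (M_obs - L - S), tau)
        # step S toward the residual, then shrink entries
        S = soft_threshold(S + mask * (M_obs - L - S), lam)
    return L, S
```

The two prox operators mirror the two regularizers: singular value thresholding promotes low rank, entrywise soft-thresholding promotes sparsity.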
arXiv Detail & Related papers (2020-08-18T04:46:22Z) - A Scalable, Adaptive and Sound Nonconvex Regularizer for Low-rank Matrix
Completion [60.52730146391456]
We propose a new scalable nonconvex low-rank regularizer, the "nuclear Frobenius norm" regularizer, which is adaptive and sound.
It bypasses the computation of singular values and allows fast optimization.
It obtains state-of-the-art recovery performance while being the fastest among existing matrix learning methods.
arXiv Detail & Related papers (2020-08-14T18:47:58Z) - Solving the Robust Matrix Completion Problem via a System of Nonlinear
Equations [28.83358353043287]
We consider the problem of robust matrix completion, which aims to recover a low-rank matrix $L_*$ and a sparse matrix $S_*$ from incomplete observations of their sum $M = L_* + S_* \in \mathbb{R}^{m \times n}$.
The algorithm is highly parallelizable and suitable for large scale problems.
Numerical simulations show that the simple method works as expected and is comparable with state-of-the-art methods.
arXiv Detail & Related papers (2020-03-24T17:28:15Z) - Relative Error Bound Analysis for Nuclear Norm Regularized Matrix Completion [101.83262280224729]
We develop a relative error bound for nuclear norm regularized matrix completion.
We derive a relative upper bound for recovering the best low-rank approximation of the unknown matrix.
arXiv Detail & Related papers (2015-04-26T13:12:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.