Semidefinite Programming versus Burer-Monteiro Factorization for Matrix Sensing
- URL: http://arxiv.org/abs/2208.07469v1
- Date: Mon, 15 Aug 2022 23:21:16 GMT
- Title: Semidefinite Programming versus Burer-Monteiro Factorization for Matrix Sensing
- Authors: Baturalp Yalcin, Ziye Ma, Javad Lavaei, Somayeh Sojoudi
- Abstract summary: Two main approaches for solving matrix sensing are based on semidefinite programming (SDP) and Burer-Monteiro (B-M) factorization.
In this paper, we shed light on some major differences between these two methods.
- Score: 17.149804263692452
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Many fundamental low-rank optimization problems, such as matrix completion,
phase synchronization/retrieval, power system state estimation, and robust PCA,
can be formulated as the matrix sensing problem. Two main approaches for
solving matrix sensing are based on semidefinite programming (SDP) and
Burer-Monteiro (B-M) factorization. The SDP method suffers from high
computational and space complexities, whereas the B-M method may return a
spurious solution due to the non-convexity of the problem. The existing
theoretical guarantees for the success of these methods have led to similar
conservative conditions, which may wrongly imply that these methods have
comparable performance. In this paper, we shed light on some major differences
between these two methods. First, we present a class of structured matrix
completion problems for which the B-M method fails with an overwhelming
probability, while the SDP method works correctly. Second, we identify a class
of highly sparse matrix completion problems for which the B-M method works and
the SDP method fails. Third, we prove that although the B-M method exhibits the
same performance independent of the rank of the unknown solution, the success
of the SDP method is correlated to the rank of the solution and improves as the
rank increases. Unlike the existing literature that has mainly focused on those
instances of matrix sensing for which both SDP and B-M work, this paper offers
the first result on the unique merit of each method over the alternative
approach.
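As a concrete illustration of the two approaches, the following is a minimal sketch, not the paper's experimental setup, contrasting them on a small synthetic PSD matrix-completion instance: the Burer-Monteiro route factors X = U U^T and runs plain gradient descent, while the SDP route solves the standard trace-minimization relaxation, here with CVXPY. The problem sizes, sampling rate, step size, and the choice of CVXPY are illustrative assumptions, and neither method is guaranteed to recover the ground truth in general, which is exactly the distinction the paper studies.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic PSD matrix-completion instance (sizes, sampling rate, and step
# size are illustrative choices, not the paper's setup).
n, r = 20, 1
u_star = rng.standard_normal((n, r))
u_star /= np.linalg.norm(u_star)
M = u_star @ u_star.T                          # ground truth, rank r, PSD
mask = rng.random((n, n)) < 0.8
mask = np.triu(mask) | np.triu(mask).T         # symmetric observation pattern
W = mask.astype(float)

# --- Burer-Monteiro factorization: X = U U^T, plain gradient descent ---
U = 0.1 * rng.standard_normal((n, r))          # small random initialization
eta = 0.1
for _ in range(3000):
    R = W * (U @ U.T - M)                      # residual on observed entries
    U -= eta * 2 * (R + R.T) @ U               # gradient of ||W o (U U^T - M)||_F^2
print("B-M relative error:", np.linalg.norm(U @ U.T - M) / np.linalg.norm(M))

# --- SDP route: trace-minimization relaxation, solved here with CVXPY ---
# (CVXPY is an assumed dependency; any SDP solver would do.)
try:
    import cvxpy as cp

    X = cp.Variable((n, n), PSD=True)
    constraints = [cp.multiply(W, X) == W * M]  # match observed entries
    cp.Problem(cp.Minimize(cp.trace(X)), constraints).solve()
    print("SDP relative error:", np.linalg.norm(X.value - M) / np.linalg.norm(M))
except ImportError:
    print("cvxpy not installed; skipping the SDP comparison")
```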
Related papers
- Tailed Low-Rank Matrix Factorization for Similarity Matrix Completion [14.542166904874147]
The similarity matrix serves as a fundamental tool at the core of numerous machine learning tasks.
To address incomplete similarity matrices, Similarity Matrix Completion (SMC) methods have been proposed, but they suffer from high computational complexity.
We present two novel, scalable, and effective algorithms, which exploit the PSD property to guide the estimation process and incorporate a low-rank regularizer to ensure a low-rank solution.
arXiv Detail & Related papers (2024-09-29T04:27:23Z) - Matrix Completion with Convex Optimization and Column Subset Selection [0.0]
We present two algorithms that implement our Columns Selected Matrix Completion (CSMC) method.
To study the influence of matrix size, rank, and the proportion of missing elements on the solution quality and computation time, we performed experiments on synthetic data.
Our thorough analysis shows that CSMC provides solutions of quality comparable to matrix completion algorithms based on convex optimization.
arXiv Detail & Related papers (2024-03-04T10:36:06Z) - Spectral Entry-wise Matrix Estimation for Low-Rank Reinforcement Learning [53.445068584013896]
We study matrix estimation problems arising in reinforcement learning (RL) with low-rank structure.
In low-rank bandits, the matrix to be recovered specifies the expected arm rewards, and for low-rank Markov Decision Processes (MDPs), it may, for example, characterize the transition kernel of the MDP.
We show that simple spectral-based matrix estimation approaches efficiently recover the singular subspaces of the matrix and exhibit nearly minimal entry-wise error (a generic sketch of such a spectral estimator appears after this list).
arXiv Detail & Related papers (2023-10-10T17:06:41Z) - Provably Efficient UCB-type Algorithms For Learning Predictive State Representations [55.00359893021461]
The sequential decision-making problem is statistically learnable if it admits a low-rank structure modeled by predictive state representations (PSRs).
This paper proposes the first known UCB-type approach for PSRs, featuring a novel bonus term that upper bounds the total variation distance between the estimated and true models.
In contrast to existing approaches for PSRs, our UCB-type algorithms enjoy computational tractability, a guaranteed near-optimal last-iterate policy, and guaranteed model accuracy.
arXiv Detail & Related papers (2023-07-01T18:35:21Z) - Asymmetric Scalable Cross-modal Hashing [51.309905690367835]
Cross-modal hashing is a successful approach to large-scale multimedia retrieval.
We propose a novel Asymmetric Scalable Cross-Modal Hashing (ASCMH) to address these issues.
Our ASCMH outperforms the state-of-the-art cross-modal hashing methods in terms of accuracy and efficiency.
arXiv Detail & Related papers (2022-07-26T04:38:47Z) - Making Linear MDPs Practical via Contrastive Representation Learning [101.75885788118131]
It is common to address the curse of dimensionality in Markov decision processes (MDPs) by exploiting low-rank representations.
We consider an alternative definition of linear MDPs that automatically ensures normalization while allowing efficient representation learning.
We demonstrate superior performance over existing state-of-the-art model-based and model-free algorithms on several benchmarks.
arXiv Detail & Related papers (2022-07-14T18:18:02Z) - Parallel Stochastic Mirror Descent for MDPs [72.75921150912556]
We consider the problem of learning the optimal policy for infinite-horizon Markov decision processes (MDPs).
A variant of Mirror Descent is proposed for convex programming problems with Lipschitz-continuous functionals.
We analyze this algorithm in a general case and obtain an estimate of the convergence rate that does not accumulate errors during the operation of the method.
arXiv Detail & Related papers (2021-02-27T19:28:39Z) - Positive Semidefinite Matrix Factorization: A Connection with Phase Retrieval and Affine Rank Minimization [71.57324258813674]
We show that PSDMF algorithms can be designed based on phase retrieval (PR) and affine rank minimization (ARM) algorithms.
Motivated by this idea, we introduce a new family of PSDMF algorithms based on iterative hard thresholding (IHT).
arXiv Detail & Related papers (2020-07-24T06:10:19Z)
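The spectral entry-wise estimation entry above ends with a forward reference; the sketch below shows the generic spectral recipe it alludes to (zero-fill the unobserved entries, rescale by the sampling rate, and truncate the SVD at the target rank). This is a textbook-style illustration, not that paper's specific estimator or error analysis; the matrix sizes, sampling rate, and the helper name `spectral_estimate` are assumptions made for this example.

```python
import numpy as np

def spectral_estimate(Y_obs, mask, rank, p=None):
    """Generic spectral estimator: zero-fill unobserved entries, rescale by
    the sampling rate, and keep only the top-`rank` singular directions."""
    if p is None:
        p = mask.mean()                        # estimate the sampling rate
    Y = np.where(mask, Y_obs, 0.0) / p         # unbiased proxy for the full matrix
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank, :]

# Small illustrative instance (sizes and sampling rate are arbitrary choices).
rng = np.random.default_rng(1)
n, m, r = 60, 40, 2
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, m))   # rank-r target
mask = rng.random((n, m)) < 0.6                                  # observed entries
M_hat = spectral_estimate(M, mask, rank=r)
print("max entry-wise error:", np.abs(M_hat - M).max())
```

Rescaling by the (estimated) sampling rate makes the zero-filled matrix an unbiased proxy for the full matrix, which is why its leading singular subspaces track those of the ground truth; the paper above analyzes when such one-shot estimates are accurate entry by entry.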
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences arising from its use.