Low-Rank Matrix Estimation From Rank-One Projections by Unlifted Convex
Optimization
- URL: http://arxiv.org/abs/2004.02718v2
- Date: Sun, 10 Jan 2021 19:23:16 GMT
- Title: Low-Rank Matrix Estimation From Rank-One Projections by Unlifted Convex
Optimization
- Authors: Sohail Bahmani and Kiryung Lee
- Abstract summary: We study an estimator with a convex formulation for the recovery of low-rank matrices from rank-one projections.
We show that under both models the estimator succeeds, with high probability, if the number of measurements exceeds $r^2 (d_1+d_2)$ up to some logarithmic factors.
- Score: 9.492903649862761
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study an estimator with a convex formulation for recovery of low-rank
matrices from rank-one projections. Using initial estimates of the factors of
the target $d_1\times d_2$ matrix of rank-$r$, the estimator admits a practical
subgradient method operating in a space of dimension $r(d_1+d_2)$. This
property makes the estimator significantly more scalable than the convex
estimators based on lifting and semidefinite programming. Furthermore, we
present a streamlined analysis for exact recovery under the real Gaussian
measurement model, as well as the partially derandomized measurement model by
using the spherical $t$-design. We show that under both models the estimator
succeeds, with high probability, if the number of measurements exceeds $r^2
(d_1+d_2)$ up to some logarithmic factors. This sample complexity improves on
the existing results for nonconvex iterative algorithms.
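To make the scalability claim concrete, the following is a minimal sketch of a subgradient method operating in the $r(d_1+d_2)$-dimensional factor space, applied to a plain $\ell_1$ residual loss over rank-one measurements $y_i = a_i^\top M b_i$. This is an illustration of the factored iteration only; the paper's actual convex objective, which is built from the initial factor estimates, is not reproduced here, and all names (`A`, `B`, `U0`, `V0`, the step-size schedule) are illustrative.

```python
import numpy as np

def subgradient_factored(A, B, y, U0, V0, steps=500, lr0=0.1):
    """Subgradient descent on f(U, V) = (1/n) * sum_i |a_i^T U V^T b_i - y_i|,
    iterating over the r*(d1+d2) factor entries instead of a lifted
    matrix variable of dimension d1*d2."""
    U, V = U0.copy(), V0.copy()
    n = len(y)
    for t in range(steps):
        AU, BV = A @ U, B @ V                    # n x r each
        res = np.sum(AU * BV, axis=1) - y        # residuals a_i^T U V^T b_i - y_i
        s = np.sign(res)
        gU = A.T @ (s[:, None] * BV) / n         # subgradient w.r.t. U
        gV = B.T @ (s[:, None] * AU) / n         # subgradient w.r.t. V
        lr = lr0 / np.sqrt(t + 1)                # diminishing step size
        U -= lr * gU
        V -= lr * gV
    return U, V

# Illustrative usage with synthetic Gaussian measurements:
rng = np.random.default_rng(0)
d1, d2, r, n = 40, 30, 2, 2000
A, B = rng.normal(size=(n, d1)), rng.normal(size=(n, d2))
Ustar, Vstar = rng.normal(size=(d1, r)), rng.normal(size=(d2, r))
y = np.sum((A @ Ustar) * (B @ Vstar), axis=1)
U0 = Ustar + 0.1 * rng.normal(size=(d1, r))      # rough initial factor estimates
V0 = Vstar + 0.1 * rng.normal(size=(d2, r))
U, V = subgradient_factored(A, B, y, U0, V0)
```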
Related papers
- Computational-Statistical Gaps in Gaussian Single-Index Models [77.1473134227844]
Single-Index Models are high-dimensional regression problems with planted structure.
We show that computationally efficient algorithms, both within the Statistical Query (SQ) and the Low-Degree Polynomial (LDP) frameworks, necessarily require $\Omega(d^{k^\star/2})$ samples.
arXiv Detail & Related papers (2024-03-08T18:50:19Z) - Sparse PCA with Oracle Property [115.72363972222622]
We propose a family of estimators based on the semidefinite relaxation of sparse PCA with novel regularizations.
We prove that another estimator within the family achieves a sharper statistical rate of convergence than the standard semidefinite relaxation of sparse PCA.
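For context, below is a minimal sketch of the standard $\ell_1$-penalized semidefinite relaxation of sparse PCA that this family of estimators builds on; the paper's novel regularizations are not reproduced, and `Sigma` (a sample covariance) and `rho` (a penalty level) are illustrative.

```python
import cvxpy as cp
import numpy as np

def sparse_pca_sdp(Sigma, rho):
    """l1-penalized SDP relaxation of sparse PCA: maximize the explained
    variance <Sigma, X> minus an elementwise l1 penalty, over PSD X with
    unit trace. The leading eigenvector of X recovers a sparse direction."""
    d = Sigma.shape[0]
    X = cp.Variable((d, d), PSD=True)
    obj = cp.Maximize(cp.trace(Sigma @ X) - rho * cp.sum(cp.abs(X)))
    prob = cp.Problem(obj, [cp.trace(X) == 1])
    prob.solve()
    return X.value
```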
arXiv Detail & Related papers (2023-12-28T02:52:54Z) - Probabilistic Unrolling: Scalable, Inverse-Free Maximum Likelihood
Estimation for Latent Gaussian Models [69.22568644711113]
We introduce probabilistic unrolling, a method that combines Monte Carlo sampling with iterative linear solvers to circumvent matrix inversions.
Our theoretical analyses reveal that unrolling and backpropagation through the iterations of the solver can accelerate gradient estimation for maximum likelihood estimation.
In experiments on simulated and real data, we demonstrate that probabilistic unrolling learns latent Gaussian models up to an order of magnitude faster than gradient EM, with minimal losses in model performance.
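The core trick, replacing matrix inversions with Monte Carlo probes fed to an iterative linear solver, can be illustrated in isolation. The sketch below is a generic Hutchinson-plus-conjugate-gradients estimate of $\mathrm{tr}(K^{-1}C)$, a term that appears in gradients of Gaussian log-likelihoods; it is not the authors' full unrolled estimator, and all parameters are illustrative.

```python
import numpy as np
from scipy.sparse.linalg import cg

def trace_Kinv_C(K, C, num_probes=20, maxiter=100, seed=0):
    """Hutchinson estimate of tr(K^{-1} C) for symmetric PD K:
    tr(K^{-1} C) = E[z^T K^{-1} C z] for Rademacher probes z, with each
    solve K x = C z done by conjugate gradients instead of inverting K."""
    rng = np.random.default_rng(seed)
    d = K.shape[0]
    total = 0.0
    for _ in range(num_probes):
        z = rng.choice([-1.0, 1.0], size=d)
        x, _ = cg(K, C @ z, maxiter=maxiter)   # x ~= K^{-1} C z, inverse-free
        total += z @ x
    return total / num_probes
```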
arXiv Detail & Related papers (2023-06-05T21:08:34Z) - Retire: Robust Expectile Regression in High Dimensions [3.9391041278203978]
Penalized quantile and expectile regression methods offer useful tools to detect heteroscedasticity in high-dimensional data.
We propose and study (penalized) robust expectile regression (retire).
We show that the proposed procedure can be efficiently solved by a semismooth Newton coordinate descent algorithm.
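For intuition, the expectile (asymmetric squared) loss at level $\tau$ and its gradient are sketched below; the paper's penalized estimator and its semismooth Newton coordinate descent solver are not reproduced here.

```python
import numpy as np

def expectile_loss(res, tau=0.8):
    """Asymmetric squared loss: weight tau on nonnegative residuals and
    (1 - tau) on negative ones; tau = 0.5 recovers ordinary least squares."""
    w = np.where(res >= 0, tau, 1.0 - tau)
    return np.mean(w * res**2)

def expectile_grad(X, y, beta, tau=0.8):
    """Gradient of the expectile loss of y - X @ beta with respect to beta."""
    res = y - X @ beta
    w = np.where(res >= 0, tau, 1.0 - tau)
    return -2.0 * X.T @ (w * res) / len(y)
```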
arXiv Detail & Related papers (2022-12-11T18:03:12Z) - Sparse high-dimensional linear regression with a partitioned empirical
Bayes ECM algorithm [62.997667081978825]
We propose a computationally efficient and powerful Bayesian approach for sparse high-dimensional linear regression.
Minimal prior assumptions on the parameters are imposed through the use of plug-in empirical Bayes estimates.
The proposed approach is implemented in the R package probe.
arXiv Detail & Related papers (2022-09-16T19:15:50Z) - Efficient Minimax Optimal Estimators For Multivariate Convex Regression [1.583842747998493]
We present the first computationally efficient minimax optimal (up to logarithmic factors) estimators for the tasks of (i) $L$-Lipschitz convex regression and (ii) $\Gamma$-bounded convex regression under polytopal support.
This work is the first to show the existence of efficient minimax optimal estimators for non-Donsker classes whose corresponding Least Squares Estimators are provably minimax sub-optimal.
arXiv Detail & Related papers (2022-05-06T17:04:05Z) - Support Recovery with Stochastic Gates: Theory and Application for
Linear Models [9.644417971611908]
We analyze the problem of simultaneous support recovery and estimation of the coefficient vector ($\beta^*$) in a linear model with independent and identically distributed Normal errors.
We show that, under reasonable conditions on the dimension and sparsity of $\beta^*$, the STG-based estimator converges to the true data-generating coefficient vector and also detects its support set with high probability.
arXiv Detail & Related papers (2021-10-29T17:59:43Z) - Heavy-tailed Streaming Statistical Estimation [58.70341336199497]
We consider the task of heavy-tailed statistical estimation given streaming $p$-dimensional samples.
We design a clipped gradient descent and provide an improved analysis under a more nuanced condition on the noise of gradients.
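A minimal sketch of clipped gradient descent for streaming mean estimation under heavy-tailed samples follows; the clipping radius and step sizes here are illustrative defaults, not the paper's tuned choices.

```python
import numpy as np

def clipped_streaming_mean(stream, lr=0.5, clip=1.0):
    """Streaming mean estimation by clipped gradient descent on the
    squared loss: the per-sample gradient (theta - x) is norm-clipped
    before each step, which tames heavy-tailed samples."""
    theta = None
    for t, x in enumerate(stream, start=1):
        x = np.asarray(x, dtype=float)
        if theta is None:
            theta = np.zeros_like(x)
        g = theta - x                        # gradient of 0.5 * ||theta - x||^2
        norm = np.linalg.norm(g)
        if norm > clip:
            g *= clip / norm                 # project gradient onto the clip ball
        theta = theta - (lr / np.sqrt(t)) * g
    return theta
```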
arXiv Detail & Related papers (2021-08-25T21:30:27Z) - Robust Geodesic Regression [6.827783641211451]
We use M-type estimators, including the $L_1$, Huber and Tukey biweight estimators, to perform robust geodesic regression.
Results from numerical examples, including analysis of real neuroimaging data, demonstrate the promising empirical properties of the proposed approach.
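The three loss functions named above are sketched in their standard Euclidean forms; the paper applies them to geodesic residuals on a manifold, which this sketch does not implement. The tuning constants are the usual 95%-efficiency defaults.

```python
import numpy as np

def rho_l1(r):
    """Absolute loss: linear everywhere, so outliers have bounded influence."""
    return np.abs(r)

def rho_huber(r, c=1.345):
    """Huber loss: quadratic near zero, linear in the tails."""
    a = np.abs(r)
    return np.where(a <= c, 0.5 * r**2, c * a - 0.5 * c**2)

def rho_tukey(r, c=4.685):
    """Tukey biweight: bounded, so gross outliers get zero influence."""
    a = np.minimum(np.abs(r) / c, 1.0)
    return (c**2 / 6.0) * (1.0 - (1.0 - a**2) ** 3)
```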
arXiv Detail & Related papers (2020-07-09T02:41:32Z) - Nonparametric Estimation of the Fisher Information and Its Applications [82.00720226775964]
This paper considers the problem of estimation of the Fisher information for location from a random sample of size $n$.
An estimator proposed by Bhattacharya is revisited and improved convergence rates are derived.
A new estimator, termed a clipped estimator, is proposed.
arXiv Detail & Related papers (2020-05-07T17:21:56Z) - Communication-Efficient Distributed Estimator for Generalized Linear
Models with a Diverging Number of Covariates [7.427903819459701]
A novel method is proposed to obtain an asymptotically efficient estimator for large-scale distributed data by two rounds of communication.
In this method, the assumption on the number of servers is relaxed, making it practical for real-world applications; a generic sketch of such a scheme follows.
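Below is a generic sketch of a two-round communication scheme for a GLM (logistic regression here): round one broadcasts an initial estimate to the servers, round two aggregates local scores and information matrices for a single global Newton update. This illustrates the general idea only, not the authors' exact estimator; `shards` and `beta_init` are illustrative.

```python
import numpy as np

def two_round_distributed_logistic(shards, beta_init):
    """One aggregated Newton step for logistic regression across shards:
    each server computes its local gradient and Hessian at beta_init,
    and only these summaries are communicated back."""
    def sigmoid(t):
        return 1.0 / (1.0 + np.exp(-t))

    p = beta_init.size
    grad = np.zeros(p)
    hess = np.zeros((p, p))
    for X, y in shards:                      # each shard stays on its server
        mu = sigmoid(X @ beta_init)
        grad += X.T @ (mu - y)               # local score contribution
        W = mu * (1.0 - mu)
        hess += X.T @ (X * W[:, None])       # local information contribution
    return beta_init - np.linalg.solve(hess, grad)
```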
arXiv Detail & Related papers (2020-01-17T08:51:11Z)