Lifelong Matrix Completion with Sparsity-Number
- URL: http://arxiv.org/abs/2203.07637v2
- Date: Wed, 16 Mar 2022 00:51:44 GMT
- Title: Lifelong Matrix Completion with Sparsity-Number
- Authors: Ilqar Ramazanli
- Abstract summary: The matrix completion problem has been studied under various adaptive and passive settings.
In this paper, we use the idea of the sparsity-number and propose a single-phase column space recovery algorithm.
We show that these methods are as efficient as the multi-phase matrix recovery algorithm.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The matrix completion problem has previously been studied under various
adaptive and passive settings. Researchers have proposed passive, two-phase,
and single-phase algorithms using the coherence parameter, and a multi-phase
algorithm using the sparsity-number. It has been shown that the method using
the sparsity-number reaches the theoretical lower bounds under many conditions.
However, the aforementioned method runs in many phases throughout the matrix
completion process, and therefore makes a much more informative decision at
each stage; hence, it is natural that it outperforms previous algorithms. In
this paper, we use the idea of the sparsity-number and propose a
single-phase column space recovery algorithm, which can be extended to a
two-phase exact matrix completion algorithm. Moreover, we show that these
methods are as efficient as the multi-phase matrix recovery algorithm. We
provide experimental evidence to illustrate the performance of our algorithms.
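The column-space recovery idea in the abstract can be illustrated with a generic sketch of adaptive low-rank recovery (not the paper's exact algorithm or its sparsity-number test; the function name, probe count, and residual threshold below are all illustrative): probe a few entries of each column, and fully observe a column only when the probe is not explained by the basis recovered so far.

```python
import numpy as np

rng = np.random.default_rng(0)

def adaptive_column_space_recovery(M, rank_budget, probes_per_col):
    """Generic sketch of adaptive column-space recovery for a low-rank matrix.

    For each column we observe a few random entries; if the subsampled
    column is not explained by the current basis restricted to those
    rows, the full column is observed and added to the basis.
    """
    n_rows, n_cols = M.shape
    basis = np.zeros((n_rows, 0))        # recovered column space so far
    observed_full = 0                    # number of fully observed columns
    for j in range(n_cols):
        omega = rng.choice(n_rows, size=probes_per_col, replace=False)
        col_probe = M[omega, j]
        if basis.shape[1] > 0:
            sub = basis[omega, :]        # basis restricted to probed rows
            coef, *_ = np.linalg.lstsq(sub, col_probe, rcond=None)
            residual = np.linalg.norm(col_probe - sub @ coef)
        else:
            residual = np.linalg.norm(col_probe)
        if residual > 1e-8 and basis.shape[1] < rank_budget:
            basis = np.column_stack([basis, M[:, j]])  # observe the full column
            observed_full += 1
    return basis, observed_full

# Rank-2 test matrix: only 2 of the 30 columns need full observation.
A = rng.standard_normal((50, 2))
B = rng.standard_normal((2, 30))
M = A @ B
U, k = adaptive_column_space_recovery(M, rank_budget=5, probes_per_col=6)
print(U.shape[1], k)  # basis dimension equals the true rank with high probability
```

Once the column space is recovered, each remaining column can be completed from a few observed entries by least squares against the basis, which is what makes the two-phase extension to exact completion natural.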
Related papers
- A fixed-point algorithm for matrix projections with applications in quantum information [7.988085110283119]
We show that our algorithm converges exponentially fast to the optimal solution in the number of iterations.
We discuss several applications of our algorithm in quantum resource theories and quantum Shannon theory.
arXiv Detail & Related papers (2023-12-22T11:16:11Z) - Efficient Numerical Algorithm for Large-Scale Damped Natural Gradient Descent [7.368877979221163]
We propose a new algorithm for efficiently solving the damped Fisher matrix in large-scale scenarios where the number of parameters significantly exceeds the number of available samples.
Our algorithm is based on Cholesky decomposition and is generally applicable. Benchmark results show that the algorithm is significantly faster than existing methods.
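The blurb above does not spell out the construction, but one standard way to exploit "parameters far exceed samples" with a Cholesky-based solve is the Woodbury identity, which reduces the damped p x p system to a small SPD n x n system. The sketch below illustrates that generic idea under the assumption that the Fisher matrix is J.T @ J / n for a per-sample gradient matrix J; it is not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def damped_natural_gradient(J, g, lam):
    """Solve (J.T @ J / n + lam * I) x = g when p >> n.

    The Woodbury identity moves the inversion to the small n x n
    matrix n*lam*I + J @ J.T, which is SPD and factored by Cholesky.
    """
    n = J.shape[0]
    K = n * lam * np.eye(n) + J @ J.T                    # small SPD system
    L = np.linalg.cholesky(K)                            # K = L @ L.T
    y = np.linalg.solve(L.T, np.linalg.solve(L, J @ g))  # y = K^{-1} (J g)
    return (g - J.T @ y) / lam

n, p = 20, 500                        # far more parameters than samples
J = rng.standard_normal((n, p))
g = rng.standard_normal(p)
lam = 0.1
x = damped_natural_gradient(J, g, lam)
x_direct = np.linalg.solve(J.T @ J / n + lam * np.eye(p), g)
print(np.allclose(x, x_direct))
```

The cost is dominated by the n x n Cholesky factorization rather than a p x p solve, which is the regime the paper targets.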
arXiv Detail & Related papers (2023-10-26T16:46:13Z) - Accelerating Cutting-Plane Algorithms via Reinforcement Learning Surrogates [49.84541884653309]
A current standard approach to solving convex discrete optimization problems is the use of cutting-plane algorithms.
Despite the existence of a number of general-purpose cut-generating algorithms, large-scale discrete optimization problems continue to suffer from intractability.
We propose a method for accelerating cutting-plane algorithms via reinforcement learning.
arXiv Detail & Related papers (2023-07-17T20:11:56Z) - Learning the Positions in CountSketch [49.57951567374372]
We consider sketching algorithms which first compress data by multiplication with a random sketch matrix, and then apply the sketch to quickly solve an optimization problem.
In this work, we propose the first learning-based algorithms that also optimize the locations of the non-zero entries.
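A plain (unlearned) CountSketch places exactly one random signed nonzero per column; the learned variants described above optimize those positions instead of drawing them at random. A minimal sketch of the classical construction, applied to sketched least squares:

```python
import numpy as np

rng = np.random.default_rng(0)

def countsketch_matrix(m, n):
    """Dense representation of a CountSketch matrix S (m x n):
    each column has exactly one nonzero entry, a random sign +-1,
    placed at a random row (the "position" that learned variants optimize).
    """
    S = np.zeros((m, n))
    rows = rng.integers(0, m, size=n)            # one hashed row per column
    signs = rng.choice([-1.0, 1.0], size=n)
    S[rows, np.arange(n)] = signs
    return S

n, d, m = 2000, 10, 100
A = rng.standard_normal((n, d))
b = rng.standard_normal(n)
S = countsketch_matrix(m, n)
# Solve the sketched least-squares problem min ||S(Ax - b)|| instead of
# the full n-row problem; the solution approximates the exact one.
x_sketch, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)
x_exact, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.linalg.norm(x_sketch - x_exact))
```

Because S has one nonzero per column, S @ A can be computed in time proportional to the number of nonzeros of A, which is the source of the speed-up.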
arXiv Detail & Related papers (2023-06-11T07:28:35Z) - Survey of Matrix Completion Algorithms [3.42658286826597]
The matrix completion problem has been investigated under many different conditions since Netflix announced the Netflix Prize problem.
We visit some of the matrix completion methods, mainly along the passive and adaptive directions.
arXiv Detail & Related papers (2022-04-01T05:44:47Z) - Fast Differentiable Matrix Square Root and Inverse Square Root [65.67315418971688]
We propose two more efficient variants to compute the differentiable matrix square root and the inverse square root.
For the forward propagation, one method is to use Matrix Taylor Polynomial (MTP), and the other is to use Matrix Padé Approximants (MPA).
A series of numerical tests show that both methods yield considerable speed-up compared with the SVD or the NS iteration.
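The NS (Newton-Schulz) iteration mentioned above as a baseline can be sketched as follows: it computes a matrix square root using only matrix products, which is why it is attractive for differentiable pipelines. This is the textbook coupled iteration for an SPD input, not the paper's MTP/MPA variants; the iteration count is illustrative.

```python
import numpy as np

def newton_schulz_sqrt(A, num_iters=20):
    """Coupled Newton-Schulz iteration for the square root of an SPD matrix.

    Uses only matrix multiplications (no SVD), so every step is cheap on
    accelerators and straightforward to differentiate through.
    """
    norm = np.linalg.norm(A)            # Frobenius norm for pre-scaling
    Y = A / norm                        # eigenvalues now lie in (0, 1]
    Z = np.eye(A.shape[0])
    I = np.eye(A.shape[0])
    for _ in range(num_iters):
        T = 0.5 * (3.0 * I - Z @ Y)
        Y = Y @ T                       # Y -> (A/norm)^{1/2}
        Z = T @ Z                       # Z -> (A/norm)^{-1/2}
    return Y * np.sqrt(norm)            # undo the scaling

rng = np.random.default_rng(0)
B = rng.standard_normal((5, 5))
A = B @ B.T + 5 * np.eye(5)             # symmetric positive definite
S = newton_schulz_sqrt(A)
print(np.allclose(S @ S, A, atol=1e-6))
```

The Frobenius pre-scaling guarantees the spectral radius condition the iteration needs, and convergence is quadratic once the error is small.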
arXiv Detail & Related papers (2022-01-29T10:00:35Z) - Fast Differentiable Matrix Square Root [65.67315418971688]
We propose two more efficient variants to compute the differentiable matrix square root.
For the forward propagation, one method is to use Matrix Taylor Polynomial (MTP), and the other is to use Matrix Padé Approximants (MPA).
arXiv Detail & Related papers (2022-01-21T12:18:06Z) - A Fully Single Loop Algorithm for Bilevel Optimization without Hessian Inverse [121.54116938140754]
We propose a new Hessian-inverse-free Fully Single Loop Algorithm for bilevel optimization problems.
We show that our algorithm converges at a rate of $O(\epsilon^{-2})$.
arXiv Detail & Related papers (2021-12-09T02:27:52Z) - Fast Projected Newton-like Method for Precision Matrix Estimation under Total Positivity [15.023842222803058]
Current algorithms are designed using the block coordinate descent method or the proximal point algorithm.
We propose a novel algorithm based on the two-metric projection method, incorporating a carefully designed search direction and variable partitioning scheme.
Experimental results on synthetic and real-world datasets demonstrate that our proposed algorithm provides a significant improvement in computational efficiency compared to the state-of-the-art methods.
arXiv Detail & Related papers (2021-12-03T14:39:10Z) - Provably Faster Algorithms for Bilevel Optimization [54.83583213812667]
Bilevel optimization has been widely applied in many important machine learning applications.
We propose two new algorithms for bilevel optimization.
We show that both algorithms achieve a complexity of $\mathcal{O}(\epsilon^{-1.5})$, which outperforms all existing algorithms by an order of magnitude.
arXiv Detail & Related papers (2021-06-08T21:05:30Z) - Quantum Algorithms for Prediction Based on Ridge Regression [0.7612218105739107]
We propose a quantum algorithm based on the ridge regression model, which obtains the optimal fitting parameters.
The proposed algorithm has a wide range of applications and can be used as a subroutine of other quantum algorithms.
arXiv Detail & Related papers (2021-04-27T11:03:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.