Survey of Matrix Completion Algorithms
- URL: http://arxiv.org/abs/2204.01532v2
- Date: Tue, 5 Apr 2022 02:03:47 GMT
- Title: Survey of Matrix Completion Algorithms
- Authors: Jafar Jafarov
- Abstract summary: The matrix completion problem has been investigated under many different conditions since Netflix announced the Netflix Prize problem.
We visit some of the matrix completion methods, mainly along the passive and adaptive directions.
- Score: 3.42658286826597
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The matrix completion problem has been investigated under many
different conditions since Netflix announced the Netflix Prize problem. Much
research has been done in the field once it was discovered that many real-life
datasets can be approximated by a low-rank matrix. Since then, compressed
sensing and adaptive signal detection have gained the attention of many
researchers. In this survey paper we visit some of the matrix completion
methods, mainly along the passive and adaptive directions. First, we discuss
passive matrix completion methods based on convex optimization, and second,
active matrix completion techniques based on adaptive signal detection
methods. Traditionally, many machine learning problems are solved in a passive
environment. However, it has since been observed that adaptive sensing
algorithms often perform more efficiently than their passive counterparts, and
algorithms in this setting have been extensively studied. We therefore present
some of the latest adaptive matrix completion algorithms in this paper
alongside passive methods.
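The passive, convex-optimization route mentioned in the abstract is commonly instantiated as nuclear-norm-regularized completion. Below is a minimal soft-impute-style sketch (iterative singular-value shrinkage on the filled-in matrix); it is a standard passive method used here for illustration, and the data, regularization weight, and iteration count are assumptions, not details from this survey.

```python
import numpy as np

def soft_impute(M_obs, mask, lam=0.1, iters=200):
    """Nuclear-norm-regularized completion: repeatedly fill the missing
    entries with the current estimate, then shrink the singular values."""
    X = np.zeros_like(M_obs)
    for _ in range(iters):
        filled = np.where(mask, M_obs, X)                    # keep observed entries fixed
        U, s, Vt = np.linalg.svd(filled, full_matrices=False)
        X = U @ np.diag(np.maximum(s - lam, 0.0)) @ Vt       # soft-threshold spectrum
    return X

# Rank-1 ground truth with roughly 60% of entries observed at random
rng = np.random.default_rng(0)
M = np.outer(rng.standard_normal(20), rng.standard_normal(20))
mask = rng.random(M.shape) < 0.6
M_hat = soft_impute(M * mask, mask)
rel_err = np.linalg.norm(M_hat - M) / np.linalg.norm(M)
```

Adaptive methods, by contrast, choose which entries to observe next instead of working with a fixed random mask.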
Related papers
- Spectral Entry-wise Matrix Estimation for Low-Rank Reinforcement
Learning [53.445068584013896]
We study matrix estimation problems arising in reinforcement learning (RL) with low-rank structure.
In low-rank bandits, the matrix to be recovered specifies the expected arm rewards, and for low-rank Markov Decision Processes (MDPs), it may for example characterize the transition kernel of the MDP.
We show that simple spectral-based matrix estimation approaches efficiently recover the singular subspaces of the matrix and exhibit nearly-minimal entry-wise error.
arXiv Detail & Related papers (2023-10-10T17:06:41Z) - Learning the Positions in CountSketch [49.57951567374372]
We consider sketching algorithms which first compress data by multiplication with a random sketch matrix, and then apply the sketch to quickly solve an optimization problem.
In this work, we propose the first learning-based algorithms that also optimize the locations of the non-zero entries.
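CountSketch with fixed random positions (the non-learned baseline that learned variants improve upon) can be written in a few lines; the sketch-and-solve least-squares use below is an illustrative application with assumed dimensions.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d, m = 2000, 10, 200          # data rows, columns, sketch size

# CountSketch: each input row is hashed to one bucket with a random sign
buckets = rng.integers(0, m, size=n)
signs = rng.choice([-1.0, 1.0], size=n)

def countsketch(A):
    SA = np.zeros((m,) + A.shape[1:])
    for i in range(n):
        SA[buckets[i]] += signs[i] * A[i]   # one nonzero per input row
    return SA

A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d) + 0.01 * rng.standard_normal(n)

# Sketch-and-solve: the compressed problem approximates the full one
x_sketch, *_ = np.linalg.lstsq(countsketch(A), countsketch(b), rcond=None)
x_exact, *_ = np.linalg.lstsq(A, b, rcond=None)
err = np.linalg.norm(x_sketch - x_exact)
```

Learning-based variants optimize the bucket positions and signs on training data instead of drawing them at random.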
arXiv Detail & Related papers (2023-06-11T07:28:35Z) - Transductive Matrix Completion with Calibration for Multi-Task Learning [3.7660066212240757]
We propose a transductive matrix completion algorithm that incorporates a calibration constraint for the features under the multi-task learning framework.
The proposed algorithm recovers the incomplete feature matrix and target matrix simultaneously.
Several synthetic data experiments are conducted, which show that the proposed algorithm outperforms other existing methods.
arXiv Detail & Related papers (2023-02-20T08:47:23Z) - Adaptive Noisy Matrix Completion [0.0]
We assume that the matrix $\mathbf{M}$ we aim to recover is a low-rank matrix with an additive bounded small-noise component.
We study this problem in an adaptive setting where we continuously estimate an upper bound for the angle between the underlying low-rank subspace and the noise-added subspace.
arXiv Detail & Related papers (2022-03-16T01:20:18Z) - Lifelong Matrix Completion with Sparsity-Number [0.0]
The matrix completion problem has previously been studied under various adaptive and passive settings.
In this paper, we use the idea of sparsity-number and propose a single-phase column space recovery algorithm.
We show that this method is as efficient as multi-phase matrix recovery algorithms.
arXiv Detail & Related papers (2022-03-15T04:27:52Z) - Fast Differentiable Matrix Square Root and Inverse Square Root [65.67315418971688]
We propose two more efficient variants to compute the differentiable matrix square root and the inverse square root.
For the forward propagation, one method uses the Matrix Taylor Polynomial (MTP), and the other uses Matrix Padé Approximants (MPA).
A series of numerical tests show that both methods yield considerable speed-up compared with the SVD or the NS iteration.
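The MTP idea, computing the matrix square root from a truncated Taylor series of (1 - z)^(1/2) after normalizing the matrix, can be sketched as follows. The normalization, polynomial degree, and test matrix are illustrative choices, not the paper's exact scheme.

```python
import numpy as np

def sqrtm_taylor(A, degree=30):
    """Square root of an SPD matrix via a truncated Taylor series:
    sqrt(A) = sqrt(|A|) * (I - Z)^(1/2) with Z = I - A/|A|, |Z| < 1."""
    norm = np.linalg.norm(A)                 # Frobenius norm
    Z = np.eye(A.shape[0]) - A / norm
    term = np.eye(A.shape[0])
    S, c = term.copy(), 1.0
    for k in range(degree):
        c = c * (k - 0.5) / (k + 1)          # coefficients of (1 - z)^(1/2)
        term = term @ Z
        S = S + c * term
    return np.sqrt(norm) * S

rng = np.random.default_rng(3)
B = rng.standard_normal((8, 8))
A = B @ B.T + 8 * np.eye(8)                  # well-conditioned SPD test matrix
R = sqrtm_taylor(A)
rel = np.linalg.norm(R @ R - A) / np.linalg.norm(A)
```

Because it is built from matrix products only, a routine like this is cheap to differentiate through, which is the point of avoiding the SVD.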
arXiv Detail & Related papers (2022-01-29T10:00:35Z) - Fast Differentiable Matrix Square Root [65.67315418971688]
We propose two more efficient variants to compute the differentiable matrix square root.
For the forward propagation, one method uses the Matrix Taylor Polynomial (MTP); the other uses Matrix Padé Approximants (MPA).
arXiv Detail & Related papers (2022-01-21T12:18:06Z) - Fundamental Machine Learning Routines as Quantum Algorithms on a
Superconducting Quantum Computer [0.0]
The Harrow-Hassidim-Lloyd algorithm is intended for solving systems of linear equations on quantum devices.
We present a numerical study of the performance of the algorithm when these caveats are not perfectly matched.
arXiv Detail & Related papers (2021-09-17T15:22:06Z) - Machine Learning for Online Algorithm Selection under Censored Feedback [71.6879432974126]
In online algorithm selection (OAS), instances of an algorithmic problem class are presented to an agent one after another, and the agent has to quickly select a presumably best algorithm from a fixed set of candidate algorithms.
For decision problems such as satisfiability (SAT), quality typically refers to the algorithm's runtime.
In this work, we revisit multi-armed bandit algorithms for OAS and discuss their capability of dealing with the problem.
We adapt them towards runtime-oriented losses, allowing for partially censored data while keeping a space- and time-complexity independent of the time horizon.
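A toy illustration of the setting: UCB-style selection over capped runtimes. The solvers, runtime distributions, timeout, and exploration constant below are all hypothetical; this is a generic censored-bandit sketch, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(5)
timeout = 2.0                         # runtimes above this are censored
true_means = [0.5, 1.2, 3.0]          # hypothetical mean runtimes of 3 solvers

counts = np.zeros(3)
loss_sums = np.zeros(3)
for t in range(1, 501):
    if counts.min() == 0:             # pull each arm once first
        arm = int(counts.argmin())
    else:                             # lower confidence bound on capped runtime
        lcb = loss_sums / counts - np.sqrt(2 * np.log(t) / counts)
        arm = int(np.argmin(lcb))
    runtime = rng.exponential(true_means[arm])
    loss_sums[arm] += min(runtime, timeout)   # censored feedback
    counts[arm] += 1
best = int(counts.argmax())           # the fastest solver dominates the pulls
```

The censoring keeps each round's cost bounded by the timeout while still letting the agent rank the candidate algorithms by runtime.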
arXiv Detail & Related papers (2021-09-13T18:10:52Z) - Optimal Iterative Sketching with the Subsampled Randomized Hadamard
Transform [64.90148466525754]
We study the performance of iterative sketching for least-squares problems.
We show that the convergence rates for Haar and randomized Hadamard matrices are identical, and asymptotically improve upon random projections.
These techniques may be applied to other algorithms that employ randomized dimension reduction.
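Iterative sketching for least squares can be illustrated with a Gaussian sketch standing in for the subsampled randomized Hadamard transform (same algorithmic skeleton, simpler to write down); the dimensions and step count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
n, d, m = 4000, 20, 200
A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)

x = np.zeros(d)
for _ in range(10):                                  # sketched Newton-type steps
    S = rng.standard_normal((m, n)) / np.sqrt(m)     # Gaussian sketch (stand-in for SRHT)
    SA = S @ A
    H = SA.T @ SA                                    # sketched Hessian A^T S^T S A
    g = A.T @ (b - A @ x)                            # exact gradient
    x = x + np.linalg.solve(H, g)

x_exact, *_ = np.linalg.lstsq(A, b, rcond=None)
err = np.linalg.norm(x - x_exact)
```

Refreshing the sketch each iteration keeps the per-step cost low while the iterates contract toward the exact least-squares solution.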
arXiv Detail & Related papers (2020-02-03T16:17:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.