On Application of Block Kaczmarz Methods in Matrix Factorization
- URL: http://arxiv.org/abs/2010.10635v1
- Date: Tue, 20 Oct 2020 21:29:50 GMT
- Title: On Application of Block Kaczmarz Methods in Matrix Factorization
- Authors: Edwin Chau, Jamie Haddock
- Abstract summary: We discuss a block Kaczmarz solver that replaces the least-squares subroutine in the common alternating scheme for matrix factorization.
We find block sizes that produce a solution comparable to that of the least-squares solver for only a fraction of the runtime and working memory requirement.
- Score: 2.335152769484957
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Matrix factorization techniques compute low-rank product approximations of
high dimensional data matrices and as a result, are often employed in
recommender systems and collaborative filtering applications. However, many
algorithms for this task utilize an exact least-squares solver whose
computation is time consuming and memory-expensive. In this paper we discuss
and test a block Kaczmarz solver that replaces the least-squares subroutine in
the common alternating scheme for matrix factorization. This variant trades a
small increase in factorization error for significantly faster algorithmic
performance. In doing so we find block sizes that produce a solution comparable
to that of the least-squares solver for only a fraction of the runtime and
working memory requirement.
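As a concrete illustration, here is a minimal sketch of the alternating scheme with a block Kaczmarz inner solver, assuming a NumPy environment; the function names, block size, and iteration counts are illustrative choices, not the authors' exact algorithm.

```python
import numpy as np

def block_kaczmarz_solve(A, B, X, block_size, n_iters, rng):
    """Approximately solve min_X ||A X - B||_F with block Kaczmarz.

    Each iteration samples a block of rows of A and corrects X using
    that block's pseudoinverse, so only a small submatrix ever needs
    to be factored or held in working memory.
    """
    m = A.shape[0]
    for _ in range(n_iters):
        rows = rng.choice(m, size=min(block_size, m), replace=False)
        A_blk, B_blk = A[rows], B[rows]
        X = X + np.linalg.pinv(A_blk) @ (B_blk - A_blk @ X)
    return X

def alternating_factorization(M, rank, block_size=32, outer_iters=50,
                              inner_iters=10, seed=0):
    """Compute M ~ U @ V, replacing each exact least-squares step of
    the alternating scheme with block Kaczmarz iterations."""
    rng = np.random.default_rng(seed)
    m, n = M.shape
    U = rng.standard_normal((m, rank))
    V = rng.standard_normal((rank, n))
    for _ in range(outer_iters):
        # Fix U, update V: rows of U index the sampled equations.
        V = block_kaczmarz_solve(U, M, V, block_size, inner_iters, rng)
        # Fix V, update U via the transposed problem V.T @ U.T ~ M.T.
        U = block_kaczmarz_solve(V.T, M.T, U.T, block_size, inner_iters, rng).T
    return U, V
```

Each inner step factors only a block_size-by-rank submatrix instead of the full least-squares system, which is the source of the runtime and memory savings the abstract reports; larger blocks approach the exact least-squares update at a higher per-iteration cost.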
Related papers
- An Efficient Algorithm for Clustered Multi-Task Compressive Sensing [60.70532293880842]
Clustered multi-task compressive sensing is a hierarchical model that solves multiple compressive sensing tasks.
The existing inference algorithm for this model is computationally expensive and does not scale well in high dimensions.
We propose a new algorithm that substantially accelerates model inference by avoiding the need to explicitly compute the model's covariance matrices.
arXiv Detail & Related papers (2023-09-30T15:57:14Z)
- Accelerating Cutting-Plane Algorithms via Reinforcement Learning Surrogates [49.84541884653309]
A current standard approach to solving convex discrete optimization problems is the use of cutting-plane algorithms.
Despite the existence of a number of general-purpose cut-generating algorithms, large-scale discrete optimization problems continue to suffer from intractability.
We propose a method for accelerating cutting-plane algorithms via reinforcement learning.
arXiv Detail & Related papers (2023-07-17T20:11:56Z)
- A fast Multiplicative Updates algorithm for Non-negative Matrix Factorization [2.646309221150203]
We propose to improve the Multiplicative Updates algorithm by crafting a tighter upper bound of the Hessian matrix for each alternating subproblem.
Convergence is still ensured, and in practice, on both synthetic and real-world datasets, the proposed fastMU algorithm is often several orders of magnitude faster than the regular Multiplicative Updates algorithm.
arXiv Detail & Related papers (2023-03-31T12:09:36Z)
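For context on the entry above, a minimal sketch of the classical Multiplicative Updates baseline (Lee-Seung) that fastMU accelerates, assuming NumPy; the paper's tighter Hessian bound is not reproduced here.

```python
import numpy as np

def nmf_multiplicative_updates(X, rank, n_iters=200, eps=1e-12, seed=0):
    """Classical multiplicative updates for min ||X - W @ H||_F^2
    subject to W >= 0 and H >= 0 (X must be nonnegative)."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, rank)) + eps
    H = rng.random((rank, n)) + eps
    for _ in range(n_iters):
        # Multiplying by the ratio of the gradient's negative part to
        # its positive part decreases the objective while keeping
        # every entry nonnegative.
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H
```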
- Efficient distributed representations with linear-time attention scores normalization [3.8673630752805437]
We propose a linear-time approximation of the attention score normalization constants for embedding vectors with bounded norms.
The accuracy of our estimation formula surpasses that of competing kernel methods, in some cases by orders of magnitude.
The proposed algorithm is highly interpretable and easily adapted to an arbitrary embedding problem.
arXiv Detail & Related papers (2023-03-30T15:48:26Z)
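The exact normalization constants in the entry above are Z_i = sum_j exp(q_i . k_j), which costs quadratic time in the number of vectors; the paper's own formula is not given in this summary, so the sketch below uses a well-known stand-in, a Performer-style positive random-feature estimate, to show what a linear-time approximation of these constants looks like in NumPy.

```python
import numpy as np

def normalizers_exact(Q, K):
    """Exact constants Z_i = sum_j exp(q_i . k_j); O(n^2 d) time."""
    return np.exp(Q @ K.T).sum(axis=1)

def normalizers_random_features(Q, K, n_features=256, seed=0):
    """Unbiased linear-time estimate via positive random features:
    E[phi(q) . phi(k)] = exp(q . k) for phi(x) = exp(W x - ||x||^2 / 2)."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((n_features, Q.shape[1]))
    phi_Q = np.exp(Q @ W.T - 0.5 * (Q ** 2).sum(1, keepdims=True))
    phi_K = np.exp(K @ W.T - 0.5 * (K ** 2).sum(1, keepdims=True))
    # Summing the key features once makes the whole estimate linear in n.
    return (phi_Q @ phi_K.sum(axis=0)) / n_features
```

The variance of such estimators grows with the norms of the embeddings, which is consistent with the bounded-norm assumption in the summary.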
- Sparse resultant based minimal solvers in computer vision and their connection with the action matrix [17.31412310131552]
We show that for some camera geometry problems our sparse resultant-based method leads to smaller and more stable solvers than the state-of-the-art Gröbner basis-based solvers.
It provides a competitive alternative to popular Gröbner basis-based methods for minimal problems in computer vision.
arXiv Detail & Related papers (2023-01-16T14:25:19Z)
- Asymmetric Scalable Cross-modal Hashing [51.309905690367835]
Cross-modal hashing is a successful approach to large-scale multimedia retrieval.
We propose a novel Asymmetric Scalable Cross-Modal Hashing (ASCMH) method to address the limitations of existing approaches.
Our ASCMH outperforms the state-of-the-art cross-modal hashing methods in terms of accuracy and efficiency.
arXiv Detail & Related papers (2022-07-26T04:38:47Z)
- High-Dimensional Sparse Bayesian Learning without Covariance Matrices [66.60078365202867]
We introduce a new inference scheme that avoids explicit construction of the covariance matrix.
Our approach couples a little-known diagonal estimation result from numerical linear algebra with the conjugate gradient algorithm.
On several simulations, our method scales better than existing approaches in computation time and memory.
arXiv Detail & Related papers (2022-02-25T16:35:26Z)
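A minimal sketch of that generic combination for the entry above, assuming SciPy; `matvec` stands for a user-supplied product with the (never materialized) matrix A whose inverse diagonal is needed, and the probe count is an illustrative choice rather than the paper's setting.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def estimate_inverse_diagonal(matvec, n, n_probes=64, seed=0):
    """Estimate diag(A^{-1}) without forming A or A^{-1}.

    Combines the Rademacher-probe diagonal estimator (Bekas et al.)
    with conjugate gradient solves of A z = v, so A is only ever
    touched through matrix-vector products.
    """
    rng = np.random.default_rng(seed)
    A = LinearOperator((n, n), matvec=matvec)
    num = np.zeros(n)
    den = np.zeros(n)
    for _ in range(n_probes):
        v = rng.choice([-1.0, 1.0], size=n)  # Rademacher probe vector
        z, _ = cg(A, v)                      # z is approximately A^{-1} v
        num += v * z
        den += v * v
    return num / den
```

Conjugate gradients requires A to be symmetric positive definite, which holds for the covariance-like matrices that arise in sparse Bayesian learning.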
- Fast Projected Newton-like Method for Precision Matrix Estimation under Total Positivity [15.023842222803058]
Current algorithms are designed using the block coordinate descent method or the proximal point algorithm.
We propose a novel algorithm based on the two-metric projection method, incorporating a carefully designed search direction and variable partitioning scheme.
Experimental results on synthetic and real-world datasets demonstrate that our proposed algorithm provides a significant improvement in computational efficiency compared to the state-of-the-art methods.
arXiv Detail & Related papers (2021-12-03T14:39:10Z)
- Solving weakly supervised regression problem using low-rank manifold regularization [77.34726150561087]
We solve a weakly supervised regression problem.
Under "weakly" we understand that for some training points the labels are known, for some unknown, and for others uncertain due to the presence of random noise or other reasons such as lack of resources.
In the numerical section, we applied the suggested method to artificial and real datasets using Monte-Carlo modeling.
arXiv Detail & Related papers (2021-04-13T23:21:01Z)
- Learned Block Iterative Shrinkage Thresholding Algorithm for Photothermal Super Resolution Imaging [52.42007686600479]
We propose a learned block-sparse optimization approach using an iterative algorithm unfolded into a deep neural network.
We show the benefits of using a learned block iterative shrinkage thresholding algorithm that is able to learn the choice of regularization parameters.
arXiv Detail & Related papers (2020-12-07T09:27:16Z)
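To make the entry above concrete, a minimal sketch of the plain (non-learned) block iterative shrinkage-thresholding iteration that such a network unfolds, in NumPy; in the learned variant each unrolled layer would carry its own trained step size and threshold.

```python
import numpy as np

def block_soft_threshold(x, blocks, thresh):
    """Proximal operator of the group-sparsity penalty sum_g ||x_g||_2:
    each block is shrunk toward zero and dropped entirely if small."""
    out = np.zeros_like(x)
    for g in blocks:
        norm = np.linalg.norm(x[g])
        if norm > thresh:
            out[g] = (1.0 - thresh / norm) * x[g]
    return out

def block_ista(A, y, blocks, lam, n_iters=200):
    """Block ISTA for min_x 0.5 * ||A x - y||^2 + lam * sum_g ||x_g||_2."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1 / Lipschitz const. of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        grad = A.T @ (A @ x - y)  # gradient of the data-fidelity term
        x = block_soft_threshold(x - step * grad, blocks, step * lam)
    return x
```

Unrolling a fixed number of these iterations into network layers and training `step` and `lam` per layer yields the learned variant described above.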