Weighted Low Rank Matrix Approximation and Acceleration
- URL: http://arxiv.org/abs/2109.11057v1
- Date: Wed, 22 Sep 2021 22:03:48 GMT
- Title: Weighted Low Rank Matrix Approximation and Acceleration
- Authors: Elena Tuzhilina, Trevor Hastie
- Abstract summary: Low-rank matrix approximation (LRMA) is one of the central concepts in machine learning.
Low-rank matrix completion (LRMC) solves the LRMA problem when some observations are missing.
We propose an algorithm for solving the weighted problem, as well as two acceleration techniques.
- Score: 0.5177947445379687
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Low-rank matrix approximation is one of the central concepts in machine
learning, with applications in dimension reduction, de-noising, multivariate
statistical methodology, and many more. A recent extension to LRMA is called
low-rank matrix completion (LRMC). It solves the LRMA problem when some
observations are missing and is especially useful for recommender systems. In
this paper, we consider an element-wise weighted generalization of LRMA. The
resulting weighted low-rank matrix approximation (WLRMA) technique therefore
covers LRMC as a special case with binary weights. WLRMA has many applications. For
example, it is an essential component of GLM optimization algorithms, where an
exponential family is used to model the entries of a matrix, and the matrix of
natural parameters admits a low-rank structure. We propose an algorithm for
solving the weighted problem, as well as two acceleration techniques. Further,
we develop a non-SVD modification of the proposed algorithm that is able to
handle extremely high-dimensional data. We compare the performance of all the
methods on a small simulation example as well as a real-data application.
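For intuition, here is a minimal numpy sketch of the standard iterative scheme that WLRMA-style methods build on: repeatedly blend the observed data with the current low-rank estimate using the weights, then re-project onto rank-r matrices via a truncated SVD. The function name, stopping rule, and tolerance are our own illustrative assumptions, not the authors' exact algorithm.

```python
import numpy as np

def wlrma(X, W, rank, n_iter=200, tol=1e-6):
    """Hypothetical sketch of weighted low-rank matrix approximation.

    Approximately minimizes ||sqrt(W) * (X - M)||_F^2 over rank-`rank`
    matrices M. Assumes weights W lie in [0, 1]; binary W in {0, 1}
    recovers low-rank matrix completion.
    """
    M = np.zeros_like(X)
    for _ in range(n_iter):
        # Blend observed data with the current estimate, element-wise.
        Z = W * X + (1.0 - W) * M
        # Project onto the set of rank-`rank` matrices via truncated SVD.
        U, s, Vt = np.linalg.svd(Z, full_matrices=False)
        M_new = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        if np.linalg.norm(M_new - M) <= tol * max(1.0, np.linalg.norm(M)):
            return M_new
        M = M_new
    return M
```

Each iteration calls a full SVD, which is exactly the cost the paper's non-SVD modification is designed to avoid in very high dimensions.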
Related papers
- Spectral Entry-wise Matrix Estimation for Low-Rank Reinforcement Learning [53.445068584013896]
We study matrix estimation problems arising in reinforcement learning (RL) with low-rank structure.
In low-rank bandits, the matrix to be recovered specifies the expected arm rewards, and for low-rank Markov Decision Processes (MDPs), it may, for example, characterize the transition kernel of the MDP.
We show that simple spectral-based matrix estimation approaches efficiently recover the singular subspaces of the matrix and exhibit nearly-minimal entry-wise error.
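As a rough illustration of the generic spectral approach such estimators build on: zero-fill the unobserved entries, rescale by the inverse sampling rate, and read off the top singular subspaces with a truncated SVD. This is a textbook sketch under our own assumptions, not the paper's exact estimator.

```python
import numpy as np

def spectral_estimate(X_obs, mask, rank):
    """Hypothetical spectral estimator for a partially observed matrix.

    X_obs: matrix with unobserved entries set to 0.
    mask:  boolean matrix, True where an entry is observed.
    """
    p_hat = mask.mean()              # estimated entry-sampling rate
    Y = X_obs / p_hat                # unbiased zero-filled estimate
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    # Recover the top singular subspaces and the induced rank-r estimate.
    M_hat = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    return M_hat, U[:, :rank], Vt[:rank].T
```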
arXiv Detail & Related papers (2023-10-10T17:06:41Z)
- An Efficient Algorithm for Clustered Multi-Task Compressive Sensing [60.70532293880842]
Clustered multi-task compressive sensing is a hierarchical model that solves multiple compressive sensing tasks.
The existing inference algorithm for this model is computationally expensive and does not scale well in high dimensions.
We propose a new algorithm that substantially accelerates model inference by avoiding the need to explicitly compute the model's covariance matrices.
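The general pattern of replacing explicit covariance computation with matrix-free linear solves can be illustrated, very loosely, with conjugate gradients; this is a generic sketch under our own assumptions, not the paper's algorithm.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def posterior_mean_matfree(Phi, y, alpha, sigma2):
    """Hypothetical matrix-free posterior-mean solve for a linear model.

    Solves (Phi^T Phi / sigma2 + diag(alpha)) m = Phi^T y / sigma2
    with conjugate gradients, never forming the d x d covariance
    (or precision) matrix explicitly.
    """
    n, d = Phi.shape

    def matvec(v):
        # Apply the precision matrix using only matrix-vector products.
        return Phi.T @ (Phi @ v) / sigma2 + alpha * v

    A = LinearOperator((d, d), matvec=matvec)
    b = Phi.T @ y / sigma2
    m, info = cg(A, b)
    return m
```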
arXiv Detail & Related papers (2023-09-30T15:57:14Z)
- Large-scale gradient-based training of Mixtures of Factor Analyzers [67.21722742907981]
This article contributes both a theoretical analysis and a new method for efficient high-dimensional training of Mixtures of Factor Analyzers (MFA) by gradient descent.
We prove that MFA training and inference/sampling can be performed based on precision matrices, which does not require matrix inversions after training is completed.
Besides the theoretical analysis, we apply MFA to typical image datasets such as SVHN and MNIST, and demonstrate the ability to perform sample generation and outlier detection.
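The precision-matrix trick is closely related to the Woodbury identity, which lets a factor analyzer's covariance (diagonal plus low rank) be inverted by solving only a small l x l system. A minimal sketch in our own notation, not the paper's implementation:

```python
import numpy as np

def fa_precision(Lambda, psi):
    """Invert a factor-analyzer covariance Psi + Lambda @ Lambda.T cheaply.

    Woodbury identity: only an l x l solve is needed (l = number of
    factors) instead of a full d x d matrix inversion.
    """
    d, l = Lambda.shape
    psi_inv = 1.0 / psi                      # inverse of the diagonal Psi
    A = psi_inv[:, None] * Lambda            # Psi^{-1} Lambda, shape (d, l)
    core = np.eye(l) + Lambda.T @ A          # I + Lambda^T Psi^{-1} Lambda
    return np.diag(psi_inv) - A @ np.linalg.solve(core, A.T)
```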
arXiv Detail & Related papers (2023-08-26T06:12:33Z)
- A Majorization-Minimization Gauss-Newton Method for 1-Bit Matrix Completion [15.128477070895055]
We propose a novel method for 1-bit matrix completion called Majorization-Minimization Gauss-Newton (MMGN).
Our method is based on the majorization-minimization principle, which converts the original optimization problem into a sequence of standard low-rank matrix completion problems.
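A loose sketch of that majorization-minimization pattern: majorize the logistic log-likelihood with a quadratic (curvature bound 1/4), so each outer step reduces to a standard low-rank subproblem. The SVD projection and fixed step below are our own simplifications, not MMGN's Gauss-Newton inner solver.

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def mm_1bit_completion(Y, mask, rank, n_iter=100):
    """Hypothetical MM loop for 1-bit matrix completion.

    Y: +/-1 observations (values where mask is False are ignored).
    Each iteration minimizes a quadratic surrogate of the logistic
    negative log-likelihood over rank-`rank` matrices via truncated SVD.
    """
    M = np.zeros_like(Y, dtype=float)
    L = 0.25                                  # logistic curvature bound
    for _ in range(n_iter):
        grad = np.where(mask, -Y * sigmoid(-Y * M), 0.0)
        Z = M - grad / L                      # minimizer of the surrogate
        U, s, Vt = np.linalg.svd(Z, full_matrices=False)
        M = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    return M
```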
arXiv Detail & Related papers (2023-04-27T03:16:52Z)
- A Deep Learning algorithm to accelerate Algebraic Multigrid methods in Finite Element solvers of 3D elliptic PDEs [0.0]
We introduce a novel Deep Learning algorithm that minimizes the computational cost of the algebraic multigrid (AMG) method when used as a finite element solver.
We show experimentally that the pooling successfully reduces the computational cost of processing a large sparse matrix while preserving the features needed for the regression task at hand.
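To make the pooling step concrete: a large sparse system matrix can be reduced to a fixed-size dense summary by binning its nonzeros, roughly as below. The bin count and the use of absolute values are illustrative choices on our part, not the paper's architecture.

```python
import numpy as np
import scipy.sparse as sp

def pool_sparse_matrix(A, out_size=32):
    """Sum-pool a large sparse matrix into a fixed out_size x out_size grid.

    Keeps the coarse sparsity/magnitude pattern for a downstream
    regression network, at a cost linear in the number of nonzeros.
    """
    A = sp.coo_matrix(A)
    rows = (A.row.astype(np.int64) * out_size) // A.shape[0]
    cols = (A.col.astype(np.int64) * out_size) // A.shape[1]
    pooled = np.zeros((out_size, out_size))
    np.add.at(pooled, (rows, cols), np.abs(A.data))
    return pooled
```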
arXiv Detail & Related papers (2023-04-21T09:18:56Z)
- Solving weakly supervised regression problem using low-rank manifold regularization [77.34726150561087]
We solve a weakly supervised regression problem.
Under "weakly" we understand that for some training points the labels are known, for some unknown, and for others uncertain due to the presence of random noise or other reasons such as lack of resources.
In the numerical section, we apply the suggested method to artificial and real datasets using Monte-Carlo modeling.
arXiv Detail & Related papers (2021-04-13T23:21:01Z)
- Robust Low-rank Matrix Completion via an Alternating Manifold Proximal Gradient Continuation Method [47.80060761046752]
Robust low-rank matrix completion (RMC) has been studied extensively for computer vision, signal processing and machine learning applications.
This problem aims to decompose a partially observed matrix into the superposition of a low-rank matrix and a sparse matrix, where the sparse matrix captures the grossly corrupted entries of the matrix.
A widely used approach to tackle RMC is to consider a convex formulation, which minimizes the nuclear norm of the low-rank matrix (to promote low-rankness) and the l1 norm of the sparse matrix (to promote sparsity).
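In symbols, this convex program is typically written as follows (our notation, not necessarily the paper's; $\mathcal{P}_\Omega$ projects onto the observed entries of the data matrix $M$):

```latex
\min_{L,\, S}\ \|L\|_{*} + \lambda \|S\|_{1}
\quad \text{subject to} \quad
\mathcal{P}_{\Omega}(L + S) = \mathcal{P}_{\Omega}(M)
```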
In this paper, motivated by some recent works on low-
arXiv Detail & Related papers (2020-08-18T04:46:22Z)
- Accelerating Ill-Conditioned Low-Rank Matrix Estimation via Scaled Gradient Descent [34.0533596121548]
Low-rank matrix estimation is a canonical problem that finds numerous applications in signal processing, machine learning and imaging science.
We show that ScaledGD achieves the best of both worlds: a convergence rate independent of the condition number of the low-rank matrix, at a per-iteration cost comparable to plain gradient descent.
Our analysis also applies to general loss functions that are restricted strongly convex and smooth over low-rank matrices.
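A minimal numpy sketch of the ScaledGD idea for a factored matrix-completion objective f(L, R) = 0.5 * ||P_Omega(L R^T - M)||^2: the preconditioners (R^T R)^{-1} and (L^T L)^{-1} follow the paper's description, while the initialization and step size here are simplified assumptions.

```python
import numpy as np

def scaled_gd(M, mask, rank, eta=0.5, n_iter=200):
    """Hypothetical ScaledGD sketch for low-rank matrix completion.

    Factor X = L @ R.T and take gradient steps preconditioned by
    (R^T R)^{-1} and (L^T L)^{-1}, which makes progress insensitive
    to the condition number of the underlying low-rank matrix.
    """
    # Simplified spectral initialization (the paper's is more careful).
    U, s, Vt = np.linalg.svd(np.where(mask, M, 0.0), full_matrices=False)
    L = U[:, :rank] * np.sqrt(s[:rank])
    R = Vt[:rank].T * np.sqrt(s[:rank])
    for _ in range(n_iter):
        G = np.where(mask, L @ R.T - M, 0.0)       # residual on observed entries
        gL, gR = G @ R, G.T @ L                    # gradients w.r.t. L and R
        L_new = L - eta * gL @ np.linalg.inv(R.T @ R)
        R = R - eta * gR @ np.linalg.inv(L.T @ L)  # uses the pre-update L
        L = L_new
    return L @ R.T
```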
arXiv Detail & Related papers (2020-05-18T17:17:16Z)
- Fast Generalized Matrix Regression with Applications in Machine Learning [46.79740387890322]
We propose a fast generalized matrix regression (GMR) algorithm, Fast GMR, which uses sketching techniques to solve the GMR problem efficiently.
We apply the Fast GMR algorithm to symmetric positive definite matrix approximation and single-pass singular value decomposition.
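The core sketching idea can be illustrated with a generic sketch-and-solve step for an overdetermined least-squares problem; this is a standard pattern, not Fast GMR's exact estimator.

```python
import numpy as np

def sketch_and_solve(A, B, sketch_size, rng=None):
    """Approximately solve min_X ||A X - B||_F via a Gaussian sketch.

    Compressing the n rows of A and B down to `sketch_size` rows makes
    the regression cheap while preserving the solution up to a
    (1 + eps)-factor with high probability.
    """
    rng = np.random.default_rng(rng)
    n = A.shape[0]
    S = rng.standard_normal((sketch_size, n)) / np.sqrt(sketch_size)
    X, *_ = np.linalg.lstsq(S @ A, S @ B, rcond=None)
    return X
```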
arXiv Detail & Related papers (2019-12-27T07:03:37Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences arising from its use.