Fast Low-Rank Tensor Decomposition by Ridge Leverage Score Sampling
- URL: http://arxiv.org/abs/2107.10654v1
- Date: Thu, 22 Jul 2021 13:32:47 GMT
- Title: Fast Low-Rank Tensor Decomposition by Ridge Leverage Score Sampling
- Authors: Matthew Fahrbach, Mehrdad Ghadiri, Thomas Fu
- Abstract summary: We study Tucker decompositions and use tools from randomized numerical linear algebra called ridge leverage scores.
We show how to use approximate ridge leverage scores to construct a sketched instance for any ridge regression problem.
We demonstrate the effectiveness of our approximate ridge regression algorithm for large, low-rank Tucker decompositions on both synthetic and real-world data.
- Score: 5.740578698172382
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Low-rank tensor decomposition generalizes low-rank matrix approximation and
is a powerful technique for discovering low-dimensional structure in
high-dimensional data. In this paper, we study Tucker decompositions and use
tools from randomized numerical linear algebra called ridge leverage scores to
accelerate the core tensor update step in the widely-used alternating least
squares (ALS) algorithm. Updating the core tensor, a severe bottleneck in ALS,
is a highly-structured ridge regression problem where the design matrix is a
Kronecker product of the factor matrices. We show how to use approximate ridge
leverage scores to construct a sketched instance for any ridge regression
problem such that the solution vector for the sketched problem is a
$(1+\varepsilon)$-approximation to the original instance. Moreover, we show
that classical leverage scores suffice as an approximation, which then allows
us to exploit the Kronecker structure and update the core tensor in time that
depends predominantly on the rank and the sketching parameters (i.e., sublinear
in the size of the input tensor). We also give upper bounds for ridge leverage
scores as rows are removed from the design matrix (e.g., if the tensor has
missing entries), and we demonstrate the effectiveness of our approximate ridge
regression algorithm for large, low-rank Tucker decompositions on both
synthetic and real-world data.
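To make the sketching step above concrete, here is a minimal NumPy sketch of ridge leverage score sampling for a generic dense design matrix: it computes the ridge leverage scores $\tau_i(\lambda) = a_i^\top (A^\top A + \lambda I)^{-1} a_i$, samples and reweights rows with probabilities proportional to them, and solves the sketched ridge regression problem. This illustrates the general technique only, not the paper's algorithm (which never materializes the Kronecker design matrix); the matrix sizes, sample count `m`, regularization `lam`, and function names are illustrative assumptions.

```python
import numpy as np

def ridge_leverage_scores(A, lam):
    """Ridge leverage scores tau_i = a_i^T (A^T A + lam*I)^{-1} a_i."""
    d = A.shape[1]
    G = A.T @ A + lam * np.eye(d)          # regularized Gram matrix
    # tau_i is the i-th diagonal entry of A G^{-1} A^T
    return np.einsum("ij,ij->i", A @ np.linalg.inv(G), A)

def sketched_ridge_solve(A, b, lam, m, rng):
    """Sample m rows with probability proportional to the ridge leverage
    scores, reweight them, and solve the sketched ridge problem."""
    tau = ridge_leverage_scores(A, lam)
    p = tau / tau.sum()
    idx = rng.choice(A.shape[0], size=m, replace=True, p=p)
    w = 1.0 / np.sqrt(m * p[idx])          # importance-sampling reweighting
    SA, Sb = w[:, None] * A[idx], w * b[idx]
    d = A.shape[1]
    return np.linalg.solve(SA.T @ SA + lam * np.eye(d), SA.T @ Sb)

rng = np.random.default_rng(0)
n, d, lam = 20000, 50, 1.0                 # illustrative problem sizes
A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)

x_full = np.linalg.solve(A.T @ A + lam * np.eye(d), A.T @ b)
x_sk = sketched_ridge_solve(A, b, lam, m=2000, rng=rng)

def ridge_cost(x):
    return np.sum((A @ x - b) ** 2) + lam * np.sum(x ** 2)

print("relative cost increase:", ridge_cost(x_sk) / ridge_cost(x_full) - 1)
```

On a well-conditioned random instance like this one, the sketched solution's ridge objective is typically within a small relative factor of the optimum even with m far smaller than n, which is the behavior the paper's $(1+\varepsilon)$-approximation guarantee formalizes.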
Related papers
- Highly Adaptive Ridge [84.38107748875144]
We propose a regression method that achieves a dimension-free $n^{-2/3}$ $L^2$ convergence rate in the class of right-continuous functions with square-integrable sectional derivatives.
HAR is exactly kernel ridge regression with a specific data-adaptive kernel based on a saturated zero-order tensor-product spline basis expansion.
We demonstrate empirical performance better than state-of-the-art algorithms, particularly on small datasets.
arXiv Detail & Related papers (2024-10-03T17:06:06Z) - On Learning Gaussian Multi-index Models with Gradient Flow [57.170617397894404]
We study gradient flow on the multi-index regression problem for high-dimensional Gaussian data.
We consider a two-timescale algorithm, whereby the low-dimensional link function is learnt with a non-parametric model infinitely faster than the subspace parametrizing the low-rank projection.
arXiv Detail & Related papers (2023-10-30T17:55:28Z) - Noise-Augmented $\ell_0$ Regularization of Tensor Regression with Tucker
Decomposition [13.38920246791158]
Low-rank decomposition-based regression methods with tensor predictors exploit the structural information in tensor predictors.
We propose a method named NA$_0$CT$^2$ to regularize the parameters in tensor regression.
arXiv Detail & Related papers (2023-02-19T02:40:35Z) - Learning Graphical Factor Models with Riemannian Optimization [70.13748170371889]
This paper proposes a flexible algorithmic framework for graph learning under low-rank structural constraints.
The problem is expressed as penalized maximum likelihood estimation of an elliptical distribution.
We leverage geometries of positive definite matrices and positive semi-definite matrices of fixed rank that are well suited to elliptical models.
arXiv Detail & Related papers (2022-10-21T13:19:45Z) - Graph Polynomial Convolution Models for Node Classification of
Non-Homophilous Graphs [52.52570805621925]
We investigate efficient learning from higher-order graph convolutions and learning directly from the adjacency matrix for node classification.
We show that the resulting model leads to new graphs and a residual scaling parameter.
We demonstrate that the proposed methods obtain improved accuracy for node classification on non-homophilous graphs.
arXiv Detail & Related papers (2022-09-12T04:46:55Z) - Subquadratic Kronecker Regression with Applications to Tensor
Decomposition [4.391912559062643]
We present the first subquadratic-time algorithm for solving Kronecker regression to a $(1+\varepsilon)$-approximation (see the sketch of the underlying leverage-score factorization after this list).
Our techniques combine leverage score sampling and iterative methods.
We demonstrate the speed and accuracy of this Kronecker regression algorithm on synthetic data and real-world image tensors.
arXiv Detail & Related papers (2022-09-11T14:24:19Z) - Semi-Supervised Subspace Clustering via Tensor Low-Rank Representation [64.49871502193477]
We propose a novel semi-supervised subspace clustering method, which is able to simultaneously augment the initial supervisory information and construct a discriminative affinity matrix.
Comprehensive experimental results on six commonly-used benchmark datasets demonstrate the superiority of our method over state-of-the-art methods.
arXiv Detail & Related papers (2022-05-21T01:47:17Z) - Low-rank Tensor Learning with Nonconvex Overlapped Nuclear Norm
Regularization [44.54772242784423]
We develop an efficient nonconvex regularization algorithm for low-rank tensor learning.
The proposed algorithm can avoid expensive folding/unfolding operations.
Experiments show that the proposed algorithm is faster and more space-efficient than the existing state-of-the-art.
arXiv Detail & Related papers (2022-05-06T07:47:10Z) - Solving weakly supervised regression problem using low-rank manifold
regularization [77.34726150561087]
We solve a weakly supervised regression problem.
By "weakly" we mean that for some training points the labels are known, for some they are unknown, and for others they are uncertain due to random noise or other causes, such as a lack of resources.
In the numerical section, we apply the suggested method to artificial and real datasets using Monte Carlo modeling.
arXiv Detail & Related papers (2021-04-13T23:21:01Z) - Low-Rank and Sparse Enhanced Tucker Decomposition for Tensor Completion [3.498620439731324]
We introduce a unified low-rank and sparse enhanced Tucker decomposition model for tensor completion.
Our model possesses a sparse regularization term to promote a sparse core tensor, which is beneficial for tensor data compression.
Remarkably, our model is able to handle different types of real-world data sets, since it exploits the potential periodicity and inherent correlation properties that appear in tensors.
arXiv Detail & Related papers (2020-10-01T12:45:39Z) - Sparse and Low-Rank High-Order Tensor Regression via Parallel Proximal
Method [6.381138694845438]
We propose the Sparse and Low-rank Regression model for large-scale data with high-order structures.
Our model enforces sparsity and low-rankness of the tensor coefficient.
Our model's predictions exhibit meaningful interpretations on the video dataset.
arXiv Detail & Related papers (2019-11-29T06:25:36Z)
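As referenced in the Subquadratic Kronecker Regression entry above, leverage-score-based Kronecker regression methods rest on a standard identity: the classical leverage scores of a Kronecker product factorize over its factors, $\ell_{(i,j)}(A_1 \otimes A_2) = \ell_i(A_1)\,\ell_j(A_2)$, so they can be computed without ever forming the full design matrix. Below is a minimal NumPy check of this identity; the matrix sizes are illustrative assumptions.

```python
import numpy as np

def leverage_scores(A):
    """Classical leverage scores via a thin QR: squared row norms of Q
    (valid when A has full column rank, as it does here)."""
    Q, _ = np.linalg.qr(A)
    return np.sum(Q ** 2, axis=1)

rng = np.random.default_rng(1)
A1 = rng.standard_normal((30, 4))   # illustrative factor matrices
A2 = rng.standard_normal((25, 3))

# Direct scores of the explicit Kronecker product (750 x 12 here, but
# n1*n2 x d1*d2 in general -- exactly what one wants to avoid forming).
direct = leverage_scores(np.kron(A1, A2))

# Factorized scores: outer product of the factors' scores, flattened in
# the same (i, j) row order that np.kron produces.
factored = np.outer(leverage_scores(A1), leverage_scores(A2)).ravel()

print("max abs difference:", np.max(np.abs(direct - factored)))  # ~1e-15
```

This factorization is also what allows the main paper above to update the Tucker core tensor in time sublinear in the size of the input tensor, once classical leverage scores are shown to suffice as approximations to the ridge leverage scores.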