Fast and Accurate Randomized Algorithms for Low-rank Tensor
Decompositions
- URL: http://arxiv.org/abs/2104.01101v1
- Date: Fri, 2 Apr 2021 15:55:02 GMT
- Title: Fast and Accurate Randomized Algorithms for Low-rank Tensor
Decompositions
- Authors: Linjian Ma and Edgar Solomonik
- Abstract summary: Low-rank Tucker and CP tensor decompositions are powerful tools in data analytics.
We propose a fast and accurate sketched ALS algorithm for Tucker decomposition.
It is further used to accelerate CP decomposition, by using randomized Tucker compression followed by CP decomposition of the Tucker core tensor.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Low-rank Tucker and CP tensor decompositions are powerful tools in data
analytics. The widely used alternating least squares (ALS) method, which solves
a sequence of over-determined least squares subproblems, is inefficient for
large and sparse tensors. We propose a fast and accurate sketched ALS algorithm
for Tucker decomposition, which solves a sequence of sketched rank-constrained
linear least squares subproblems. Theoretical sketch size upper bounds are
provided to achieve $O(\epsilon)$-relative error for each subproblem with two
sketching techniques, TensorSketch and leverage score sampling. Experimental
results show that this new ALS algorithm, combined with a new initialization
scheme based on randomized range finder, yields up to $22.0\%$ relative
decomposition residual improvement compared to the state-of-the-art sketched
randomized algorithm for Tucker decomposition of various synthetic datasets.
This Tucker-ALS algorithm is further used to accelerate CP decomposition, by
using randomized Tucker compression followed by CP decomposition of the Tucker
core tensor. Experimental results show that this algorithm not only converges
faster, but also yields more accurate CP decompositions.
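To make the sketched-ALS idea concrete, here is a minimal NumPy sketch of leverage-score sampling applied to a single over-determined least-squares subproblem $\min_X \|AX - B\|_F$. It is illustrative only: in the paper's Tucker-ALS setting the design matrix is a chain of Kronecker products of factor matrices, the leverage scores are estimated from the factors rather than computed exactly, and each subproblem also carries a rank constraint.

```python
import numpy as np

def leverage_scores(A):
    """Exact leverage scores of a tall matrix A: squared row norms of Q from a thin QR."""
    Q, _ = np.linalg.qr(A)
    return np.sum(Q * Q, axis=1)

def sketched_lstsq(A, B, m, rng):
    """Approximate argmin_X ||A X - B||_F by sampling m rows of (A, B) with
    probability proportional to the leverage scores of A, rescaling each
    sampled row by 1/sqrt(m * p_i) to keep the sketch unbiased."""
    p = leverage_scores(A)
    p /= p.sum()
    idx = rng.choice(A.shape[0], size=m, replace=True, p=p)
    w = 1.0 / np.sqrt(m * p[idx])
    return np.linalg.lstsq(w[:, None] * A[idx], w[:, None] * B[idx], rcond=None)[0]

rng = np.random.default_rng(0)
A = rng.standard_normal((100_000, 20))
B = A @ rng.standard_normal((20, 5)) + 0.01 * rng.standard_normal((100_000, 5))
X = sketched_lstsq(A, B, m=400, rng=rng)  # solves a 400-row problem instead of 100,000
```

TensorSketch, the second technique analyzed in the paper, would replace the sampling step with a structured CountSketch-based projection that can be applied to Kronecker-structured matrices without forming them explicitly.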
Related papers
- Stochastic Optimization for Non-convex Problem with Inexact Hessian
Matrix, Gradient, and Function [99.31457740916815]
Trust-region (TR) and adaptive regularization with cubics (ARC) methods have proven to have very appealing theoretical properties.
We show that TR and ARC methods can simultaneously accommodate inexact computations of the Hessian, gradient, and function values.
- Learning the Positions in CountSketch [49.57951567374372]
We consider sketching algorithms which first compress data by multiplication with a random sketch matrix, and then apply the sketch to quickly solve an optimization problem.
In this work, we propose the first learning-based algorithms that also optimize the locations of the non-zero entries.
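For reference, the classical CountSketch that this line of work starts from compresses with a sparse random matrix having one non-zero per column: each input row is scaled by a random sign and added to one random bucket. The learned variant in this paper optimizes those bucket positions instead of drawing them at random. A minimal NumPy sketch of the classical (random-position) version:

```python
import numpy as np

def countsketch(A, m, rng):
    """Apply an m-row CountSketch to A: row i of A is multiplied by a random
    sign s[i] and added to bucket h[i]. The positions h are the non-zero
    locations that the learned variant optimizes rather than sampling."""
    n = A.shape[0]
    h = rng.integers(0, m, size=n)       # bucket per input row
    s = rng.choice([-1.0, 1.0], size=n)  # random signs
    SA = np.zeros((m, A.shape[1]))
    np.add.at(SA, h, s[:, None] * A)     # unbuffered scatter-add into buckets
    return SA

# Sketch-and-solve for least squares: compress [A | b], then solve exactly.
rng = np.random.default_rng(0)
A, b = rng.standard_normal((50_000, 30)), rng.standard_normal(50_000)
S_Ab = countsketch(np.column_stack([A, b]), m=2_000, rng=rng)
x = np.linalg.lstsq(S_Ab[:, :-1], S_Ab[:, -1], rcond=None)[0]
```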
- Approximately Optimal Core Shapes for Tensor Decompositions [7.1439425093981574]
We give an algorithm with provable approximation guarantees for reconstruction error via connections to higher-order singular values.
Specifically, we introduce a novel Tucker packing problem, which we prove is NP-hard, and give a polynomial-time approximation scheme based on a reduction to the 2-dimensional knapsack problem with a matroid constraint.
- Alternating Mahalanobis Distance Minimization for Stable and Accurate CP Decomposition [4.847980206213335]
We introduce a new formulation for deriving singular values and vectors of a tensor by considering the critical points of a function different from the one used in previous work.
We show that a subsweep of this algorithm can achieve a superlinear convergence rate for exact CPD with known rank.
We then view the algorithm as alternately minimizing a Mahalanobis distance with respect to each factor, with a ground metric that depends on the other factors.
- A rank-adaptive higher-order orthogonal iteration algorithm for truncated Tucker decomposition [3.4666021409328653]
We show that the rank-adaptive HOOI algorithm is advantageous in terms of both accuracy and efficiency.
Further analysis of the HOOI algorithm and the classical alternating least squares method is presented to explain why rank adaptivity can be introduced into the HOOI algorithm and how it works.
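For background, below is a minimal NumPy sketch of the classical fixed-rank HOOI that the rank-adaptive algorithm extends; the paper's contribution, adjusting `ranks` during the sweeps to meet an error tolerance, is not reproduced here.

```python
import numpy as np

def unfold(T, mode):
    """Mode-`mode` matricization of T."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_dot(T, M, mode):
    """Mode-`mode` product T x_mode M, with M of shape (J, T.shape[mode])."""
    return np.moveaxis(np.tensordot(M, T, axes=(1, mode)), 0, mode)

def hooi(T, ranks, n_iter=20):
    """Classical HOOI for a truncated Tucker decomposition with fixed ranks."""
    N = T.ndim
    # HOSVD initialization: leading left singular vectors of each unfolding.
    U = [np.linalg.svd(unfold(T, n), full_matrices=False)[0][:, :r]
         for n, r in enumerate(ranks)]
    for _ in range(n_iter):
        for n in range(N):
            Y = T
            for k in range(N):
                if k != n:
                    Y = mode_dot(Y, U[k].T, k)  # project all modes except n
            U[n] = np.linalg.svd(unfold(Y, n),
                                 full_matrices=False)[0][:, :ranks[n]]
    G = T
    for n in range(N):
        G = mode_dot(G, U[n].T, n)              # core tensor
    return G, U

rng = np.random.default_rng(0)
T = rng.standard_normal((20, 25, 30))
G, U = hooi(T, ranks=(5, 5, 5))
```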
- Unfolding Projection-free SDP Relaxation of Binary Graph Classifier via GDPA Linearization [59.87663954467815]
Algorithm unfolding creates an interpretable and parsimonious neural network architecture by implementing each iteration of a model-based algorithm as a neural layer.
In this paper, leveraging a recent linear algebraic theorem called Gershgorin disc perfect alignment (GDPA), we unroll a projection-free algorithm for the semi-definite programming relaxation (SDR) of a binary graph classifier.
Experimental results show that our unrolled network outperformed pure model-based graph classifiers and achieved performance comparable to pure data-driven networks while using far fewer parameters.
- A Momentum-Assisted Single-Timescale Stochastic Approximation Algorithm for Bilevel Optimization [112.59170319105971]
We propose a new algorithm -- the Momentum-assisted Single-Timescale Stochastic Approximation (MSTSA) algorithm -- for tackling bilevel optimization problems.
MSTSA allows us to control the error in the iterates caused by inexact solutions of the lower-level subproblem.
- Single-Timescale Stochastic Nonconvex-Concave Optimization for Smooth Nonlinear TD Learning [145.54544979467872]
We propose two single-timescale single-loop algorithms that require only one data point each step.
Our results are expressed in the form of simultaneous primal- and dual-side convergence.
- NeCPD: An Online Tensor Decomposition with Optimal Stochastic Gradient Descent [1.0953917735844645]
We propose a new efficient decomposition algorithm named NeCPD for non-convex problems over multi-way online data represented as $N$th-order tensors.
We further apply this method in the field of real-life monitoring using structural datasets.
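As a rough illustration of the kind of update such online CP methods are built on, here is a generic entry-wise stochastic-gradient CP step in NumPy; it is not NeCPD itself (whose contribution is the tuned step size announced in the title), and all names here are illustrative.

```python
import numpy as np

def cp_sgd_step(factors, idx, val, lr=0.05):
    """One SGD update of a rank-R CP model from a single observed entry
    `val` at multi-index `idx`. `factors[n]` has shape (I_n, R)."""
    rows = [F[i].copy() for F, i in zip(factors, idx)]  # freeze current rows
    err = np.prod(rows, axis=0).sum() - val             # model entry - data
    for n, (F, i) in enumerate(zip(factors, idx)):
        others = np.prod([r for m, r in enumerate(rows) if m != n], axis=0)
        F[i] -= lr * err * others                       # gradient step on row i

# Streaming usage: update the model as entries of a 3-way tensor arrive.
rng = np.random.default_rng(0)
factors = [rng.standard_normal((d, 4)) * 0.1 for d in (30, 40, 50)]
for _ in range(10_000):
    idx = tuple(rng.integers(0, d) for d in (30, 40, 50))
    cp_sgd_step(factors, idx, val=1.0)                  # dummy observations
```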
- Optimal Randomized First-Order Methods for Least-Squares Problems [56.05635751529922]
This class of algorithms encompasses several randomized methods among the fastest solvers for least-squares problems.
We focus on two classical embeddings, namely, Gaussian projections and subsampled Hadamard transforms.
Our resulting algorithm yields the best complexity known for solving least-squares problems with no condition number dependence.
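A minimal NumPy sketch of the sketch-and-precondition idea with a Gaussian embedding, using a plain preconditioned gradient iteration for simplicity (the paper derives optimally tuned, momentum-based first-order iterations on top of such embeddings):

```python
import numpy as np

def sketch_precondition_lsq(A, b, m, n_iter=50, step=0.8, rng=None):
    """Gaussian sketch-and-precondition for min_x ||Ax - b||_2 (an
    illustrative version, not the paper's accelerated method). With m about
    10x the column count, A @ inv(R) is well-conditioned, so a constant-step
    preconditioned gradient iteration converges geometrically at a rate
    independent of cond(A)."""
    rng = np.random.default_rng() if rng is None else rng
    n, d = A.shape
    S = rng.standard_normal((m, n)) / np.sqrt(m)  # Gaussian embedding
    _, R = np.linalg.qr(S @ A)                    # R^T R approximates A^T A
    x = np.zeros(d)
    for _ in range(n_iter):
        g = A.T @ (A @ x - b)                     # gradient of 0.5*||Ax-b||^2
        x -= step * np.linalg.solve(R, np.linalg.solve(R.T, g))
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((20_000, 50)) * np.logspace(0, 6, 50)  # cond ~ 1e6
x = sketch_precondition_lsq(A, A @ np.ones(50), m=500, rng=rng)
```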
This list is automatically generated from the titles and abstracts of the papers on this site.