Efficient Alternating Least Squares Algorithms for Low Multilinear Rank
Approximation of Tensors
- URL: http://arxiv.org/abs/2004.02583v4
- Date: Fri, 2 Apr 2021 02:45:12 GMT
- Title: Efficient Alternating Least Squares Algorithms for Low Multilinear Rank
Approximation of Tensors
- Authors: Chuanfu Xiao and Chao Yang and Min Li
- Abstract summary: We propose a new class of truncated HOSVD algorithms based on alternating least squares (ALS) for efficiently computing the low multilinear rank approximation of tensors.
ALS-based approaches are able to eliminate the redundant computations of the singular vectors of intermediate matrices and are therefore free of data explosion.
Numerical experiments with large-scale tensors from both synthetic and real-world applications demonstrate that the ALS-based methods can substantially reduce the total cost compared with the original algorithms and are highly scalable for parallel computing.
- Score: 6.308492837096872
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The low multilinear rank approximation, also known as the truncated Tucker
decomposition, has been extensively utilized in many applications that involve
higher-order tensors. Popular methods for low multilinear rank approximation
usually rely directly on the matrix SVD and therefore often suffer from the
notorious intermediate data explosion issue; they are also hard to
parallelize, especially when the input tensor is large. In this paper, we propose a new class of
truncated HOSVD algorithms based on alternating least squares (ALS) for
efficiently computing the low multilinear rank approximation of tensors. The
proposed ALS-based approaches are able to eliminate the redundant computations
of the singular vectors of intermediate matrices and are therefore free of data
explosion. Also, the new methods are more flexible with adjustable convergence
tolerance and are intrinsically parallelizable on high-performance computers.
Theoretical analysis reveals that the ALS iteration in the proposed algorithms
is q-linearly convergent with a relatively wide convergence region. Numerical
experiments with large-scale tensors from both synthetic and real-world
applications demonstrate that the ALS-based methods can substantially reduce the
total cost compared with the original algorithms and are highly scalable for parallel computing.
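To make the setting concrete, below is a minimal NumPy sketch of the classical ALS iteration for truncated Tucker approximation (higher-order orthogonal iteration, HOOI), the template the paper's algorithms refine; the function names and test tensor are illustrative, not taken from the paper's code.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n matricization: move `mode` to the front and flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def multi_mode_product(T, mats, skip=None):
    """Multiply T along each mode by the corresponding matrix, optionally skipping one."""
    for m, M in enumerate(mats):
        if m == skip:
            continue
        T = np.moveaxis(np.tensordot(M, T, axes=(1, m)), 0, m)
    return T

def hooi(T, ranks, n_iter=50, tol=1e-8):
    """ALS (HOOI) iteration for a truncated Tucker approximation of T."""
    N = T.ndim
    # Initialize with the truncated HOSVD: leading left singular vectors per mode.
    U = [np.linalg.svd(unfold(T, n), full_matrices=False)[0][:, :ranks[n]]
         for n in range(N)]
    prev_fit = 0.0
    for _ in range(n_iter):
        for n in range(N):
            # Project T onto all factors but mode n, then update U[n] from the
            # leading left singular vectors of the (much smaller) unfolding.
            Y = multi_mode_product(T, [u.T for u in U], skip=n)
            U[n] = np.linalg.svd(unfold(Y, n), full_matrices=False)[0][:, :ranks[n]]
        G = multi_mode_product(T, [u.T for u in U])        # core tensor
        fit = np.linalg.norm(G) / np.linalg.norm(T)        # monotone fit measure
        if abs(fit - prev_fit) < tol:
            break
        prev_fit = fit
    return G, U

# Usage: rank-(5,5,5) approximation of a random 30x40x50 tensor.
T = np.random.default_rng(0).standard_normal((30, 40, 50))
G, U = hooi(T, ranks=(5, 5, 5))
T_hat = multi_mode_product(G, U)                           # reconstruction
print(np.linalg.norm(T - T_hat) / np.linalg.norm(T))       # relative error
```

As the abstract notes, the per-mode SVDs of intermediate matrices in this template are exactly the redundant, explosion-prone computations that the proposed ALS-based algorithms replace with least-squares updates.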
Related papers
- Stochastic Optimization for Non-convex Problem with Inexact Hessian
Matrix, Gradient, and Function [99.31457740916815]
Trust-region (TR) methods and adaptive regularization using cubics (ARC) have proven to have very appealing theoretical properties.
We show that TR and ARC methods can simultaneously accommodate inexact computations of the Hessian, gradient, and function values.
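As a toy illustration of the kind of inexactness being tolerated (a sketch under stated assumptions, not the paper's algorithm), the trust-region loop below perturbs the gradient and Hessian with noise scaled to the trust-region radius and still makes progress on a quadratic test problem; the test problem, noise model, and all constants are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20
Q = rng.standard_normal((n, n))
A = Q @ Q.T + np.eye(n)                 # SPD Hessian of the test quadratic
b = rng.standard_normal(n)
f = lambda x: 0.5 * x @ A @ x - b @ x   # true objective

x, radius = np.zeros(n), 1.0
for _ in range(200):
    # Inexact oracle: errors bounded by a fraction of the current radius.
    g = A @ x - b + 0.1 * radius * rng.standard_normal(n)
    H = A + 0.1 * radius * rng.standard_normal((n, n))
    # Cauchy-point step: minimize the quadratic model along -g in the region.
    gHg, gg = g @ H @ g, g @ g
    t = radius / np.linalg.norm(g)
    if gHg > 0:
        t = min(t, gg / gHg)
    s = -t * g
    # Standard accept/reject test on actual vs. predicted reduction.
    pred = t * gg - 0.5 * t * t * gHg
    rho = (f(x) - f(x + s)) / pred if pred > 0 else -1.0
    if rho > 0.1:
        x = x + s
    radius = 2 * radius if rho > 0.75 else (0.5 * radius if rho < 0.25 else radius)

print(np.linalg.norm(A @ x - b))        # residual of the true optimality condition
```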
arXiv Detail & Related papers (2023-10-18T10:29:58Z)
- An Efficient Algorithm for Clustered Multi-Task Compressive Sensing [60.70532293880842]
Clustered multi-task compressive sensing is a hierarchical model that solves multiple compressive sensing tasks.
The existing inference algorithm for this model is computationally expensive and does not scale well in high dimensions.
We propose a new algorithm that substantially accelerates model inference by avoiding the explicit computation of the covariance matrices involved.
arXiv Detail & Related papers (2023-09-30T15:57:14Z)
- High-Dimensional Sparse Bayesian Learning without Covariance Matrices [66.60078365202867]
We introduce a new inference scheme that avoids explicit construction of the covariance matrix.
Our approach couples a little-known diagonal estimation result from numerical linear algebra with the conjugate gradient algorithm.
On several simulations, our method scales better than existing approaches in computation time and memory.
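A minimal sketch of the two ingredients this summary names, under an assumed setup: a Bekas-style stochastic diagonal estimator that needs only matrix-vector products, paired with a matrix-free conjugate gradient solve, so the matrix is never formed explicitly. The operator, probe count, and sizes are illustrative assumptions, not the paper's model.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(0)
n = 500
# Matrix-free SPD operator A = I + Phi^T Phi, available only through matvecs.
Phi = rng.standard_normal((50, n))
A = LinearOperator((n, n), matvec=lambda v: v + Phi.T @ (Phi @ v))

def estimate_diag(A, n, n_probes=200):
    """diag(A) ~ average of v * (A v) over Rademacher probe vectors v."""
    acc = np.zeros(n)
    for _ in range(n_probes):
        v = rng.choice([-1.0, 1.0], size=n)
        acc += v * (A @ v)
    return acc / n_probes

d = estimate_diag(A, n)                  # diagonal without forming A
x, info = cg(A, rng.standard_normal(n))  # linear solve without forming A
exact = 1.0 + np.einsum('ij,ij->j', Phi, Phi)
print(info, np.max(np.abs(d - exact) / exact))  # info 0 = CG converged
```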
arXiv Detail & Related papers (2022-02-25T16:35:26Z)
- Tensor and Matrix Low-Rank Value-Function Approximation in Reinforcement Learning [11.317136648551536]
Value-function (VF) approximation is a central problem in Reinforcement Learning (RL).
This paper puts forth a parsimonious non-parametric approach, where we use low-rank algorithms to estimate the VF matrix in an online and model-free fashion.
As VFs tend to be multi-dimensional, we propose replacing the classical VF matrix representation with a tensor representation and, then, use the PARAFAC decomposition to design an online model-free tensor low-rank algorithm.
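A rough sketch of this idea, under assumed sizes, rank, and a synthetic target (none of them from the paper): store the value function as a PARAFAC/CP model and update the factor rows online by stochastic gradient descent on observed entries.

```python
import numpy as np

rng = np.random.default_rng(0)
dims, rank, lr = (10, 10, 8), 4, 0.05     # e.g. 2 state dims x 1 action dim
factors = [0.1 * rng.standard_normal((d, rank)) for d in dims]

def predict(idx):
    """CP model value at a multi-index: sum over rank of factor-row products."""
    rows = [F[i] for F, i in zip(factors, idx)]
    return np.prod(rows, axis=0).sum()

def sgd_update(idx, target):
    """One online step on 0.5*(prediction - target)^2 for a single entry."""
    rows = [F[i] for F, i in zip(factors, idx)]
    err = predict(idx) - target
    for n, (F, i) in enumerate(zip(factors, idx)):
        others = np.prod([r for m, r in enumerate(rows) if m != n], axis=0)
        F[i] -= lr * err * others          # gradient w.r.t. the mode-n row

# Fit a synthetic "value function" from randomly sampled entries.
value = lambda idx: np.sin(sum(idx))       # CP rank <= 4 by angle addition
for _ in range(50000):
    idx = tuple(rng.integers(d) for d in dims)
    sgd_update(idx, value(idx))
test = [(i, j, k) for i in range(4) for j in range(4) for k in range(4)]
print(max(abs(predict(t) - value(t)) for t in test))
```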
arXiv Detail & Related papers (2022-01-21T00:13:54Z)
- Tensor Network Kalman Filtering for Large-Scale LS-SVMs [17.36231167296782]
Least squares support vector machines are used for nonlinear regression and classification.
A framework based on tensor networks and the Kalman filter is presented to alleviate the demanding memory and computational complexities.
Results show that our method can achieve high performance and is particularly useful when alternative methods are computationally infeasible.
arXiv Detail & Related papers (2021-10-26T08:54:03Z)
- Unfolding Projection-free SDP Relaxation of Binary Graph Classifier via
GDPA Linearization [59.87663954467815]
Algorithm unfolding creates an interpretable and parsimonious neural network architecture by implementing each iteration of a model-based algorithm as a neural layer.
In this paper, leveraging a recent linear algebraic theorem called Gershgorin disc perfect alignment (GDPA), we unroll a projection-free algorithm for the semi-definite programming relaxation (SDR) of a binary graph classifier.
Experimental results show that our unrolled network outperformed pure model-based graph classifiers and achieved performance comparable to pure data-driven networks while using far fewer parameters.
arXiv Detail & Related papers (2021-09-10T07:01:15Z)
- Approximation Algorithms for Sparse Best Rank-1 Approximation to
Higher-Order Tensors [1.827510863075184]
Sparse tensor best rank-1 approximation (BR1Approx) is a sparsity generalization of the dense tensor BR1Approx.
Four approximation algorithms are proposed; they are easy to implement and have low computational complexity.
We provide numerical experiments on synthetic and real data to illustrate the effectiveness of the proposed algorithms.
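Since the summary does not specify the four algorithms, here is a hedged sketch of one natural approach to this problem, not necessarily one of the paper's: the higher-order power method for best rank-1 approximation, with each factor hard-thresholded to its k largest entries to enforce sparsity. The tensor size, sparsity levels, and all names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def hard_threshold(v, k):
    """Keep the k largest-magnitude entries of v, zero the rest, normalize."""
    w = np.zeros_like(v)
    keep = np.argsort(np.abs(v))[-k:]
    w[keep] = v[keep]
    return w / np.linalg.norm(w)

def sparse_rank1(T, ks, n_iter=100):
    """Truncated higher-order power iteration for a sparse rank-1 approximation."""
    u = [hard_threshold(rng.standard_normal(d), k) for d, k in zip(T.shape, ks)]
    for _ in range(n_iter):
        for n in range(T.ndim):
            # Contract T with every factor except mode n (highest mode first,
            # so the remaining axis indices stay valid), then re-sparsify.
            v = T
            for m in range(T.ndim - 1, -1, -1):
                if m != n:
                    v = np.tensordot(v, u[m], axes=(m, 0))
            u[n] = hard_threshold(v, ks[n])
    lam = T
    for m in range(T.ndim - 1, -1, -1):    # weight lam = T contracted with all factors
        lam = np.tensordot(lam, u[m], axes=(m, 0))
    return float(lam), u

# Usage: sparse rank-1 component of a random 20x20x20 tensor, 5 nonzeros per mode.
lam, u = sparse_rank1(rng.standard_normal((20, 20, 20)), ks=(5, 5, 5))
print(lam, [int((f != 0).sum()) for f in u])   # weight and per-factor sparsity
```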
arXiv Detail & Related papers (2020-12-05T18:13:14Z)
- Self-Weighted Robust LDA for Multiclass Classification with Edge Classes [111.5515086563592]
A novel self-weighted robust LDA with an ℓ2,1-norm based between-class distance criterion, called SWRLDA, is proposed for multi-class classification.
The proposed SWRLDA is easy to implement and converges fast in practice.
arXiv Detail & Related papers (2020-09-24T12:32:55Z)
- Enhanced nonconvex low-rank approximation of tensor multi-modes for
tensor completion [1.3406858660972554]
We propose a novel low-rank approximation of tensor multi-modes (LRATM) model for tensor completion.
An algorithm based on a block successive upper-bound minimization method is designed to efficiently solve the proposed model.
Numerical results on three types of public multi-dimensional datasets show that our algorithm can recover a variety of low-rank tensors.
arXiv Detail & Related papers (2020-05-28T08:53:54Z)
- Optimal Randomized First-Order Methods for Least-Squares Problems [56.05635751529922]
This class of algorithms encompasses several randomized methods that are among the fastest solvers for least-squares problems.
We focus on two classical embeddings, namely, Gaussian projections and subsampled Hadamard transforms.
Our resulting algorithm yields the best known complexity for solving least-squares problems, with no dependence on the condition number.
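A minimal sketch of the sketch-and-precondition idea behind such randomized solvers (sizes, sketch dimension, and the test problem are assumptions): a Gaussian embedding of A yields a preconditioner from a QR factorization, after which an iterative solver converges at a rate independent of A's condition number. A subsampled Hadamard transform, as analyzed in the paper, would replace the Gaussian sketch below.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, lsqr

rng = np.random.default_rng(0)
m, n = 4000, 50
A = rng.standard_normal((m, n)) @ np.diag(np.logspace(0, 4, n))  # ill-conditioned
b = rng.standard_normal(m)

# Gaussian embedding: sketch down to ~4n rows, then QR of the sketch for R.
S = rng.standard_normal((4 * n, m)) / np.sqrt(4 * n)
R = np.linalg.qr(S @ A)[1]            # A R^{-1} is well-conditioned w.h.p.

# Solve min ||A R^{-1} y - b||_2 with LSQR, then recover x = R^{-1} y.
op = LinearOperator((m, n),
                    matvec=lambda y: A @ np.linalg.solve(R, y),
                    rmatvec=lambda z: np.linalg.solve(R.T, A.T @ z))
y = lsqr(op, b, atol=1e-12, btol=1e-12)[0]
x = np.linalg.solve(R, y)

x_ref = np.linalg.lstsq(A, b, rcond=None)[0]  # dense reference solution
print(np.linalg.norm(x - x_ref) / np.linalg.norm(x_ref))
```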
arXiv Detail & Related papers (2020-02-21T17:45:32Z)