Faster Robust Tensor Power Method for Arbitrary Order
- URL: http://arxiv.org/abs/2306.00406v1
- Date: Thu, 1 Jun 2023 07:12:00 GMT
- Title: Faster Robust Tensor Power Method for Arbitrary Order
- Authors: Yichuan Deng, Zhao Song, Junze Yin
- Abstract summary: Tensor power method (TPM) is one of the widely-used techniques in the decomposition of tensors.
We apply a sketching method and achieve a running time of $\widetilde{O}(n^{p-1})$ for a tensor of order $p$ and dimension $n$.
- Score: 15.090593955414137
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Tensor decomposition is a fundamental method used in various areas to deal
with high-dimensional data. \emph{Tensor power method} (TPM) is one of the
widely-used techniques in the decomposition of tensors. This paper presents a
novel tensor power method for decomposing arbitrary order tensors, which
overcomes limitations of existing approaches that are often restricted to
lower-order (less than $3$) tensors or require strong assumptions about the
underlying data structure. We apply a sketching method and achieve a running
time of $\widetilde{O}(n^{p-1})$ for a tensor of order $p$ and dimension $n$.
We provide a detailed analysis for any $p$-th order tensor, which has not been
given in previous works.
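As a point of reference (not the paper's algorithm), the following is a minimal NumPy sketch of the plain, unsketched symmetric tensor power iteration for a dense order-$p$ tensor. The function name, the dense-array input, and the random initialization are illustrative assumptions; the sketching step that yields the $\widetilde{O}(n^{p-1})$ running time is the paper's contribution and is not reproduced here.

```python
import numpy as np


def tensor_power_iteration(T, num_iters=50, tol=1e-8, seed=0):
    """Plain symmetric tensor power iteration for a dense order-p tensor T
    of shape (n, n, ..., n). Returns a unit vector u and the scalar
    lam = T(u, u, ..., u)."""
    p = T.ndim
    n = T.shape[0]
    rng = np.random.default_rng(seed)
    u = rng.standard_normal(n)
    u /= np.linalg.norm(u)
    for _ in range(num_iters):
        # Contract T against u along p-1 modes. Done densely this costs
        # O(n^p) per iteration; the paper's sketching step is what brings
        # the cost down to roughly O(n^{p-1}) (not reproduced here).
        v = T
        for _ in range(p - 1):
            v = np.tensordot(v, u, axes=([v.ndim - 1], [0]))
        v_norm = np.linalg.norm(v)
        if v_norm == 0:
            break
        new_u = v / v_norm
        if np.linalg.norm(new_u - u) < tol:
            u = new_u
            break
        u = new_u
    # Estimated eigenvalue T(u, ..., u): contract all p modes with u.
    lam = T
    for _ in range(p):
        lam = np.tensordot(lam, u, axes=([lam.ndim - 1], [0]))
    return u, float(lam)


# Toy check: a planted rank-1 symmetric order-3 tensor T = a (x) a (x) a.
n = 20
a = np.random.default_rng(1).standard_normal(n)
a /= np.linalg.norm(a)
T = np.einsum("i,j,k->ijk", a, a, a)
u, lam = tensor_power_iteration(T)
print(abs(u @ a), lam)  # both should be close to 1.0
```

Each iteration contracts the tensor with the current iterate along $p-1$ modes and renormalizes; to extract further components one would typically deflate (subtract the recovered rank-1 term) and repeat.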
Related papers
- Overcomplete Tensor Decomposition via Koszul-Young Flattenings [63.01248796170617]
We give a new algorithm for decomposing an $n_1 \times n_2 \times n_3$ tensor as the sum of a minimal number of rank-1 terms.
We show that an even more general class of degree-$d$ flattenings cannot surpass rank $Cn$ for a constant $C = C(d)$.
arXiv Detail & Related papers (2024-11-21T17:41:09Z) - Scalable CP Decomposition for Tensor Learning using GPU Tensor Cores [47.87810316745786]
We propose a compression-based tensor decomposition framework, namely the exascale-tensor, to support exascale tensor decomposition.
Compared to the baselines, the exascale-tensor supports 8,000x larger tensors and achieves a speedup of up to 6.95x.
We also apply our method to two real-world applications, including gene analysis and tensor layer neural networks.
arXiv Detail & Related papers (2023-11-22T21:04:59Z) - Decomposition of linear tensor transformations [0.0]
The aim of this paper is to develop a mathematical framework for exact tensor decomposition.
The paper works through three different problems to derive such exact decompositions.
arXiv Detail & Related papers (2023-09-14T16:14:38Z) - On the Accuracy of Hotelling-Type Tensor Deflation: A Random Tensor
Analysis [16.28927188636617]
A rank-$r$ asymmetric spiked model of the form $\sum_{i=1}^{r} \beta_i A_i + W$ is considered.
We provide a study of Hotelling-type deflation in the large dimensional regime (a minimal deflation sketch appears after this list).
arXiv Detail & Related papers (2022-11-16T16:01:56Z) - Average-Case Complexity of Tensor Decomposition for Low-Degree
Polynomials [93.59919600451487]
"Statistical-computational gaps" occur in many statistical inference tasks.
We consider a model for random order-3 tensor decomposition where one component is slightly larger in norm than the rest.
We show that low-degree polynomials of the tensor entries can accurately estimate the largest component when $r \ll n^{3/2}$ but fail to do so when $r \gg n^{3/2}$.
arXiv Detail & Related papers (2022-11-10T00:40:37Z) - Lower Bounds for the Convergence of Tensor Power Iteration on Random
Overcomplete Models [3.7565501074323224]
We show that polynomially many steps are necessary for convergence of tensor power iteration to any true component.
We prove that a popular objective function for tensor decomposition is strictly increasing along the power iteration path.
arXiv Detail & Related papers (2022-11-07T19:23:51Z) - Near-Linear Time and Fixed-Parameter Tractable Algorithms for Tensor
Decompositions [51.19236668224547]
We study low rank approximation of tensors, focusing on the tensor train and Tucker decompositions.
For tensor train decomposition, we give a bicriteria $(1 + \epsilon)$-approximation algorithm with a small bicriteria rank and $O(q \cdot \mathrm{nnz}(A))$ running time.
In addition, we extend our algorithm to tensor networks with arbitrary graphs.
arXiv Detail & Related papers (2022-07-15T11:55:09Z) - Multi-version Tensor Completion for Time-delayed Spatio-temporal Data [50.762087239885936]
Real-world spatio-temporal data is often incomplete or inaccurate due to various data loading delays.
We propose a low-rank tensor model to predict the updates over time.
We obtain up to 27.2% lower root-mean-squared error compared to the best baseline method.
arXiv Detail & Related papers (2021-05-11T19:55:56Z) - Tensor Completion via Tensor Networks with a Tucker Wrapper [28.83358353043287]
We propose to solve low-rank tensor completion (LRTC) via tensor networks with a Tucker wrapper.
A two-level alternating least squares method is then employed to update the unknown factors.
Numerical simulations show that the proposed algorithm is comparable with state-of-the-art methods.
arXiv Detail & Related papers (2020-10-29T17:54:01Z) - Beyond Lazy Training for Over-parameterized Tensor Decomposition [69.4699995828506]
We show that gradient descent on over-parametrized objective could go beyond the lazy training regime and utilize certain low-rank structure in the data.
arXiv Detail & Related papers (2020-10-22T00:32:12Z)
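For context on the deflation-based entries above (the Hotelling-type deflation paper in particular), here is a minimal sketch of Hotelling-type deflation for a symmetric order-3 tensor. The function name and the symmetric setting are simplifying assumptions; that paper analyzes the asymmetric spiked model in the large-dimensional regime, which is not reproduced here.

```python
import numpy as np


def hotelling_deflation(T, r, num_iters=100, seed=0):
    """Hotelling-type deflation for a symmetric order-3 tensor: repeatedly
    estimate the dominant rank-1 term by power iteration, then subtract it."""
    rng = np.random.default_rng(seed)
    n = T.shape[0]
    T = T.astype(float).copy()
    components = []
    for _ in range(r):
        u = rng.standard_normal(n)
        u /= np.linalg.norm(u)
        for _ in range(num_iters):
            # One power step: v_i = sum_{j,k} T_{ijk} u_j u_k.
            v = np.einsum("ijk,j,k->i", T, u, u)
            u = v / np.linalg.norm(v)
        # Estimated weight beta = T(u, u, u).
        beta = np.einsum("ijk,i,j,k->", T, u, u, u)
        components.append((float(beta), u))
        # Deflate: remove the estimated rank-1 term beta * u (x) u (x) u.
        T -= beta * np.einsum("i,j,k->ijk", u, u, u)
    return components
```

In this sketch each round recovers one approximate rank-1 component and subtracts it before the next round; the cited paper studies how estimation errors accumulate across such deflation steps in high dimensions.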