Decomposition of linear tensor transformations
- URL: http://arxiv.org/abs/2309.07819v1
- Date: Thu, 14 Sep 2023 16:14:38 GMT
- Title: Decomposition of linear tensor transformations
- Authors: Claudio Turchetti
- Abstract summary: The aim of this paper is to develop a mathematical framework for exact tensor decomposition.
Three different decomposition problems are then derived in the paper.
- Score: 0.0
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: One of the main issues in computing a tensor decomposition is how to choose
the number of rank-one components, since there is no finite algorithm for
determining the rank of a tensor. A commonly used approach for this purpose is
to find a low-dimensional subspace by solving an optimization problem in which
the number of components is fixed in advance. However, even though this approach
is efficient and easy to implement, it often converges to poor local minima and
suffers from outliers and noise. The aim of this paper is to develop a
mathematical framework for exact tensor decomposition that is able to represent
a tensor as the sum of a finite number of low-rank tensors. Within this
framework, three different problems are worked out: i) the decomposition of a
non-negative self-adjoint tensor operator; ii) the decomposition of a linear
tensor transformation; and iii) the decomposition of a generic tensor.
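The object the framework targets can be made concrete. The snippet below is a hypothetical NumPy illustration (not the paper's construction): it builds an order-3 tensor as an exact sum of R rank-one components and checks the standard mode-1 unfolding identity via the Khatri-Rao product.

```python
import numpy as np

# Hypothetical illustration (not the paper's construction): a tensor that is
# exactly the sum of R rank-one components a_r (x) b_r (x) c_r.
rng = np.random.default_rng(0)
n, R = 4, 3
A = rng.standard_normal((n, R))
B = rng.standard_normal((n, R))
C = rng.standard_normal((n, R))

# T[i, j, k] = sum_r A[i, r] * B[j, r] * C[k, r]
T = np.einsum('ir,jr,kr->ijk', A, B, C)

# Khatri-Rao (column-wise Kronecker) product of B and C
KR = np.einsum('jr,kr->jkr', B, C).reshape(n * n, R)

# Mode-1 unfolding identity: T_(1) = A @ KR^T
assert np.allclose(T.reshape(n, n * n), A @ KR.T)
```

Any exact decomposition of the kind the paper develops must recover components of this sum-of-rank-one form.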
Related papers
- Tensor cumulants for statistical inference on invariant distributions [49.80012009682584]
We show that PCA becomes computationally hard at a critical value of the signal's magnitude.
We define a new set of objects, which provide an explicit, near-orthogonal basis for invariants of a given degree.
It also lets us analyze a new problem of distinguishing between different ensembles.
arXiv Detail & Related papers (2024-04-29T14:33:24Z)
- Scalable CP Decomposition for Tensor Learning using GPU Tensor Cores [47.87810316745786]
We propose a compression-based tensor decomposition framework, namely the exascale-tensor, to support exascale tensor decomposition.
Compared to the baselines, the exascale-tensor supports 8,000x larger tensors and achieves a speedup of up to 6.95x.
We also apply our method to two real-world applications, including gene analysis and tensor layer neural networks.
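For context on what such a framework scales up, a textbook CP-ALS baseline (an assumed illustration, not the exascale-tensor method itself) alternately solves a linear least-squares problem for each factor matrix:

```python
import numpy as np

def khatri_rao(X, Y):
    # column-wise Kronecker product of X and Y
    return np.einsum('jr,kr->jkr', X, Y).reshape(-1, X.shape[1])

def cp_als(T, R, iters=500, seed=0):
    # Textbook CP-ALS sketch (assumed baseline, not the paper's method):
    # each factor is the least-squares solution against the matching unfolding.
    n1, n2, n3 = T.shape
    rng = np.random.default_rng(seed)
    A, B, C = (rng.standard_normal((n, R)) for n in (n1, n2, n3))
    for _ in range(iters):
        A = T.reshape(n1, -1) @ np.linalg.pinv(khatri_rao(B, C)).T
        B = np.moveaxis(T, 1, 0).reshape(n2, -1) @ np.linalg.pinv(khatri_rao(A, C)).T
        C = np.moveaxis(T, 2, 0).reshape(n3, -1) @ np.linalg.pinv(khatri_rao(A, B)).T
    return A, B, C
```

On an exactly low-rank tensor, the reconstruction `np.einsum('ir,jr,kr->ijk', A, B, C)` typically matches the input closely after a few hundred sweeps.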
arXiv Detail & Related papers (2023-11-22T21:04:59Z)
- Average-Case Complexity of Tensor Decomposition for Low-Degree Polynomials [93.59919600451487]
"Statistical-computational gaps" occur in many statistical inference tasks.
We consider a model for random order-3 decomposition where one component is slightly larger in norm than the rest.
We show that tensor entries can accurately estimate the largest component when $r \ll n^{3/2}$ but fail to do so when $r \gg n^{3/2}$.
arXiv Detail & Related papers (2022-11-10T00:40:37Z)
- Minimizing low-rank models of high-order tensors: Hardness, span, tight relaxation, and applications [26.602371644194143]
We show that this fundamental tensor problem is NP-hard for any tensor rank higher than one.
For low-enough ranks, the proposed continuous reformulation is amenable to low-complexity gradient-based optimization.
We demonstrate promising results on a number of hard problems, including low density parity check codes and general decoding parity check codes.
arXiv Detail & Related papers (2022-10-16T11:53:42Z)
- Near-Linear Time and Fixed-Parameter Tractable Algorithms for Tensor Decompositions [51.19236668224547]
We study low rank approximation of tensors, focusing on the tensor train and Tucker decompositions.
For tensor train decomposition, we give a bicriteria $(1 + \varepsilon)$-approximation algorithm with a small bicriteria rank and $O(q \cdot \mathrm{nnz}(A))$ running time.
In addition, we extend our algorithm to tensor networks with arbitrary graphs.
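For orientation, the classical TT-SVD routine (an assumed sketch of the standard construction, not the bicriteria algorithm above) computes a tensor-train factorization by peeling off one core per mode with a truncated SVD:

```python
import numpy as np

def tt_svd(T, max_rank):
    # Classical TT-SVD sketch (assumed illustration, not the paper's
    # bicriteria algorithm): sequential truncated SVDs of reshaped residues.
    cores, r_prev = [], 1
    M = np.asarray(T, dtype=float)
    dims = M.shape
    for n_k in dims[:-1]:
        U, s, Vt = np.linalg.svd(M.reshape(r_prev * n_k, -1), full_matrices=False)
        r = min(max_rank, len(s))
        cores.append(U[:, :r].reshape(r_prev, n_k, r))
        M = s[:r, None] * Vt[:r]       # carry the remainder to the next mode
        r_prev = r
    cores.append(M.reshape(r_prev, dims[-1], 1))
    return cores

def tt_reconstruct(cores):
    # Contract the train left to right, then drop the boundary rank-1 modes.
    out = cores[0]
    for G in cores[1:]:
        out = np.tensordot(out, G, axes=([-1], [0]))
    return out.squeeze(axis=(0, -1))
```

When the input's TT-ranks are at most `max_rank`, the reconstruction is exact up to floating-point error.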
arXiv Detail & Related papers (2022-07-15T11:55:09Z)
- Error Analysis of Tensor-Train Cross Approximation [88.83467216606778]
We provide accuracy guarantees in terms of the entire tensor for both exact and noisy measurements.
Results are verified by numerical experiments, and may have important implications for the usefulness of cross approximations for high-order tensors.
arXiv Detail & Related papers (2022-07-09T19:33:59Z)
- Fast Low-Rank Tensor Decomposition by Ridge Leverage Score Sampling [5.740578698172382]
We study Tucker decompositions and use tools from randomized numerical linear algebra called ridge leverage scores.
We show how to use approximate ridge leverage scores to construct a sketched instance for any ridge regression problem.
We demonstrate the effectiveness of our approximate ridge regression algorithm for large, low-rank Tucker decompositions on both synthetic and real-world data.
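As a reference point, a plain (unsketched) HOSVD computes Tucker factors from SVDs of the mode unfoldings; the paper's contribution is to accelerate this kind of computation with ridge leverage score sampling. A minimal sketch, assuming the standard definitions:

```python
import numpy as np

def hosvd(T, ranks):
    # Plain HOSVD sketch (assumed baseline; not the paper's sampled version).
    factors = []
    for mode, r in enumerate(ranks):
        # mode-k unfolding: bring axis `mode` to the front, flatten the rest
        unfolding = np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)
        U, _, _ = np.linalg.svd(unfolding, full_matrices=False)
        factors.append(U[:, :r])
    core = T
    for U in factors:
        # contract the current leading mode with its factor; after one full
        # pass the core's modes are back in their original order
        core = np.tensordot(core, U, axes=([0], [0]))
    return core, factors

def tucker_reconstruct(core, factors):
    T = core
    for U in factors:
        T = np.tensordot(T, U.T, axes=([0], [0]))
    return T
```

With full ranks the reconstruction is exact; truncating the ranks gives the usual quasi-optimal low-multilinear-rank approximation.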
arXiv Detail & Related papers (2021-07-22T13:32:47Z)
- Understanding Deflation Process in Over-parametrized Tensor Decomposition [17.28303004783945]
We study the training dynamics for gradient flow on over-parametrized tensor decomposition problems.
Empirically, such training process often first fits larger components and then discovers smaller components.
arXiv Detail & Related papers (2021-06-11T18:51:36Z)
- Alternating linear scheme in a Bayesian framework for low-rank tensor approximation [5.833272638548154]
We find a low-rank representation for a given tensor by solving a Bayesian inference problem.
We present an algorithm that performs the unscented transform in tensor train format.
arXiv Detail & Related papers (2020-12-21T10:15:30Z)
- Beyond Lazy Training for Over-parameterized Tensor Decomposition [69.4699995828506]
We show that gradient descent on over-parametrized objective could go beyond the lazy training regime and utilize certain low-rank structure in the data.
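A toy version of this setting (an assumed illustration, not either paper's experiment) fits a rank-one symmetric tensor with more components than necessary by plain gradient descent from small initialization:

```python
import numpy as np

# Toy over-parametrized tensor decomposition by gradient descent (an assumed
# illustration of the setting): fit a rank-1 symmetric target with m > 1
# components, starting near zero (the near-lazy regime).
rng = np.random.default_rng(0)
n, m = 5, 8
a = rng.standard_normal(n)
a /= np.linalg.norm(a)                  # unit-norm ground-truth direction
T = np.einsum('i,j,k->ijk', a, a, a)    # rank-1 symmetric target

W = 0.1 * rng.standard_normal((m, n))   # small initialization
lr, losses = 0.05, []
for _ in range(3000):
    That = np.einsum('ri,rj,rk->ijk', W, W, W)
    E = That - T
    losses.append(0.5 * np.sum(E ** 2))
    # gradient of the squared loss w.r.t. W; the three symmetric terms coincide
    W -= lr * 3 * np.einsum('ijk,rj,rk->ri', E, W, W)
```

Tracking `losses` typically shows components aligned with the signal growing first and the loss dropping well below its initial value, the behavior both abstracts describe.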
arXiv Detail & Related papers (2020-10-22T00:32:12Z)
- Robust Tensor Principal Component Analysis: Exact Recovery via Deterministic Model [5.414544833902815]
This paper proposes a new method to analyze Robust tensor principal component analysis (RTPCA)
It is based on the recently developed tensor-tensor product and tensor singular value decomposition (t-SVD)
arXiv Detail & Related papers (2020-08-05T16:26:10Z)
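For context, the t-product and t-SVD that RTPCA builds on amount to an FFT along the third mode followed by slice-wise matrix operations. A minimal sketch, assuming the standard definitions (not the paper's method):

```python
import numpy as np

def t_product(A, B):
    # t-product of third-order tensors: FFT along mode 3, slice-wise
    # matrix products in the frequency domain, inverse FFT.
    Af, Bf = np.fft.fft(A, axis=2), np.fft.fft(B, axis=2)
    Cf = np.einsum('ijk,jlk->ilk', Af, Bf)
    return np.fft.ifft(Cf, axis=2).real

def t_svd(A):
    # Frequency-domain t-SVD factors: one matrix SVD per frontal slice of
    # fft(A, axis=2); their slice-wise product recovers fft(A, axis=2).
    n1, n2, n3 = A.shape
    Af = np.fft.fft(A, axis=2)
    Uf = np.empty((n1, n1, n3), dtype=complex)
    Sf = np.zeros((n1, n2, n3), dtype=complex)
    Vhf = np.empty((n2, n2, n3), dtype=complex)
    for k in range(n3):
        U, s, Vh = np.linalg.svd(Af[:, :, k])
        Uf[:, :, k], Vhf[:, :, k] = U, Vh
        np.fill_diagonal(Sf[:, :, k], s)
    return Uf, Sf, Vhf
```

Truncating the slice-wise singular values of `Sf` gives the low-tubal-rank approximations that t-SVD-based RTPCA methods optimize over.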
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.