Multidimensional Data Analysis Based on Block Convolutional Tensor
Decomposition
- URL: http://arxiv.org/abs/2308.01768v2
- Date: Fri, 11 Aug 2023 21:23:56 GMT
- Title: Multidimensional Data Analysis Based on Block Convolutional Tensor
Decomposition
- Authors: Mahdi Molavi, Mansoor Rezghi, and Tayyebeh Saeedi
- Abstract summary: We propose a new tensor-tensor product called the $\star_c{}\text{-Product}$, based on block convolution with reflective boundary conditions.
We also introduce a tensor decomposition based on our $\star_c{}\text{-Product}$ for arbitrary-order tensors.
Compared to t-SVD, our new decomposition has lower complexity, and experiments show that it yields higher-quality results in applications such as classification and compression.
- Score: 1.1674893622721483
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Tensor decompositions are powerful tools for analyzing multi-dimensional data
in its original format. Besides tensor decompositions like Tucker and CP,
the tensor SVD (t-SVD), which is based on the t-product of tensors, is another
extension of the SVD to tensors; it was developed recently and has found numerous
applications in the analysis of high-dimensional data. This paper offers a new
insight into the t-product and shows that this product is a block convolution
of two tensors with periodic boundary conditions. Based on this viewpoint, we
propose a new tensor-tensor product, called the $\star_c{}\text{-Product}$, based
on block convolution with reflective boundary conditions. Using a tensor
framework, this product can easily be extended to tensors of arbitrary order.
Additionally, we introduce a tensor decomposition based on our
$\star_c{}\text{-Product}$ for tensors of arbitrary order. Compared to the t-SVD, our
new decomposition has lower computational complexity, and experiments show that it yields
higher-quality results in applications such as classification and compression.
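To make the block-convolution viewpoint concrete, here is a minimal NumPy/SciPy sketch. The `t_product` function is the standard FFT-based t-product (block convolution with periodic boundary conditions is diagonalized by the DFT, so the product reduces to facewise matrix products in the Fourier domain). The `c_product` function is only an assumption, not the paper's exact algorithm: it is a plausible DCT-domain analogue, motivated by the known association between reflective boundary conditions and the discrete cosine transform.

```python
import numpy as np
from scipy.fft import dct, idct

def t_product(A, B):
    """Classical t-product of A (n1 x n2 x n3) and B (n2 x n4 x n3).

    Block convolution with periodic boundary conditions along the third
    mode becomes a facewise matrix product in the Fourier domain.
    """
    n3 = A.shape[2]
    Ah = np.fft.fft(A, axis=2)
    Bh = np.fft.fft(B, axis=2)
    Ch = np.empty((A.shape[0], B.shape[1], n3), dtype=complex)
    for k in range(n3):
        Ch[:, :, k] = Ah[:, :, k] @ Bh[:, :, k]  # one product per frontal slice
    return np.fft.ifft(Ch, axis=2).real

def c_product(A, B):
    """HYPOTHETICAL sketch of a star_c-style product (not from the paper).

    Reflective boundary conditions are classically linked to the DCT, so
    this sketch simply takes facewise products in a DCT-II domain; the
    transform stays real-valued, unlike the complex FFT used by t_product.
    """
    n3 = A.shape[2]
    Ah = dct(A, type=2, axis=2, norm='ortho')
    Bh = dct(B, type=2, axis=2, norm='ortho')
    Ch = np.empty((A.shape[0], B.shape[1], n3))
    for k in range(n3):
        Ch[:, :, k] = Ah[:, :, k] @ Bh[:, :, k]
    return idct(Ch, type=2, axis=2, norm='ortho')
```

Under this transform-domain view, a t-SVD-style decomposition follows by taking a matrix SVD of each frontal slice in the transform domain. Replacing the complex FFT with a real-valued transform such as the DCT is one plausible source of the lower complexity the abstract mentions, though the paper's exact construction may differ.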
Related papers
- Tensor cumulants for statistical inference on invariant distributions [49.80012009682584]
We show that PCA becomes computationally hard at a critical value of the signal's magnitude.
We define a new set of objects, which provide an explicit, near-orthogonal basis for invariants of a given degree.
This basis also lets us analyze a new problem of distinguishing between different ensembles.
arXiv Detail & Related papers (2024-04-29T14:33:24Z)
- Scalable CP Decomposition for Tensor Learning using GPU Tensor Cores [47.87810316745786]
We propose a compression-based tensor decomposition framework, namely the exascale-tensor, to support exascale tensor decomposition.
Compared to the baselines, the exascale-tensor supports 8,000x larger tensors and achieves speedups of up to 6.95x.
We also apply our method to two real-world applications, including gene analysis and tensor layer neural networks.
arXiv Detail & Related papers (2023-11-22T21:04:59Z)
- Decomposition of linear tensor transformations [0.0]
The aim of this paper is to develop a mathematical framework for exact tensor decomposition.
In the paper, three different problems are worked out to derive this decomposition.
arXiv Detail & Related papers (2023-09-14T16:14:38Z)
- Faster Robust Tensor Power Method for Arbitrary Order [15.090593955414137]
The tensor power method (TPM) is one of the most widely used techniques for tensor decomposition (a minimal power-iteration sketch appears after this list).
We apply a sketching method and achieve a running time of $\widetilde{O}(n^{p-1})$ for an order-$p$, dimension-$n$ tensor.
arXiv Detail & Related papers (2023-06-01T07:12:00Z)
- Decomposable Sparse Tensor on Tensor Regression [1.370633147306388]
We consider sparse low-rank tensor-on-tensor regression, where the predictors $\mathcal{X}$ and responses $\mathcal{Y}$ are both high-dimensional tensors.
We propose a fast solution based on a stagewise search composed of a contraction part and a generation part, which are optimized alternately.
arXiv Detail & Related papers (2022-12-09T18:16:41Z)
- Average-Case Complexity of Tensor Decomposition for Low-Degree Polynomials [93.59919600451487]
"Statistical-computational gaps" occur in many statistical inference tasks.
We consider a model for random order-3 tensor decomposition where one component is slightly larger in norm than the rest.
We show that low-degree polynomials in the tensor entries can accurately estimate the largest component when $r \ll n^{3/2}$ but fail to do so when $r \gg n^{3/2}$.
arXiv Detail & Related papers (2022-11-10T00:40:37Z)
- Near-Linear Time and Fixed-Parameter Tractable Algorithms for Tensor Decompositions [51.19236668224547]
We study low rank approximation of tensors, focusing on the tensor train and Tucker decompositions.
For tensor train decomposition, we give a bicriteria $(1 + \epsilon)$-approximation algorithm with a small bicriteria rank and $O(q \cdot \mathrm{nnz}(A))$ running time.
In addition, we extend our algorithm to tensor networks with arbitrary graphs.
arXiv Detail & Related papers (2022-07-15T11:55:09Z)
- MTC: Multiresolution Tensor Completion from Partial and Coarse Observations [49.931849672492305]
Existing completion formulations mostly rely on partial observations from a single tensor.
We propose an efficient Multi-resolution Tensor Completion model (MTC) to solve the problem.
arXiv Detail & Related papers (2021-06-14T02:20:03Z)
- Multi-version Tensor Completion for Time-delayed Spatio-temporal Data [50.762087239885936]
Real-world spatio-temporal data is often incomplete or inaccurate due to various data loading delays.
We propose a low-rank tensor model to predict the updates over time.
We obtain up to 27.2% lower root-mean-squared error compared to the best baseline method.
arXiv Detail & Related papers (2021-05-11T19:55:56Z)
- Robust Tensor Principal Component Analysis: Exact Recovery via Deterministic Model [5.414544833902815]
This paper proposes a new method to analyze robust tensor principal component analysis (RTPCA).
It is based on the recently developed tensor-tensor product and tensor singular value decomposition (t-SVD).
arXiv Detail & Related papers (2020-08-05T16:26:10Z)
- Distributed Non-Negative Tensor Train Decomposition [3.2264685979617655]
High-dimensional data is often represented as multidimensional arrays, a.k.a. tensors.
The presence of latent (not directly observable) structures in the tensor allows a unique representation and compression of the data.
We introduce a distributed non-negative tensor-train decomposition and demonstrate its scalability and compression on synthetic and real-world big datasets.
arXiv Detail & Related papers (2020-08-04T05:35:57Z)
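For the tensor power method entry above, here is a minimal sketch of the classic order-3 power iteration. It shows only the basic technique; the sketching-based acceleration from that paper is not reproduced, and the function name is ours.

```python
import numpy as np

def tensor_power_iteration(T, n_iters=100, seed=0):
    """Classic rank-1 tensor power method for a symmetric order-3
    tensor T of shape (n, n, n): iterate x <- T(I, x, x), normalized.
    """
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(T.shape[0])
    x /= np.linalg.norm(x)
    for _ in range(n_iters):
        y = np.einsum('ijk,j,k->i', T, x, x)  # contract T along modes 2 and 3
        x = y / np.linalg.norm(y)
    lam = np.einsum('ijk,i,j,k->', T, x, x, x)  # recovered eigenvalue T(x, x, x)
    return lam, x
```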
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.