Error Analysis of Tensor-Train Cross Approximation
- URL: http://arxiv.org/abs/2207.04327v3
- Date: Sat, 24 Jun 2023 17:25:37 GMT
- Title: Error Analysis of Tensor-Train Cross Approximation
- Authors: Zhen Qin, Alexander Lidiak, Zhexuan Gong, Gongguo Tang, Michael B.
Wakin and Zhihui Zhu
- Abstract summary: We provide accuracy guarantees in terms of the entire tensor for both exact and noisy measurements.
Results are verified by numerical experiments, and may have important implications for the usefulness of cross approximations for high-order tensors.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Tensor train decomposition is widely used in machine learning and quantum
physics due to its concise representation of high-dimensional tensors,
overcoming the curse of dimensionality. Cross approximation, originally
developed for representing a matrix from a set of selected rows and columns, is
an efficient method for constructing a tensor train decomposition of a tensor
from a few of its entries. While tensor train cross approximation has achieved
remarkable performance in practical applications, its theoretical analysis, in
particular regarding the error of the approximation, is so far lacking. To our
knowledge, existing results only provide element-wise approximation accuracy
guarantees, which lead to a very loose bound when extended to the entire
tensor. In this paper, we bridge this gap by providing accuracy guarantees in
terms of the entire tensor for both exact and noisy measurements. Our results
illustrate how the choice of selected subtensors affects the quality of the
cross approximation and that the approximation error caused by model error
and/or measurement error may not grow exponentially with the order of the
tensor. These results are verified by numerical experiments, and may have
important implications for the usefulness of cross approximations for
high-order tensors, such as those encountered in the description of quantum
many-body states.
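The matrix case that tensor train cross approximation generalizes can be stated in a few lines: given selected row and column index sets, the skeleton approximation rebuilds the matrix as C U^+ R from those slices alone, where U is the row-column intersection. A minimal NumPy sketch of this idea (the function name `cross_approximation` and the toy rank-2 matrix are illustrative, not taken from the paper):

```python
import numpy as np

def cross_approximation(A, rows, cols):
    # Skeleton (cross) approximation: A ~ C @ pinv(U) @ R, where
    # C = A[:, cols] holds the selected columns, R = A[rows, :] the
    # selected rows, and U = A[rows, cols] is their intersection.
    C = A[:, cols]
    R = A[rows, :]
    U = A[np.ix_(rows, cols)]
    return C @ np.linalg.pinv(U) @ R

# For an exactly rank-r matrix, any r rows and r columns whose
# intersection U is nonsingular recover the matrix exactly.
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 2)) @ rng.standard_normal((2, 8))  # rank 2
A_hat = cross_approximation(A, rows=[0, 1], cols=[0, 1])
err = np.linalg.norm(A - A_hat)
```

The tensor train variant applies this row/column selection sequentially along each unfolding of the tensor; the paper's contribution is bounding how the resulting error accumulates over the entire tensor rather than entrywise.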
Related papers
- High-Dimensional Tensor Discriminant Analysis with Incomplete Tensors [5.745276598549783]
We introduce a novel approach to tensor classification with incomplete data, framed within high-dimensional linear discriminant analysis.
Our method demonstrates excellent performance in simulations and real data analysis, even with significant proportions of missing data.
arXiv Detail & Related papers (2024-10-18T18:00:16Z)
- Variance-Reducing Couplings for Random Features [57.73648780299374]
Random features (RFs) are a popular technique to scale up kernel methods in machine learning.
We find couplings to improve RFs defined on both Euclidean and discrete input spaces.
We reach surprising conclusions about the benefits and limitations of variance reduction as a paradigm.
arXiv Detail & Related papers (2024-05-26T12:25:09Z)
- TERM Model: Tensor Ring Mixture Model for Density Estimation [48.622060998018206]
In this paper, we adopt tensor ring decomposition as a density estimator, which significantly reduces the number of permutation candidates.
A mixture model that incorporates multiple permutation candidates with adaptive weights is further designed, resulting in increased expressive flexibility.
This approach acknowledges that suboptimal permutations can offer distinctive information besides that of optimal permutations.
arXiv Detail & Related papers (2023-12-13T11:39:56Z)
- Scalable CP Decomposition for Tensor Learning using GPU Tensor Cores [47.87810316745786]
We propose a compression-based tensor decomposition framework, namely the exascale-tensor, to support exascale tensor decomposition.
Compared to the baselines, the exascale-tensor supports 8,000x larger tensors and a speedup of up to 6.95x.
We also apply our method to two real-world applications, including gene analysis and tensor layer neural networks.
arXiv Detail & Related papers (2023-11-22T21:04:59Z)
- Generative Modeling via Hierarchical Tensor Sketching [12.005736675688917]
We propose a hierarchical tensor-network approach for approximating high-dimensional probability density via empirical distribution.
The complexity of the resulting algorithm scales linearly in the dimension of the high-dimensional density.
arXiv Detail & Related papers (2023-04-11T15:55:13Z)
- Optimizing Orthogonalized Tensor Deflation via Random Tensor Theory [5.124256074746721]
This paper tackles the problem of recovering a low-rank signal tensor with possibly correlated components from a random noisy tensor.
Non-orthogonal components may alter the tensor deflation mechanism, thereby preventing efficient recovery.
An efficient tensor deflation algorithm is proposed by optimizing the parameter introduced in the deflation mechanism.
arXiv Detail & Related papers (2023-02-11T22:23:27Z)
- On the Benefits of Large Learning Rates for Kernel Methods [110.03020563291788]
We show that this phenomenon can be precisely characterized in the context of kernel methods.
We consider the minimization of a quadratic objective in a separable Hilbert space, and show that with early stopping, the choice of learning rate influences the spectral decomposition of the obtained solution.
arXiv Detail & Related papers (2022-02-28T13:01:04Z)
- Scaling and Scalability: Provable Nonconvex Low-Rank Tensor Estimation from Incomplete Measurements [30.395874385570007]
A fundamental task is to faithfully recover tensors from highly incomplete measurements.
We develop an algorithm to directly recover the tensor factors in the Tucker decomposition.
We show that it provably converges at a linear rate independent of the condition number of the ground truth tensor for two canonical problems.
arXiv Detail & Related papers (2021-04-29T17:44:49Z)
- Alternating linear scheme in a Bayesian framework for low-rank tensor approximation [5.833272638548154]
We find a low-rank representation for a given tensor by solving a Bayesian inference problem.
We present an algorithm that performs the unscented transform in tensor train format.
arXiv Detail & Related papers (2020-12-21T10:15:30Z)
- Uncertainty quantification for nonconvex tensor completion: Confidence intervals, heteroscedasticity and optimality [92.35257908210316]
We study the problem of estimating a low-rank tensor given incomplete and corrupted observations.
We find that it attains unimprovable rates of $\ell_2$ accuracy.
arXiv Detail & Related papers (2020-06-15T17:47:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.