Optimal High-order Tensor SVD via Tensor-Train Orthogonal Iteration
- URL: http://arxiv.org/abs/2010.02482v2
- Date: Mon, 24 Jan 2022 21:45:44 GMT
- Title: Optimal High-order Tensor SVD via Tensor-Train Orthogonal Iteration
- Authors: Yuchen Zhou and Anru R. Zhang and Lili Zheng and Yazhen Wang
- Abstract summary: We propose a new algorithm to estimate the low tensor-train rank structure from the noisy high-order tensor observation.
The merits of the proposed TTOI are illustrated through applications to estimation and dimension reduction of high-order Markov processes.
- Score: 10.034394572576922
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: This paper studies a general framework for high-order tensor SVD. We propose
a new computationally efficient algorithm, tensor-train orthogonal iteration
(TTOI), that aims to estimate the low tensor-train rank structure from the
noisy high-order tensor observation. The proposed TTOI consists of
initialization via TT-SVD (Oseledets, 2011) and new iterative backward/forward
updates. We develop the general upper bound on estimation error for TTOI with
the support of several new representation lemmas on tensor matricizations. By
developing a matching information-theoretic lower bound, we also prove that
TTOI achieves the minimax optimality under the spiked tensor model. The merits
of the proposed TTOI are illustrated through applications to estimation and
dimension reduction of high-order Markov processes, numerical studies, and a
real data example on New York City taxi travel records. The software of the
proposed algorithm is available online.
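The initialization step of TTOI, TT-SVD, factors a tensor into a chain of three-way cores via sequential truncated SVDs of its matricizations. Below is a minimal NumPy sketch of that initialization under the standard formulation; the function names and rank-truncation details are illustrative assumptions, not the authors' released code:

```python
import numpy as np

def tt_svd(X, ranks):
    """Illustrative TT-SVD (Oseledets, 2011): sequential truncated SVDs of
    matricizations produce TT cores G_k of shape (r_{k-1}, n_k, r_k)."""
    dims, d = X.shape, X.ndim
    cores, r_prev, C = [], 1, X
    for k in range(d - 1):
        C = C.reshape(r_prev * dims[k], -1)      # unfold the current remainder
        U, s, Vt = np.linalg.svd(C, full_matrices=False)
        r = min(ranks[k], len(s))                # truncate to the target TT rank
        cores.append(U[:, :r].reshape(r_prev, dims[k], r))
        C = s[:r, None] * Vt[:r]                 # carry the remainder forward
        r_prev = r
    cores.append(C.reshape(r_prev, dims[-1], 1))
    return cores

def tt_to_full(cores):
    """Contract a list of TT cores back into the full tensor."""
    T = cores[0]
    for G in cores[1:]:
        T = np.tensordot(T, G, axes=([T.ndim - 1], [0]))
    return T.reshape([G.shape[1] for G in cores])
```

On an exactly low-TT-rank tensor this recovers the decomposition exactly; under the noisy observation model studied in the paper, TTOI refines this initialization with its backward/forward updates.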
Related papers
- Estimating the Hessian Matrix of Ranking Objectives for Stochastic Learning to Rank with Gradient Boosted Trees [63.18324983384337]
We introduce the first learning-to-rank method for Gradient Boosted Decision Trees (GBDTs).
Our main contribution is a novel estimator for the second-order derivatives, i.e., the Hessian matrix.
We incorporate our estimator into the existing PL-Rank framework, which was originally designed for first-order derivatives only.
arXiv Detail & Related papers (2024-04-18T13:53:32Z) - A Novel Tensor Factorization-Based Method with Robustness to Inaccurate Rank Estimation [9.058215418134209]
We propose a new tensor norm with a dual low-rank constraint, which utilizes the low-rank prior and rank information at the same time.
It is proven theoretically that the resulting tensor completion model can effectively avoid performance degradation caused by inaccurate rank estimation.
Based on this, the total cost at each iteration of the optimization algorithm is reduced to $\mathcal{O}(n^3 \log n + kn^3)$ from the $\mathcal{O}(n^4)$ achieved with standard methods.
arXiv Detail & Related papers (2023-05-19T06:26:18Z) - Truncated tensor Schatten p-norm based approach for spatiotemporal traffic data imputation with complicated missing patterns [77.34726150561087]
We introduce four complicated missing patterns, including element-wise missing and three fiber-like missing cases according to the mode-driven fibers.
Despite the nonconvexity of the objective function in our model, we derive the optimal solutions by integrating the alternating direction method of multipliers (ADMM).
arXiv Detail & Related papers (2022-05-19T08:37:56Z) - Robust lEarned Shrinkage-Thresholding (REST): Robust unrolling for sparse recovery [87.28082715343896]
We consider deep neural networks for solving inverse problems that are robust to forward model mis-specifications.
We design a new robust deep neural network architecture by applying algorithm unfolding techniques to a robust version of the underlying recovery problem.
The proposed REST network is shown to outperform state-of-the-art model-based and data-driven algorithms in both compressive sensing and radar imaging problems.
arXiv Detail & Related papers (2021-10-20T06:15:45Z) - Provable Tensor-Train Format Tensor Completion by Riemannian Optimization [22.166436026482984]
We provide the first theoretical guarantees of the convergence of RGrad algorithm for TT-format tensor completion.
We also propose a novel approach, referred to as the sequential second-order moment method.
arXiv Detail & Related papers (2021-08-27T08:13:58Z) - Spectral Tensor Train Parameterization of Deep Learning Layers [136.4761580842396]
We study low-rank parameterizations of weight matrices with embedded spectral properties in the Deep Learning context.
We show the effects of neural network compression in the classification setting and both compression and improved stability training in the generative adversarial training setting.
arXiv Detail & Related papers (2021-03-07T00:15:44Z) - Multi-View Spectral Clustering Tailored Tensor Low-Rank Representation [105.33409035876691]
This paper explores the problem of multi-view spectral clustering (MVSC) based on tensor low-rank modeling.
We design a novel structured tensor low-rank norm tailored to MVSC.
We show that the proposed method outperforms state-of-the-art methods to a significant extent.
arXiv Detail & Related papers (2020-04-30T11:52:12Z) - Tensor train rank minimization with nonlocal self-similarity for tensor completion [27.727973182796678]
Tensor train (TT) rank has received increasing attention in tensor completion due to its ability to capture the global correlation of high-order tensors.
For third-order visual data, however, direct TT rank minimization does not exploit the full potential of TT rank for high-order tensors.
We propose a TT rank minimization with nonlocal self-similarity for tensor completion by simultaneously exploring the spatial, temporal/spectral, and nonlocal redundancy in visual data.
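The TT rank these methods minimize is the vector of matrix ranks of the tensor's sequential unfoldings. A small NumPy check of that definition (a hypothetical helper for illustration, not code from the paper):

```python
import numpy as np

def tt_ranks(X):
    """TT rank of a d-way tensor: the matrix ranks of the sequential
    unfoldings X_<k> of shape (n_1*...*n_k, n_{k+1}*...*n_d)."""
    dims = X.shape
    return [int(np.linalg.matrix_rank(X.reshape(int(np.prod(dims[:k])), -1)))
            for k in range(1, X.ndim)]
```

For a tensor assembled from TT cores with internal dimensions (2, 3), these unfolding ranks come out as exactly [2, 3], which is what TT-rank minimization penalizes.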
arXiv Detail & Related papers (2020-04-29T15:39:39Z) - Tensor denoising and completion based on ordinal observations [11.193504036335503]
We consider the problem of low-rank tensor estimation from possibly incomplete, ordinal-valued observations.
We propose a multi-linear cumulative link model, develop a rank-constrained M-estimator, and obtain theoretical accuracy guarantees.
We show that the proposed estimator is minimax optimal under the class of low-rank models.
arXiv Detail & Related papers (2020-02-16T07:09:56Z) - Supervised Learning for Non-Sequential Data: A Canonical Polyadic Decomposition Approach [85.12934750565971]
Efficient modelling of feature interactions underpins supervised learning for non-sequential tasks.
To alleviate this issue, it has been proposed to implicitly represent the model parameters as a tensor.
For enhanced expressiveness, we generalize the framework to allow feature mapping to arbitrarily high-dimensional feature vectors.
arXiv Detail & Related papers (2020-01-27T22:38:40Z) - Optimal Sparse Singular Value Decomposition for High-dimensional High-order Data [4.987670632802288]
We consider the sparse tensor singular value decomposition, which aims for reduction on high-dimensional high-order data with certain sparsity structure.
A method named Sparse Alternating Thresholding for Singular Value Decomposition (STAT-SVD) is proposed.
The proposed procedure features a novel double projection & thresholding scheme, which provides a sharp criterion for thresholding in each iteration.
arXiv Detail & Related papers (2018-09-06T02:55:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.