Effective Streaming Low-tubal-rank Tensor Approximation via Frequent
Directions
- URL: http://arxiv.org/abs/2108.10129v1
- Date: Mon, 23 Aug 2021 12:53:44 GMT
- Title: Effective Streaming Low-tubal-rank Tensor Approximation via Frequent
Directions
- Authors: Qianxin Yi, Chenhao Wang, Kaidong Wang, and Yao Wang
- Abstract summary: This paper extends a popular matrix sketching technique, namely Frequent Directions, for constructing an efficient and accurate low-tubal-rank tensor approximation.
Specifically, the new algorithm allows the tensor data to be observed slice by slice, but only needs to maintain and incrementally update a much smaller sketch.
The rigorous theoretical analysis shows that the approximation error of the new algorithm can be arbitrarily small when the sketch size grows linearly.
- Score: 9.43704219585568
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Low-tubal-rank tensor approximation has been proposed to analyze large-scale
and multi-dimensional data. However, finding such an accurate approximation is
challenging in the streaming setting, due to the limited computational
resources. To alleviate this issue, this paper extends a popular matrix
sketching technique, namely Frequent Directions, for constructing an efficient
and accurate low-tubal-rank tensor approximation from streaming data based on
the tensor Singular Value Decomposition (t-SVD). Specifically, the new
algorithm allows the tensor data to be observed slice by slice, but only needs
to maintain and incrementally update a much smaller sketch which could capture
the principal information of the original tensor. The rigorous theoretical
analysis shows that the approximation error of the new algorithm can be
arbitrarily small when the sketch size grows linearly. Extensive experimental
results on both synthetic and real multi-dimensional data further reveal the
superiority of the proposed algorithm compared with other sketching algorithms
for getting low-tubal-rank approximation, in terms of both efficiency and
accuracy.
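To make the sketching primitive concrete, here is a minimal NumPy sketch of the matrix Frequent Directions algorithm (Liberty, 2013) that the paper extends. Under the t-SVD algebra, a natural streaming variant would run a sketch of this kind on each frontal slice of the tensor in the Fourier domain along the third mode; that variant, the function name, and the streaming interface below are illustrative assumptions, not the authors' pseudocode.

```python
import numpy as np

def frequent_directions(rows, d, ell):
    """One-pass Frequent Directions sketch (Liberty, 2013); assumes ell <= d.

    After streaming the rows of an n x d matrix A, the ell x d sketch B
    deterministically satisfies
        ||A.T @ A - B.T @ B||_2 <= ||A||_F**2 / ell,
    so the error shrinks as the sketch size grows.
    """
    B = np.zeros((ell, d))
    for row in rows:
        B[-1] = row  # the last sketch row is always zero at this point
        _, s, Vt = np.linalg.svd(B, full_matrices=False)
        # Shrink all squared singular values by the smallest one, zeroing
        # the tail; practical versions keep a 2*ell buffer and shrink only
        # when it fills, amortizing the SVD cost.
        s = np.sqrt(np.maximum(s**2 - s[-1] ** 2, 0.0))
        B = s[:, None] * Vt
    return B

# Toy usage: the sketch captures the principal row space of the stream.
rng = np.random.default_rng(0)
A = rng.standard_normal((500, 40)) @ rng.standard_normal((40, 40))
B = frequent_directions(A, d=40, ell=15)
gap = np.linalg.norm(A.T @ A - B.T @ B, 2)
assert gap <= np.linalg.norm(A, "fro") ** 2 / 15
```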
Related papers
- Low-rank extended Kalman filtering for online learning of neural
networks from streaming data [71.97861600347959]
We propose an efficient online approximate Bayesian inference algorithm for estimating the parameters of a nonlinear function from a potentially non-stationary data stream.
The method is based on the extended Kalman filter (EKF), but uses a novel low-rank plus diagonal decomposition of the posterior precision matrix.
In contrast to methods based on variational inference, our method is fully deterministic, and does not require step-size tuning.
arXiv Detail & Related papers (2023-05-31T03:48:49Z)
- Linearized Wasserstein dimensionality reduction with approximation guarantees [65.16758672591365]
LOT Wassmap is a computationally feasible algorithm to uncover low-dimensional structures in the Wasserstein space.
We show that LOT Wassmap attains correct embeddings and that the quality improves with increased sample size.
We also show how LOT Wassmap significantly reduces the computational cost when compared to algorithms that depend on pairwise distance computations.
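As a rough illustration of the linearized OT idea behind such embeddings (not the authors' code): fix a reference measure, represent each sample measure by its optimal transport map from that reference, and run standard dimensionality reduction on the maps, so pairwise Wasserstein computations become Euclidean ones. The sketch below uses 1-D empirical measures, where the Monge map is a sorting (monotone rearrangement); the measure construction and the PCA step are illustrative assumptions.

```python
import numpy as np

def lot_embed(samples):
    """Linearized-OT embedding of 1-D equal-weight empirical measures.

    With the same number of atoms per measure, the optimal map from a
    sorted reference sends the i-th smallest reference atom to the i-th
    smallest sample atom, so each measure embeds as its sorted atom
    vector; Euclidean distances between embeddings then equal sqrt(n)
    times the Wasserstein-2 distances (exactly, in 1-D).
    """
    return np.stack([np.sort(np.asarray(x)) for x in samples])

# Toy usage: translates of a base measure lie on a line in Wasserstein
# space; PCA on the LOT embedding recovers that one-dimensional structure.
rng = np.random.default_rng(1)
base = rng.standard_normal(100)
shifts = np.linspace(-3.0, 3.0, 25)
E = lot_embed([base + t for t in shifts])
E -= E.mean(axis=0)
_, svals, _ = np.linalg.svd(E, full_matrices=False)
print(svals[:3])  # one dominant singular value; the rest are ~ 0
```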
arXiv Detail & Related papers (2023-02-14T22:12:16Z)
- Error Analysis of Tensor-Train Cross Approximation [88.83467216606778]
We provide accuracy guarantees in terms of the entire tensor for both exact and noisy measurements.
Results are verified by numerical experiments, and may have important implications for the usefulness of cross approximations for high-order tensors.
arXiv Detail & Related papers (2022-07-09T19:33:59Z)
- Fast and Provable Tensor Robust Principal Component Analysis via Scaled Gradient Descent [30.299284742925852]
This paper tackles tensor robust principal component analysis (RPCA), which aims to recover a low-rank tensor from observations contaminated by sparse corruptions.
We show that the proposed algorithm achieves better and more scalable performance than state-of-the-art matrix and tensor RPCA algorithms.
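The scaled-gradient idea is easiest to state in the matrix case. Below is a minimal, hedged sketch (not the paper's tensor algorithm): alternate hard-thresholding for the sparse part with factored gradient steps, preconditioned by the inverse Gram matrices of the factors so that progress does not slow down on ill-conditioned low-rank components. The thresholding rule, step size, and initialization here are illustrative assumptions.

```python
import numpy as np

def hard_threshold(M, k):
    """Keep the k largest-magnitude entries of M, zero out the rest."""
    out = np.zeros_like(M)
    if k > 0:
        idx = np.argsort(np.abs(M), axis=None)[-k:]
        out.flat[idx] = M.flat[idx]
    return out

def scaled_gd_rpca(Y, rank, sparsity, iters=100, eta=0.5):
    """Matrix-case ScaledGD sketch for robust PCA: Y ~ L @ R.T + S."""
    # spectral initialization after removing the largest entries
    S = hard_threshold(Y, sparsity)
    U, s, Vt = np.linalg.svd(Y - S, full_matrices=False)
    L = U[:, :rank] * np.sqrt(s[:rank])
    R = Vt[:rank].T * np.sqrt(s[:rank])
    for _ in range(iters):
        S = hard_threshold(Y - L @ R.T, sparsity)  # refit the sparse part
        G = L @ R.T + S - Y                        # gradient of the fit
        # The inverse Gram factors are the "scaling": they equalize
        # progress across well- and ill-conditioned directions.
        L_new = L - eta * G @ R @ np.linalg.inv(R.T @ R)
        R_new = R - eta * G.T @ L @ np.linalg.inv(L.T @ L)
        L, R = L_new, R_new
    return L, R, S

# Toy usage: a rank-3 matrix plus a few large sparse corruptions.
rng = np.random.default_rng(2)
Y = rng.standard_normal((60, 3)) @ rng.standard_normal((3, 50))
mask = rng.random(Y.shape) < 0.02
L, R, S = scaled_gd_rpca(Y + mask * 10.0, rank=3, sparsity=int(mask.sum()))
print(np.linalg.norm(Y - L @ R.T) / np.linalg.norm(Y))
```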
arXiv Detail & Related papers (2022-06-18T04:01:32Z)
- Scaling and Scalability: Provable Nonconvex Low-Rank Tensor Estimation from Incomplete Measurements [30.395874385570007]
A fundamental task is to faithfully recover tensors from highly incomplete measurements.
We develop an algorithm to directly recover the tensor factors in the Tucker decomposition.
We show that it provably converges at a linear rate independent of the condition number of the ground truth tensor for two canonical problems.
arXiv Detail & Related papers (2021-04-29T17:44:49Z)
- Learning Optical Flow from a Few Matches [67.83633948984954]
We show that the dense correlation volume representation is redundant, and that accurate flow estimation can be achieved with only a fraction of its elements.
Experiments show that our method can reduce computational cost and memory use significantly, while maintaining high accuracy.
arXiv Detail & Related papers (2021-04-05T21:44:00Z)
- Single-Timescale Stochastic Nonconvex-Concave Optimization for Smooth Nonlinear TD Learning [145.54544979467872]
We propose two single-timescale single-loop algorithms that require only one data point each step.
Our results are expressed in the form of simultaneous primal- and dual-side convergence.
arXiv Detail & Related papers (2020-08-23T20:36:49Z)
- Tensor Networks contraction and the Belief Propagation algorithm [0.0]
Belief propagation is a well-studied message-passing algorithm that runs over graphical models.
We show how it can be adapted to the world of PEPS tensor networks and used as an approximate contraction scheme.
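For readers unfamiliar with the primitive being adapted: standard sum-product belief propagation passes messages along edges and multiplies incoming messages into beliefs, and it is exact on trees. A minimal chain-model sketch follows; the PEPS-specific message updates from the paper are not reproduced, and the potentials here are illustrative.

```python
import numpy as np

def chain_bp_marginals(node_pot, edge_pot):
    """Sum-product belief propagation on a chain (exact on trees).

    node_pot: (n, k) nonnegative node potentials.
    edge_pot: (n - 1, k, k) pairwise potentials between nodes i and i + 1.
    Returns the (n, k) matrix of normalized single-node marginals.
    """
    n, k = node_pot.shape
    fwd = np.ones((n, k))  # message arriving at node i from the left
    bwd = np.ones((n, k))  # message arriving at node i from the right
    for i in range(1, n):  # left-to-right pass
        m = (fwd[i - 1] * node_pot[i - 1]) @ edge_pot[i - 1]
        fwd[i] = m / m.sum()
    for i in range(n - 2, -1, -1):  # right-to-left pass
        m = edge_pot[i] @ (bwd[i + 1] * node_pot[i + 1])
        bwd[i] = m / m.sum()
    beliefs = fwd * node_pot * bwd  # multiply incoming messages into beliefs
    return beliefs / beliefs.sum(axis=1, keepdims=True)

# Toy usage: a 4-node binary chain with attractive pairwise potentials;
# evidence on the first node propagates, decaying along the chain.
n, k = 4, 2
node_pot = np.ones((n, k))
node_pot[0] = [0.9, 0.1]
edge_pot = np.tile(np.array([[2.0, 1.0], [1.0, 2.0]]), (n - 1, 1, 1))
print(chain_bp_marginals(node_pot, edge_pot))
```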
arXiv Detail & Related papers (2020-08-10T22:03:25Z)
- Path Sample-Analytic Gradient Estimators for Stochastic Binary Networks [78.76880041670904]
In neural networks with binary activations and/or binary weights, training by gradient descent is complicated.
We propose a new method for this estimation problem combining sampling and analytic approximation steps.
We experimentally show higher accuracy in gradient estimation and demonstrate a more stable and better performing training in deep convolutional models.
arXiv Detail & Related papers (2020-06-04T21:51:21Z)
- Enhanced nonconvex low-rank approximation of tensor multi-modes for tensor completion [1.3406858660972554]
We propose a novel low-rank approximation of tensor multi-modes (LRATM) model.
A block successive upper-bound minimization method-based algorithm is designed to efficiently solve the proposed model.
Numerical results on three types of public multi-dimensional datasets show that our algorithm can recover a variety of low-rank tensors.
arXiv Detail & Related papers (2020-05-28T08:53:54Z)
- Efficient Alternating Least Squares Algorithms for Low Multilinear Rank Approximation of Tensors [6.308492837096872]
We propose a new class of truncated HOSVD algorithms based on alternating least squares (ALS) for efficiently computing the low multilinear rank approximation of tensors.
ALS-based approaches are able to eliminate the redundant computations of the singular vectors of intermediate matrices and are therefore free of data explosion.
Numerical experiments with large-scale tensors from both synthetic and real-world applications demonstrate that ALS-based methods can substantially reduce the total cost of the original ones and are highly scalable for parallel computing.
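As a hedged sketch of what an ALS pass for low multilinear rank approximation looks like (this is the classic HOOI iteration, not the paper's specific truncated-HOSVD variants): each mode's factor is refit while the others are held fixed, and the SVD is taken on a tensor already compressed along the other modes, which is where the savings over naive HOSVD come from. Names and iteration counts are illustrative.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n matricization: the chosen mode becomes the rows."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hooi(T, ranks, iters=10):
    """Higher-Order Orthogonal Iteration (ALS for Tucker approximation).

    Returns factors U[n] with orthonormal columns and a core G such that
    T ~ G x_1 U[0] x_2 U[1] x_3 U[2] ... (multilinear products).
    """
    ndim = T.ndim
    # initialize with truncated HOSVD: leading left singular vectors
    U = [np.linalg.svd(unfold(T, n), full_matrices=False)[0][:, :ranks[n]]
         for n in range(ndim)]
    for _ in range(iters):
        for n in range(ndim):
            # contract T with all other factors first, so the SVD below is
            # taken on a ranks-sized tensor rather than the raw unfolding
            Y = T
            for m in range(ndim):
                if m != n:
                    Y = np.moveaxis(np.tensordot(U[m].T, Y, axes=(1, m)), 0, m)
            U[n] = np.linalg.svd(unfold(Y, n),
                                 full_matrices=False)[0][:, :ranks[n]]
    G = T  # core tensor: T contracted with all factor transposes
    for m in range(ndim):
        G = np.moveaxis(np.tensordot(U[m].T, G, axes=(1, m)), 0, m)
    return G, U

# Toy usage: approximate a noisy rank-(2, 2, 2) tensor.
rng = np.random.default_rng(3)
T = np.einsum("ir,jr,kr->ijk",
              *(rng.standard_normal((n, 2)) for n in (20, 25, 30)))
G, U = hooi(T + 0.01 * rng.standard_normal(T.shape), ranks=(2, 2, 2))
approx = np.einsum("abc,ia,jb,kc->ijk", G, U[0], U[1], U[2])
print(np.linalg.norm(T - approx) / np.linalg.norm(T))
```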
arXiv Detail & Related papers (2020-04-06T11:58:04Z)