Grassmannian Optimization for Online Tensor Completion and Tracking with
the t-SVD
- URL: http://arxiv.org/abs/2001.11419v4
- Date: Thu, 14 Apr 2022 19:36:41 GMT
- Title: Grassmannian Optimization for Online Tensor Completion and Tracking with
the t-SVD
- Authors: Kyle Gilman, Davoud Ataee Tarzanagh, and Laura Balzano
- Abstract summary: We show the t-SVD is a specialization of the well-studied block-term decomposition for third-order tensors.
We present an algorithm under this model that can track changing free submodules from incomplete streaming 2-D data.
Our method is competitive in accuracy but much faster than state-of-the-art tensor completion algorithms in real applications.
- Score: 10.137631021498109
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a new fast streaming algorithm for the tensor completion problem
of imputing missing entries of a low-tubal-rank tensor using the tensor
singular value decomposition (t-SVD) algebraic framework. We show the t-SVD is
a specialization of the well-studied block-term decomposition for third-order
tensors, and we present an algorithm under this model that can track changing
free submodules from incomplete streaming 2-D data. The proposed algorithm uses
principles from incremental gradient descent on the Grassmann manifold of
subspaces to solve the tensor completion problem with linear complexity and
constant memory in the number of time samples. We provide a local expected
linear convergence result for our algorithm. Our method is competitive in
accuracy but much faster than state-of-the-art tensor completion algorithms in
real applications, recovering temporal chemo-sensing and MRI data under
limited sampling.
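The incremental Grassmannian update the abstract describes can be illustrated with a GROUSE-style rank-one geodesic step, the classical building block for this family of streaming subspace methods. The sketch below is an illustrative assumption (function name, step-size handling, and the plain matrix setting are ours), not the authors' exact t-SVD algorithm:

```python
import numpy as np

def grouse_step(U, v, omega, eta=1.0):
    """One incremental gradient step on the Grassmann manifold (GROUSE-style).

    U     : (n, r) orthonormal basis of the current subspace estimate.
    v     : (n,) data vector, observed only on the boolean mask `omega`.
    eta   : step-size scaling (hypothetical parameterization).
    """
    U_om = U[omega]                                       # basis restricted to observed rows
    w, *_ = np.linalg.lstsq(U_om, v[omega], rcond=None)   # least-squares weights
    r = np.zeros_like(v)
    r[omega] = v[omega] - U_om @ w                        # residual on observed entries
    p = U @ w                                             # current prediction of v
    sigma = np.linalg.norm(r) * np.linalg.norm(p)
    if sigma < 1e-12:                                     # nothing to correct
        return U
    t = eta * sigma
    # rank-one geodesic update; preserves orthonormality of U exactly
    step = (np.cos(t) - 1.0) * p / np.linalg.norm(p) + np.sin(t) * r / np.linalg.norm(r)
    return U + step[:, None] @ (w / np.linalg.norm(w))[None, :]
```

Each step costs O(|omega| r^2 + n r), which is where the linear-complexity, constant-memory behavior in the abstract comes from: only the current sample and the basis are ever held in memory.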
Related papers
- A Mirror Descent-Based Algorithm for Corruption-Tolerant Distributed Gradient Descent [57.64826450787237]
We show how to analyze the behavior of distributed gradient descent algorithms in the presence of adversarial corruptions.
We show how to use ideas from (lazy) mirror descent to design a corruption-tolerant distributed optimization algorithm.
Experiments based on linear regression, support vector classification, and softmax classification on the MNIST dataset corroborate our theoretical findings.
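The (lazy) mirror descent machinery this summary refers to can be sketched in its generic form, using the negative-entropy mirror map on the probability simplex. This is a textbook illustration under our own assumptions, not the paper's corruption-tolerant distributed variant:

```python
import numpy as np

def lazy_mirror_descent(grads, eta=0.1):
    """Lazy (dual-averaging) mirror descent on the probability simplex.

    With the negative-entropy mirror map, the mirror step back to the
    primal space is a softmax over the accumulated negated gradients.
    """
    z = np.zeros_like(grads[0])      # dual-space accumulator ("lazy" updates)
    x = None
    for g in grads:
        z -= eta * g                 # accumulate gradients in the dual space
        x = np.exp(z - z.max())      # mirror map: softmax, stabilized
        x /= x.sum()
    return x                         # final primal iterate on the simplex
```

The corruption-tolerant algorithm in the paper builds on this scheme by making the aggregation of worker gradients robust before the dual update.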
arXiv Detail & Related papers (2024-07-19T08:29:12Z) - Tensor Completion via Integer Optimization [7.813563137863005]
The main challenge with the tensor completion problem is a fundamental tension between computation power and the information-theoretic sample complexity rate.
Past approaches achieve the information-theoretic rate but lack practical algorithms to compute the corresponding solution.
This paper develops a novel tensor completion algorithm that resolves this tension by achieving both provable convergence (in numerical tolerance) in a linear number of oracle steps and the information-theoretic rate.
arXiv Detail & Related papers (2024-02-06T21:44:07Z) - Tensor Completion via Leverage Sampling and Tensor QR Decomposition for
Network Latency Estimation [2.982069479212266]
Network latency estimation at large scale requires substantial computing time.
We propose a new method that is much faster and maintains high accuracy.
Numerical experiments show that our method is faster than state-of-the-art algorithms while maintaining satisfactory accuracy.
arXiv Detail & Related papers (2023-06-27T07:21:26Z) - Low-rank extended Kalman filtering for online learning of neural
networks from streaming data [71.97861600347959]
We propose an efficient online approximate Bayesian inference algorithm for estimating the parameters of a nonlinear function from a potentially non-stationary data stream.
The method is based on the extended Kalman filter (EKF), but uses a novel low-rank plus diagonal decomposition of the posterior precision matrix.
In contrast to methods based on variational inference, our method is fully deterministic, and does not require step-size tuning.
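The payoff of a low-rank-plus-diagonal posterior is that the linear solves inside each filter update can use the Woodbury identity, costing O(n r^2) instead of O(n^3). A hedged sketch of that kernel (names and shapes are illustrative assumptions, not the paper's API):

```python
import numpy as np

def woodbury_solve(d, W, v):
    """Solve (diag(d) + W @ W.T) x = v via the Woodbury identity.

    d : (n,) positive diagonal,  W : (n, r) low-rank factor,  v : (n,) rhs.
    Only an r x r system is ever solved directly.
    """
    Dinv_v = v / d
    Dinv_W = W / d[:, None]
    r = W.shape[1]
    S = np.eye(r) + W.T @ Dinv_W                     # small capacitance matrix
    return Dinv_v - Dinv_W @ np.linalg.solve(S, W.T @ Dinv_v)
```

This is the standard trick such EKF variants rely on; the paper's contribution lies in how the low-rank factor is updated deterministically from streaming data.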
arXiv Detail & Related papers (2023-05-31T03:48:49Z) - Latent Matrices for Tensor Network Decomposition and its Application to Tensor
Completion [8.301418317685906]
We propose a novel higher-order tensor decomposition model that decomposes the tensor into smaller ones and speeds up the computation of the algorithm.
Three optimization algorithms, LMTN-PAM, LMTN-SVD and LMTN-AR, have been developed and applied to the tensor-completion task.
Experimental results show that our LMTN-SVD algorithm is 3-6 times faster than the FCTN-PAM algorithm, with only a 1.8-point drop in accuracy.
arXiv Detail & Related papers (2022-10-07T08:19:50Z) - Truncated tensor Schatten p-norm based approach for spatiotemporal
traffic data imputation with complicated missing patterns [77.34726150561087]
We introduce four complicated missing patterns, including random missing and three fiber-like missing cases based on the mode-driven fibers.
Despite the nonconvexity of the objective function in our model, we derive the optimal solutions by integrating the alternating direction method of multipliers (ADMM).
arXiv Detail & Related papers (2022-05-19T08:37:56Z) - MTC: Multiresolution Tensor Completion from Partial and Coarse
Observations [49.931849672492305]
Existing completion formulations mostly rely on partial observations from a single tensor.
We propose an efficient Multi-resolution Tensor Completion model (MTC) to solve the problem.
arXiv Detail & Related papers (2021-06-14T02:20:03Z) - New Riemannian preconditioned algorithms for tensor completion via
polyadic decomposition [10.620193291237262]
These algorithms exploit a non-Euclidean metric on the product space of the factor matrices of the low-rank tensor in the polyadic decomposition form.
We prove that the proposed gradient descent algorithm globally converges to a stationary point of the tensor completion problem.
Numerical results on synthetic and real-world data suggest that the proposed algorithms are more efficient in memory and time compared to state-of-the-art algorithms.
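The non-Euclidean metric on the factor-matrix product space amounts to scaling each Euclidean gradient by the Gram matrix of the other factors. A minimal sketch for one factor of a third-order CP (polyadic) model, assuming a fully observed tensor for simplicity (the completion setting would mask the residual); all names are ours:

```python
import numpy as np

def precond_grad_step_A(T, A, B, C, lr=1.0):
    """One preconditioned gradient step on factor A of a CP model.

    The Euclidean gradient of 0.5 * ||T - [[A, B, C]]||^2 w.r.t. A is
    right-scaled by the inverse of (B^T B) * (C^T C) (Hadamard product),
    the metric these Riemannian preconditioned methods exploit.
    """
    # Khatri-Rao product C (*) B, rows indexed by (j, k) in C order
    KR = np.einsum('jr,kr->jkr', B, C).reshape(-1, B.shape[1])
    T1 = T.reshape(T.shape[0], -1)          # mode-1 unfolding of T
    G = (A @ KR.T - T1) @ KR                # Euclidean gradient w.r.t. A
    P = (B.T @ B) * (C.T @ C)               # metric / preconditioner
    return A - lr * G @ np.linalg.inv(P)
```

A useful sanity check on this metric: with lr=1 the step reduces to the exact alternating least-squares update for A, which is one intuition for why the preconditioning accelerates plain gradient descent.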
arXiv Detail & Related papers (2021-01-26T22:11:06Z) - Single-Timescale Stochastic Nonconvex-Concave Optimization for Smooth
Nonlinear TD Learning [145.54544979467872]
We propose two single-timescale single-loop algorithms that require only one data point each step.
Our results are expressed in the form of simultaneous primal- and dual-side convergence.
arXiv Detail & Related papers (2020-08-23T20:36:49Z) - Convolutional Tensor-Train LSTM for Spatio-temporal Learning [116.24172387469994]
We propose a higher-order LSTM model that can efficiently learn long-term correlations in the video sequence.
This is accomplished through a novel tensor train module that performs prediction by combining convolutional features across time.
Our results achieve state-of-the-art performance across a wide range of applications and datasets.
arXiv Detail & Related papers (2020-02-21T05:00:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.