Scalable and Robust Tensor Ring Decomposition for Large-scale Data
- URL: http://arxiv.org/abs/2305.09044v1
- Date: Mon, 15 May 2023 22:08:47 GMT
- Title: Scalable and Robust Tensor Ring Decomposition for Large-scale Data
- Authors: Yicong He and George K. Atia
- Abstract summary: We propose a scalable and robust TR decomposition algorithm capable of handling large-scale tensor data with missing entries and gross corruptions.
We first develop a novel auto-weighted steepest descent method that can adaptively fill the missing entries and identify the outliers during the decomposition process.
- Score: 12.02023514105999
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Tensor ring (TR) decomposition has recently received increased attention due
to its superior expressive performance for high-order tensors. However, the
applicability of traditional TR decomposition algorithms to real-world
applications is hindered by prevalent large data sizes, missing entries, and
corruption with outliers. In this work, we propose a scalable and robust TR
decomposition algorithm capable of handling large-scale tensor data with
missing entries and gross corruptions. We first develop a novel auto-weighted
steepest descent method that can adaptively fill the missing entries and
identify the outliers during the decomposition process. Further, taking
advantage of the tensor ring model, we develop a novel fast Gram matrix
computation (FGMC) approach and a randomized subtensor sketching (RStS)
strategy which yield significant reduction in storage and computational
complexity. Experimental results demonstrate that the proposed method
outperforms existing TR decomposition methods in the presence of outliers, and
runs significantly faster than existing robust tensor completion algorithms.
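The completion setting the abstract describes can be made concrete with a minimal NumPy sketch: represent a tensor by a ring of third-order cores, evaluate each entry as the trace of a cyclic product of core slices, and run plain steepest descent on the residual over observed entries only. This is an illustrative toy under made-up sizes, not the paper's method: it has no auto-weighting, no outlier handling, and none of the FGMC or RStS accelerations.

```python
import numpy as np

def tr_entry(cores, idx):
    # One tensor entry: trace(G1[:, i1, :] @ G2[:, i2, :] @ ... @ GN[:, iN, :]).
    M = cores[0][:, idx[0], :]
    for k in range(1, len(cores)):
        M = M @ cores[k][:, idx[k], :]
    return np.trace(M)

def tr_reconstruct(cores, shape):
    # Dense reconstruction from TR cores (fine for small toy tensors only).
    X = np.empty(shape)
    for idx in np.ndindex(*shape):
        X[idx] = tr_entry(cores, idx)
    return X

def masked_loss(cores, X, mask):
    return 0.5 * np.sum((mask * (tr_reconstruct(cores, X.shape) - X)) ** 2)

rng = np.random.default_rng(0)
shape, rank = (5, 6, 7), 3

# Synthetic ground-truth TR tensor; roughly 70% of entries observed.
true_cores = [rng.standard_normal((rank, n, rank)) / rank for n in shape]
X = tr_reconstruct(true_cores, shape)
mask = rng.random(shape) < 0.7

# Plain steepest descent on randomly initialized cores.
cores = [0.3 * rng.standard_normal((rank, n, rank)) for n in shape]
lr, n_iter = 0.02, 150
loss_init = masked_loss(cores, X, mask)
for _ in range(n_iter):
    R = mask * (tr_reconstruct(cores, shape) - X)  # residual on observed entries
    grads = []
    for k in range(len(cores)):
        g = np.zeros_like(cores[k])
        for idx in zip(*np.nonzero(mask)):
            # d trace(cyclic product)/dG_k[:, i_k, :] is the transpose of the
            # product of the remaining slices taken in cyclic order.
            P = np.eye(rank)
            for j in list(range(k + 1, len(cores))) + list(range(k)):
                P = P @ cores[j][:, idx[j], :]
            g[:, idx[k], :] += R[idx] * P.T
        grads.append(g)
    for k, g in enumerate(grads):
        cores[k] -= lr * g
loss_final = masked_loss(cores, X, mask)
```

In the paper's terms, the auto-weighted scheme would replace the uniform 0/1 mask with adaptive weights that down-weight outlier entries, and RStS would replace the full-tensor gradient loop with sketched subtensors.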
Related papers
- A Sample Efficient Alternating Minimization-based Algorithm For Robust Phase Retrieval [56.67706781191521]
In this work, we present a robust phase retrieval problem where the task is to recover an unknown signal from magnitude-only measurements corrupted by outliers.
Our proposed oracle avoids the need for a computationally expensive spectral initialization, using a simple gradient descent step even in the presence of outliers.
arXiv Detail & Related papers (2024-09-07T06:37:23Z)
- A Mirror Descent-Based Algorithm for Corruption-Tolerant Distributed Gradient Descent [57.64826450787237]
We analyze the behavior of distributed gradient descent algorithms in the presence of adversarial corruptions.
We then use ideas from (lazy) mirror descent to design a corruption-tolerant distributed optimization algorithm.
Experiments based on linear regression, support vector classification, and softmax classification on the MNIST dataset corroborate our theoretical findings.
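The corruption-tolerant distributed setting can be illustrated with a standard robust-aggregation baseline: each worker sends a gradient, and the server takes a coordinate-wise trimmed mean so a few adversarially scaled gradients cannot steer the update. This is a generic sketch on least-squares regression, not the paper's mirror-descent algorithm; the worker count, corruption model, and step size are all made up.

```python
import numpy as np

def trimmed_mean(grads, trim):
    # Coordinate-wise trimmed mean over worker gradients (one row per worker).
    G = np.sort(np.asarray(grads), axis=0)
    return G[trim:len(grads) - trim].mean(axis=0)

rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0, 0.5])
X = rng.standard_normal((200, 3))
y = X @ w_true  # noiseless linear model for the toy example

def worker_grad(w, Xi, yi, corrupt=False):
    g = 2 * Xi.T @ (Xi @ w - yi) / len(yi)
    return -5.0 * g if corrupt else g  # adversary flips and scales its gradient

# 10 workers, the first 2 adversarial; plain averaging would be pulled off course.
splits_X = np.array_split(X, 10)
splits_y = np.array_split(y, 10)
w = np.zeros(3)
for _ in range(300):
    grads = [worker_grad(w, Xi, yi, corrupt=(i < 2))
             for i, (Xi, yi) in enumerate(zip(splits_X, splits_y))]
    w -= 0.05 * trimmed_mean(grads, trim=2)
```

Trimming two values from each end per coordinate discards the adversarial extremes, so the surviving average still points downhill and the iterate converges to `w_true`.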
arXiv Detail & Related papers (2024-07-19T08:29:12Z)
- Robust Data Clustering with Outliers via Transformed Tensor Low-Rank Representation [4.123899820318987]
Tensor low-rank representation (TLRR) has become a popular tool for tensor data recovery and clustering.
This paper develops an outlier-robust tensor low-rank representation (OR-TLRR) method.
OR-TLRR provides outlier detection and tensor data clustering simultaneously based on the t-SVD framework.
arXiv Detail & Related papers (2023-07-18T08:11:08Z)
- Towards Efficient and Accurate Approximation: Tensor Decomposition Based on Randomized Block Krylov Iteration [27.85452105378894]
This work designs an rBKI-based Tucker decomposition (rBKI-TK) for accurate approximation, together with a hierarchical tensor ring decomposition based on rBKI-TK for efficient compression of large-scale data.
Numerical experiments demonstrate the efficiency, accuracy, and scalability of the proposed methods in both data compression and denoising.
arXiv Detail & Related papers (2022-11-27T13:45:28Z)
- Fast and Provable Tensor Robust Principal Component Analysis via Scaled Gradient Descent [30.299284742925852]
This paper tackles tensor robust principal component analysis (RPCA), which aims to recover a low-rank tensor from observations contaminated by sparse corruptions.
We show that the proposed algorithm achieves better and more scalable performance than state-of-the-art matrix and tensor RPCA algorithms.
arXiv Detail & Related papers (2022-06-18T04:01:32Z)
- Truncated tensor Schatten p-norm based approach for spatiotemporal traffic data imputation with complicated missing patterns [77.34726150561087]
We introduce four complicated missing patterns, including random missing and three fiber-like missing cases defined by the mode-driven fibers.
Despite the nonconvexity of the objective function in our model, we derive the optimal solutions by integrating the alternating direction method of multipliers (ADMM).
arXiv Detail & Related papers (2022-05-19T08:37:56Z)
- Distributed Dynamic Safe Screening Algorithms for Sparse Regularization [73.85961005970222]
We propose a new distributed dynamic safe screening (DDSS) method for sparsity regularized models and apply it on shared-memory and distributed-memory architectures, respectively.
We prove that the proposed method achieves a linear convergence rate with lower overall complexity and can eliminate almost all the inactive features in a finite number of iterations almost surely.
arXiv Detail & Related papers (2022-04-23T02:45:55Z)
- Fast Hypergraph Regularized Nonnegative Tensor Ring Factorization Based on Low-Rank Approximation [19.43953011889585]
Nonnegative tensor ring (NTR) decomposition equipped with manifold learning has become a promising model to exploit the multi-dimensional structure.
In this paper, we introduce a hypergraph into the NTR framework to further enhance feature extraction, yielding HGNTR.
To reduce the computational complexity and suppress noise, we apply a low-rank approximation trick to accelerate HGNTR.
arXiv Detail & Related papers (2021-09-06T09:24:51Z)
- Augmented Tensor Decomposition with Stochastic Optimization [46.16865811396394]
Real-world tensor data are usually high-order and have large dimensions with millions or billions of entries.
It is expensive to decompose the whole tensor with traditional algorithms.
This paper proposes augmented tensor decomposition, which effectively incorporates data augmentations to boost downstream classification.
arXiv Detail & Related papers (2021-06-15T06:29:05Z)
- Scalable Variational Gaussian Processes via Harmonic Kernel Decomposition [54.07797071198249]
We introduce a new scalable variational Gaussian process approximation which provides a high fidelity approximation while retaining general applicability.
We demonstrate that, on a range of regression and classification problems, our approach can exploit input space symmetries such as translations and reflections.
Notably, our approach achieves state-of-the-art results on CIFAR-10 among pure GP models.
arXiv Detail & Related papers (2021-06-10T18:17:57Z)
- Graph Regularized Nonnegative Tensor Ring Decomposition for Multiway Representation Learning [38.70369173200596]
Nonnegative tensor ring (NTR) decomposition and graph regularized NTR (GNTR) decomposition are proposed.
The proposed algorithms can extract parts-based bases with rich colors and lines from tensor objects, providing more interpretable and meaningful representations.
arXiv Detail & Related papers (2020-10-12T12:54:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.