Efficient Tensor Robust PCA under Hybrid Model of Tucker and Tensor Train
- URL: http://arxiv.org/abs/2112.10771v1
- Date: Mon, 20 Dec 2021 01:15:45 GMT
- Title: Efficient Tensor Robust PCA under Hybrid Model of Tucker and Tensor Train
- Authors: Yuning Qiu, Guoxu Zhou, Zhenhao Huang, Qibin Zhao, Shengli Xie
- Abstract summary: We propose an efficient tensor robust principal component analysis (TRPCA) under a hybrid model of Tucker and TT.
Specifically, in theory we reveal that TT nuclear norm (TTNN) of the original big tensor can be equivalently converted to that of a much smaller tensor via a Tucker compression format.
Numerical experiments on both synthetic and real-world tensor data verify the superiority of the proposed model.
- Score: 33.33426557160802
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Tensor robust principal component analysis (TRPCA) is a fundamental model in
machine learning and computer vision. Recently, the tensor train (TT) decomposition
has been shown to be effective in capturing the global low-rank correlation for
tensor recovery tasks. However, due to the large-scale tensor data in
real-world applications, previous TRPCA models often suffer from high
computational complexity. In this letter, we propose an efficient TRPCA under
a hybrid model of Tucker and TT. Specifically, we show in theory that the TT
nuclear norm (TTNN) of the original big tensor can be equivalently converted to
that of a much smaller tensor via a Tucker compression format, thereby
significantly reducing the computational cost of singular value decomposition
(SVD). Numerical experiments on both synthetic and real-world tensor data
verify the superiority of the proposed model.
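To make the central claim concrete: a TTNN-regularized TRPCA typically splits an observed tensor X into a low-rank part L and a sparse part S by solving min_{L,S} TTNN(L) + lambda * ||S||_1 subject to X = L + S, and the theorem above says the expensive TTNN term can be evaluated on a much smaller Tucker core instead of on L itself. Below is a minimal NumPy sketch of that equivalence, not the authors' implementation: it uses uniform weights over the sequential unfoldings (the paper's TTNN may weight each term), and all tensor sizes and the rank threshold are illustrative assumptions.

```python
import numpy as np

def tt_nuclear_norm(x):
    """TT nuclear norm with uniform weights (an assumption; the paper's
    TTNN may weight each term): the sum of nuclear norms of the sequential
    unfoldings X_[k] of shape (n_1*...*n_k, n_{k+1}*...*n_d)."""
    dims = x.shape
    total = 0.0
    for k in range(1, len(dims)):
        mat = x.reshape(int(np.prod(dims[:k])), -1)
        total += np.linalg.norm(mat, ord='nuc')  # nuclear norm = sum of SVs
    return total

def hosvd_core(x, tol=1e-10):
    """Orthonormal Tucker (HOSVD) compression: project every mode onto the
    column space of its unfolding and return the small core G, so that
    X = G x_1 U_1 ... x_d U_d with column-orthonormal factors U_k."""
    g = x
    for mode in range(x.ndim):
        unfold = np.moveaxis(g, mode, 0).reshape(g.shape[mode], -1)
        u, s, _ = np.linalg.svd(unfold, full_matrices=False)
        r = max(1, int(np.sum(s > tol)))       # numerical mode-k rank
        proj = u[:, :r].T @ unfold             # compress this mode to rank r
        rest = tuple(np.delete(g.shape, mode))
        g = np.moveaxis(proj.reshape((r,) + rest), 0, mode)
    return g

# Synthetic low-multilinear-rank tensor (sizes are illustrative): the TTNN
# of the 3x3x3 core matches the TTNN of the full 20x20x20 tensor, while
# its SVDs are far cheaper.
rng = np.random.default_rng(0)
core = rng.standard_normal((3, 3, 3))
factors = [np.linalg.qr(rng.standard_normal((20, 3)))[0] for _ in range(3)]
x = np.einsum('abc,ia,jb,kc->ijk', core, *factors)
print(tt_nuclear_norm(x), tt_nuclear_norm(hosvd_core(x)))  # nearly equal
```

Because each Tucker factor has orthonormal columns, every sequential unfolding of the full tensor shares its singular values with the corresponding unfolding of the core, so the nuclear norms, and hence the TTNN, agree while the SVDs run on matrices whose sides are products of the small Tucker ranks rather than of the original mode sizes.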
Related papers
- Scalable CP Decomposition for Tensor Learning using GPU Tensor Cores [47.87810316745786]
We propose a compression-based tensor decomposition framework, namely the exascale-tensor, to support exascale tensor decomposition.
Compared to the baselines, exascale-tensor supports 8,000x larger tensors and achieves speedups of up to 6.95x.
We also apply our method to two real-world applications, including gene analysis and tensor layer neural networks.
arXiv Detail & Related papers (2023-11-22T21:04:59Z)
- Deep Unfolded Tensor Robust PCA with Self-supervised Learning [21.710932587432396]
We describe a fast and simple self-supervised model for tensor RPCA using deep unfolding.
Our model expunges the need for ground truth labels while maintaining competitive or even greater performance.
We demonstrate these claims on a mix of synthetic data and real-world tasks.
arXiv Detail & Related papers (2022-12-21T20:34:42Z)
- Tensor Robust PCA with Nonconvex and Nonlocal Regularization [16.15616361268236]
We develop a nonconvex and nonlocal TRPCA (N-TRPCA) model for low-rank data recovery.
We show that the proposed N-TRPCA outperforms existing methods in visual data recovery.
arXiv Detail & Related papers (2022-11-04T12:19:39Z)
- Truncated tensor Schatten p-norm based approach for spatiotemporal traffic data imputation with complicated missing patterns [77.34726150561087]
We introduce four complicated missing patterns, including random missing and three fiber-like missing cases according to the mode-driven fibers.
Despite the nonconvexity of the objective function in our model, we derive the optimal solutions by integrating the alternating direction method of multipliers (ADMM) with data imputation.
arXiv Detail & Related papers (2022-05-19T08:37:56Z)
- Semi-tensor Product-based Tensor Decomposition for Neural Network Compression [57.95644775091316]
This paper generalizes the classical matrix product-based mode product to a semi-tensor mode product.
As it permits the connection of two factors with different dimensionality, more flexible and compact tensor decompositions can be obtained.
arXiv Detail & Related papers (2021-09-30T15:18:14Z)
- MTC: Multiresolution Tensor Completion from Partial and Coarse Observations [49.931849672492305]
Existing completion formulations mostly rely on partial observations from a single tensor.
We propose an efficient Multi-resolution Completion model (MTC) to solve the problem.
arXiv Detail & Related papers (2021-06-14T02:20:03Z)
- Rank-R FNN: A Tensor-Based Learning Model for High-Order Data Classification [69.26747803963907]
Rank-R Feedforward Neural Network (FNN) is a tensor-based nonlinear learning model that imposes Canonical/Polyadic decomposition on its parameters.
It handles inputs as multilinear arrays, bypassing the need for vectorization, and can thus fully exploit the structural information along every data dimension.
We establish the universal approximation and learnability properties of Rank-R FNN, and we validate its performance on real-world hyperspectral datasets.
arXiv Detail & Related papers (2021-04-11T16:37:32Z)
- Spectral Tensor Train Parameterization of Deep Learning Layers [136.4761580842396]
We study low-rank parameterizations of weight matrices with embedded spectral properties in the Deep Learning context.
We show the effects of neural network compression in the classification setting, and of both compression and improved training stability in the generative adversarial training setting.
arXiv Detail & Related papers (2021-03-07T00:15:44Z)
- Low-Rank and Sparse Enhanced Tucker Decomposition for Tensor Completion [3.498620439731324]
We introduce a unified low-rank and sparse enhanced Tucker decomposition model for tensor completion.
Our model possesses a sparse regularization term to promote a sparse core tensor, which is beneficial for tensor data compression.
Remarkably, our model can deal with different types of real-world data sets, since it exploits the potential periodicity and inherent correlation properties that appear in tensors.
arXiv Detail & Related papers (2020-10-01T12:45:39Z)
- Robust Tensor Principal Component Analysis: Exact Recovery via Deterministic Model [5.414544833902815]
This paper proposes a new method to analyze robust tensor principal component analysis (RTPCA).
It is based on the recently developed tensor-tensor product and tensor singular value decomposition (t-SVD).
arXiv Detail & Related papers (2020-08-05T16:26:10Z)
- Hybrid Tensor Decomposition in Neural Network Compression [13.146051056642904]
We introduce the hierarchical Tucker (HT) decomposition method to investigate its capability in neural network compression.
We experimentally discover that the HT format has better performance on compressing weight matrices, while the TT format is more suited for compressing convolutional kernels.
arXiv Detail & Related papers (2020-06-29T11:16:22Z)
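Several of the entries above, like the letter itself, rely on the tensor train (TT) format. For background, here is a minimal sketch of TT-SVD, the standard sequential-SVD construction of a TT decomposition; the tensor sizes and truncation tolerance are illustrative assumptions rather than settings taken from any paper above.

```python
import numpy as np

def tt_svd(x, tol=1e-10):
    """TT-SVD: sequential truncated SVDs turn a d-way tensor into TT cores
    G_k of shape (r_{k-1}, n_k, r_k), with boundary ranks r_0 = r_d = 1."""
    dims = x.shape
    cores, rank, mat = [], 1, x
    for n in dims[:-1]:
        mat = mat.reshape(rank * n, -1)
        u, s, vt = np.linalg.svd(mat, full_matrices=False)
        new_rank = max(1, int(np.sum(s > tol)))      # numerical TT rank
        cores.append(u[:, :new_rank].reshape(rank, n, new_rank))
        mat = s[:new_rank, None] * vt[:new_rank]     # remainder for later modes
        rank = new_rank
    cores.append(mat.reshape(rank, dims[-1], 1))
    return cores

# Round-trip check on a small random tensor (sizes are illustrative):
# with no truncation, contracting the cores reconstructs x exactly.
rng = np.random.default_rng(1)
x = rng.standard_normal((4, 5, 6))
cores = tt_svd(x)
full = cores[0]
for g in cores[1:]:
    full = np.tensordot(full, g, axes=1)  # contract adjacent TT ranks
print(np.allclose(full.reshape(x.shape), x))  # True
```

Contracting the cores in order recovers the tensor, and truncating the per-step SVDs at a tolerance (or a fixed rank budget) yields the low-TT-rank approximations these papers build on.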