Robust Tensor CUR Decompositions: Rapid Low-Tucker-Rank Tensor Recovery with Sparse Corruption
- URL: http://arxiv.org/abs/2305.04080v1
- Date: Sat, 6 May 2023 16:02:37 GMT
- Title: Robust Tensor CUR Decompositions: Rapid Low-Tucker-Rank Tensor Recovery with Sparse Corruption
- Authors: HanQin Cai, Zehan Chao, Longxiu Huang, and Deanna Needell
- Abstract summary: We develop a fast algorithm called Robust Tensor CUR Decompositions (RTCUR) for large-scale tensor robust principal component analysis problems.
We show the effectiveness and computational advantages of RTCUR against state-of-the-art methods.
- Score: 8.738540032356305
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study the tensor robust principal component analysis (TRPCA) problem, a
tensorial extension of matrix robust principal component analysis (RPCA), that
aims to split the given tensor into an underlying low-rank component and a
sparse outlier component. This work proposes a fast algorithm, called Robust
Tensor CUR Decompositions (RTCUR), for large-scale non-convex TRPCA problems
under the Tucker rank setting. RTCUR is developed within a framework of
alternating projections that projects between the set of low-rank tensors and
the set of sparse tensors. We utilize the recently developed tensor CUR
decomposition to substantially reduce the computational complexity in each
projection. In addition, we develop four variants of RTCUR for different
application settings. We demonstrate the effectiveness and computational
advantages of RTCUR against state-of-the-art methods on both synthetic and
real-world datasets.
Related papers
- Robust Stochastically-Descending Unrolled Networks [85.6993263983062]
Deep unrolling is an emerging learning-to-optimize method that unrolls a truncated iterative algorithm in the layers of a trainable neural network.
However, convergence guarantees and generalizability of the unrolled networks remain open theoretical problems.
We numerically assess unrolled architectures trained under the proposed constraints in two different applications.
arXiv Detail & Related papers (2023-12-25T18:51:23Z)
- Scalable and Robust Tensor Ring Decomposition for Large-scale Data [12.02023514105999]
We propose a scalable and robust TR decomposition algorithm capable of handling large-scale tensor data with missing entries and gross corruptions.
We first develop a novel auto-weighted steepest descent method that can adaptively fill the missing entries and identify the outliers during the decomposition process.
arXiv Detail & Related papers (2023-05-15T22:08:47Z)
- Fast and Provable Tensor Robust Principal Component Analysis via Scaled Gradient Descent [30.299284742925852]
This paper tackles tensor robust principal component analysis (RPCA), which aims to recover a low-rank tensor from observations contaminated by sparse corruptions.
We show that the proposed algorithm achieves better and more scalable performance than state-of-the-art matrix and tensor RPCA algorithms.
arXiv Detail & Related papers (2022-06-18T04:01:32Z)
- Riemannian CUR Decompositions for Robust Principal Component Analysis [4.060731229044571]
Robust Principal Component Analysis (PCA) has received massive attention in recent years.
This paper proposes Riemannian CUR, a robust PCA decomposition algorithm.
It is able to tolerate a significant amount of outliers, and is comparable to Accelerated Projections, which has high outlier tolerance but worse computational complexity than the proposed method.
arXiv Detail & Related papers (2022-06-17T22:58:09Z)
- Truncated tensor Schatten p-norm based approach for spatiotemporal traffic data imputation with complicated missing patterns [77.34726150561087]
We introduce four complicated missing patterns, including entry-wise missing and three fiber-like missing cases defined according to the mode-driven fibers.
Despite the nonconvexity of the objective function in our model, we derive the optimal solutions by integrating the alternating direction method of multipliers (ADMM).
arXiv Detail & Related papers (2022-05-19T08:37:56Z)
- TensoRF: Tensorial Radiance Fields [74.16791688888081]
We present TensoRF, a novel approach to model and reconstruct radiance fields.
We model the radiance field of a scene as a 4D tensor, which represents a 3D voxel grid with per-voxel multi-channel features.
We show that TensoRF with CP decomposition achieves fast reconstruction (30 min) with better rendering quality and even a smaller model size (4 MB) compared to NeRF.
arXiv Detail & Related papers (2022-03-17T17:59:59Z)
- Robust lEarned Shrinkage-Thresholding (REST): Robust unrolling for sparse recovery [87.28082715343896]
We consider deep neural networks for solving inverse problems that are robust to forward model mis-specifications.
We design a new robust deep neural network architecture by applying algorithm unfolding techniques to a robust version of the underlying recovery problem.
The proposed REST network is shown to outperform state-of-the-art model-based and data-driven algorithms in both compressive sensing and radar imaging problems.
arXiv Detail & Related papers (2021-10-20T06:15:45Z)
- Fast Robust Tensor Principal Component Analysis via Fiber CUR Decomposition [8.821527277034336]
We study the problem of tensor robust principal component analysis (TRPCA), which aims to separate an underlying low-multilinear-rank tensor and a sparse outlier tensor from their sum.
In this work, we propose a fast non-convex decomposition algorithm, coined Robust CUR, for large-scale TRPCA problems.
arXiv Detail & Related papers (2021-08-23T23:49:40Z)
- Efficient Micro-Structured Weight Unification and Pruning for Neural Network Compression [56.83861738731913]
Deep Neural Network (DNN) models are essential for practical applications, especially for resource limited devices.
Previous unstructured or structured weight pruning methods rarely translate into real inference acceleration.
We propose a generalized weight unification framework at a hardware-compatible micro-structured level to achieve a high degree of both compression and acceleration.
arXiv Detail & Related papers (2021-06-15T17:22:59Z)
- Mode-wise Tensor Decompositions: Multi-dimensional Generalizations of CUR Decompositions [9.280330114137778]
We study the characterization, perturbation analysis, and an efficient sampling strategy for two primary tensor CUR approximations, namely Chidori and Fiber CUR.
arXiv Detail & Related papers (2021-03-19T22:00:21Z)
- Spectral Tensor Train Parameterization of Deep Learning Layers [136.4761580842396]
We study low-rank parameterizations of weight matrices with embedded spectral properties in the Deep Learning context.
We show the effects of compression in the classification setting, and of both compression and improved training stability in the generative adversarial training setting.
arXiv Detail & Related papers (2021-03-07T00:15:44Z)
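Several entries above (Riemannian CUR, Fiber CUR, and the mode-wise Chidori/Fiber CUR decompositions) build on the classical CUR, or skeleton, decomposition. The following minimal matrix-case sketch, with illustrative sizes and sample counts, shows the core fact these methods exploit: a rank-r matrix is recovered exactly from a few sampled rows and columns whenever the sampled intersection has the same rank as the full matrix.

```python
import numpy as np

rng = np.random.default_rng(1)

# A rank-r matrix; the Chidori and Fiber CUR decompositions generalize
# this skeleton construction to tensor unfoldings.
m, n, r = 100, 80, 3
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))

# Sample slightly more than r rows and columns; modest oversampling
# improves robustness (O(r log n) samples suffice with high probability
# for incoherent matrices).
I = rng.choice(m, size=8, replace=False)
J = rng.choice(n, size=8, replace=False)

C = A[:, J]                          # sampled columns
R = A[I, :]                          # sampled rows
U = np.linalg.pinv(A[np.ix_(I, J)])  # pseudo-inverse of the intersection

# Skeleton approximation; exact when rank(A[I, J]) == rank(A), which
# holds almost surely for random Gaussian factors.
A_cur = C @ U @ R
rel_err = np.linalg.norm(A - A_cur) / np.linalg.norm(A)
```

Because only a few rows and columns are ever touched, the low-rank projection inside an alternating-projections loop becomes far cheaper than a full SVD, which is the source of the speedups reported by the CUR-based methods listed here.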
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.