Fast and Provable Tensor Robust Principal Component Analysis via Scaled
Gradient Descent
- URL: http://arxiv.org/abs/2206.09109v1
- Date: Sat, 18 Jun 2022 04:01:32 GMT
- Title: Fast and Provable Tensor Robust Principal Component Analysis via Scaled
Gradient Descent
- Authors: Harry Dong, Tian Tong, Cong Ma, Yuejie Chi
- Abstract summary: This paper tackles tensor robust principal component analysis (RPCA)
It aims to recover a low-rank tensor from its observations contaminated by sparse corruptions.
We show that the proposed algorithm achieves better and more scalable performance than state-of-the-art matrix and tensor RPCA algorithms.
- Score: 30.299284742925852
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: An increasing number of data science and machine learning problems rely on
computation with tensors, which better capture the multi-way relationships and
interactions of data than matrices. When tapping into this critical advantage,
a key challenge is to develop computationally efficient and provably correct
algorithms for extracting useful information from tensor data that are
simultaneously robust to corruptions and ill-conditioning. This paper tackles
tensor robust principal component analysis (RPCA), which aims to recover a
low-rank tensor from its observations contaminated by sparse corruptions, under
the Tucker decomposition. To minimize the computation and memory footprints, we
propose to directly recover the low-dimensional tensor factors -- starting from
a tailored spectral initialization -- via scaled gradient descent (ScaledGD),
coupled with an iteration-varying thresholding operation to adaptively remove
the impact of corruptions. Theoretically, we establish that the proposed
algorithm converges linearly to the true low-rank tensor at a constant rate
that is independent of its condition number, as long as the level of
corruptions is not too large. Empirically, we demonstrate that the proposed
algorithm achieves better and more scalable performance than state-of-the-art
matrix and tensor RPCA algorithms through synthetic experiments and real-world
applications.
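The core recipe — spectral initialization, scaled (preconditioned) gradient updates on the low-rank factors, and a threshold that shrinks across iterations to peel off corruptions — is easiest to see in the two-factor matrix case. The sketch below is an illustrative matrix analog under assumed parameter choices (step size, threshold decay), not the paper's tuned tensor algorithm, which operates on Tucker factors.

```python
import numpy as np

def scaled_gd_rpca(Y, r, eta=0.5, decay=0.95, iters=300):
    """Illustrative matrix analog of ScaledGD-style RPCA with an
    iteration-varying hard threshold (all parameters are assumptions)."""
    # Spectral initialization: balanced factors from a truncated SVD
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    L = U[:, :r] * np.sqrt(s[:r])
    R = Vt[:r, :].T * np.sqrt(s[:r])
    zeta = np.abs(Y).max()                 # initial threshold level
    for _ in range(iters):
        zeta *= decay                      # iteration-varying threshold
        residual = Y - L @ R.T
        # keep only large residual entries as the sparse component
        S = np.where(np.abs(residual) > zeta, residual, 0.0)
        E = L @ R.T + S - Y
        # scaled steps: precondition by (R^T R)^{-1} and (L^T L)^{-1},
        # which removes the dependence on the condition number
        L, R = (L - eta * E @ R @ np.linalg.inv(R.T @ R),
                R - eta * E.T @ L @ np.linalg.inv(L.T @ L))
    return L @ R.T, S
```

The preconditioners cost only r x r inversions per step, which is what keeps the per-iteration overhead negligible relative to plain gradient descent.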
Related papers
- A Sample Efficient Alternating Minimization-based Algorithm For Robust Phase Retrieval [56.67706781191521]
In this work, we present a robust phase retrieval problem where the task is to recover an unknown signal from outlier-corrupted magnitude measurements.
The proposed method avoids the need for a computationally expensive spectral initialization, using a simple gradient step while handling outliers.
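As a rough illustration of alternating minimization for real-valued phase retrieval — a textbook sketch, not the paper's sample-efficient algorithm — one can alternate between estimating the measurement signs and solving a least-squares problem:

```python
import numpy as np

def altmin_phase_retrieval(A, y, iters=100, seed=0):
    """Recover x from y = |Ax| by alternating between the sign estimate
    c = sign(Ax) and a least-squares refit of x. Each half-step cannot
    increase the residual || |Ax| - y ||, so the iteration is monotone."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(A.shape[1])      # random initialization
    for _ in range(iters):
        c = np.sign(A @ x)                   # current phase (sign) estimate
        x, *_ = np.linalg.lstsq(A, c * y, rcond=None)  # refit the signal
    return x
```

With real Gaussian measurements the recovered signal is determined only up to a global sign flip.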
arXiv Detail & Related papers (2024-09-07T06:37:23Z) - A Mirror Descent-Based Algorithm for Corruption-Tolerant Distributed Gradient Descent [57.64826450787237]
We show how to analyze the behavior of distributed gradient descent algorithms in the presence of adversarial corruptions.
We show how to use ideas from (lazy) mirror descent to design a corruption-tolerant distributed optimization algorithm.
Experiments based on linear regression, support vector classification, and softmax classification on the MNIST dataset corroborate our theoretical findings.
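The paper's algorithm is mirror-descent based; as a simpler illustration of the same setting — distributed gradient descent where some workers report corrupted gradients — one can robustly aggregate worker gradients with a coordinate-wise trimmed mean (a swapped-in aggregation rule, not the paper's method):

```python
import numpy as np

def trimmed_mean(grads, k):
    """Coordinate-wise trimmed mean: drop the k largest and k smallest
    values in each coordinate before averaging."""
    g = np.sort(np.stack(grads), axis=0)
    return g[k:len(grads) - k].mean(axis=0)

def robust_distributed_gd(worker_grads, x0, eta=0.1, k=1, iters=100):
    """Gradient descent on an aggregate of per-worker gradients; tolerates
    up to k adversarially corrupted workers per coordinate."""
    x = x0.astype(float).copy()
    for _ in range(iters):
        grads = [g(x) for g in worker_grads]   # one gradient per worker
        x -= eta * trimmed_mean(grads, k)
    return x
```

With honest majorities, the extreme (possibly adversarial) gradient values are discarded each round, so a single corrupted worker cannot steer the iterates.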
arXiv Detail & Related papers (2024-07-19T08:29:12Z) - Low-Tubal-Rank Tensor Recovery via Factorized Gradient Descent [22.801592340422157]
We propose an efficient and effective low-tubal-rank tensor recovery method based on a factorization procedure akin to the Burer-Monteiro method.
We provide rigorous theoretical analysis to ensure the convergence of the factorized gradient descent (FGD) method under both noise-free and noisy situations.
Our approach exhibits superior performance in multiple scenarios, in terms of faster computational speed and smaller convergence error.
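Setting the tubal-rank machinery aside, the factorization idea can be sketched in the matrix case: parametrize X = L R^T (Burer-Monteiro style) and run gradient descent directly on the factors. The step size and iteration count below are illustrative assumptions.

```python
import numpy as np

def factored_gd(Y, r, eta=0.005, iters=300):
    """Recover a rank-r matrix from noisy observations Y by gradient
    descent on the factors of X = L @ R.T; a matrix-case sketch of
    factorized gradient descent."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    L = U[:, :r] * np.sqrt(s[:r])          # balanced spectral initialization
    R = Vt[:r, :].T * np.sqrt(s[:r])
    for _ in range(iters):
        E = L @ R.T - Y                    # gradient of 0.5 * ||LR^T - Y||_F^2
        L, R = L - eta * E @ R, R - eta * E.T @ L
    return L @ R.T
```

Storing and updating the thin factors L and R (rather than the full matrix) is what gives factorization methods their speed and memory advantage.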
arXiv Detail & Related papers (2024-01-22T13:30:11Z) - Scalable and Robust Tensor Ring Decomposition for Large-scale Data [12.02023514105999]
We propose a scalable and robust TR decomposition algorithm capable of handling large-scale tensor data with missing entries and gross corruptions.
We first develop a novel auto-weighted steepest descent method that can adaptively fill the missing entries and identify the outliers during the decomposition process.
arXiv Detail & Related papers (2023-05-15T22:08:47Z) - Fast Learnings of Coupled Nonnegative Tensor Decomposition Using Optimal Gradient and Low-rank Approximation [7.265645216663691]
We introduce a novel coupled nonnegative CANDECOMP/PARAFAC decomposition algorithm optimized by the alternating proximal gradient method (CoNCPD-APG).
By integrating low-rank approximation into CoNCPD-APG, the proposed algorithm can significantly decrease the computational burden without compromising decomposition quality.
arXiv Detail & Related papers (2023-02-10T08:49:36Z) - Efficient Dataset Distillation Using Random Feature Approximation [109.07737733329019]
We propose a novel algorithm that uses a random feature approximation (RFA) of the Neural Network Gaussian Process (NNGP) kernel.
Our algorithm provides at least a 100-fold speedup over KIP and can run on a single GPU.
Our new method, termed RFA Distillation (RFAD), performs competitively with KIP and other dataset condensation algorithms in accuracy over a range of large-scale datasets.
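Random feature approximations of kernels follow the Rahimi-Recht recipe; the paper's specific construction approximates the NNGP kernel, but the generic sketch below for the RBF kernel shows the idea: a D-dimensional random feature map whose inner products approximate kernel evaluations.

```python
import numpy as np

def random_fourier_features(X, D, gamma, seed=0):
    """Rahimi-Recht random Fourier features: Z @ Z.T approximates the RBF
    kernel K[i, j] = exp(-gamma * ||X[i] - X[j]||^2). Generic sketch, not
    the paper's NNGP construction."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # sample frequencies from the kernel's spectral density, N(0, 2*gamma*I)
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, D))
    b = rng.uniform(0.0, 2.0 * np.pi, size=D)
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)
```

Working with the n x D feature matrix instead of the n x n kernel matrix is the source of the speedup when D is much smaller than n.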
arXiv Detail & Related papers (2022-10-21T15:56:13Z) - Truncated tensor Schatten p-norm based approach for spatiotemporal
traffic data imputation with complicated missing patterns [77.34726150561087]
We introduce four complicated missing patterns, including random missing and three fiber-like missing cases according to the mode-driven fibers.
Despite the nonconvexity of the objective function in our model, we derive the optimal solutions by integrating the alternating direction method of multipliers (ADMM).
arXiv Detail & Related papers (2022-05-19T08:37:56Z) - Fast Robust Tensor Principal Component Analysis via Fiber CUR
Decomposition [8.821527277034336]
We study the problem of tensor robust principal component analysis (TRPCA), which aims to separate an underlying low-multilinear-rank tensor and a sparse outlier tensor from their sum.
In this work, we propose a fast non-convex decomposition algorithm, coined Robust CUR, for sparsely corrupted problems.
arXiv Detail & Related papers (2021-08-23T23:49:40Z) - Scaling and Scalability: Provable Nonconvex Low-Rank Tensor Estimation
from Incomplete Measurements [30.395874385570007]
A fundamental task is to faithfully recover tensors from highly incomplete measurements.
We develop an algorithm to directly recover the tensor factors in the Tucker decomposition.
We show that it provably converges at a linear rate independent of the condition number of the ground truth tensor for two canonical problems.
arXiv Detail & Related papers (2021-04-29T17:44:49Z) - Investigating the Scalability and Biological Plausibility of the
Activation Relaxation Algorithm [62.997667081978825]
The Activation Relaxation (AR) algorithm provides a simple and robust approach for approximating the backpropagation of error algorithm.
We show that the algorithm can be further simplified and made more biologically plausible by introducing a learnable set of backwards weights.
We also investigate whether another biologically implausible assumption of the original AR algorithm -- the frozen feedforward pass -- can be relaxed without damaging performance.
arXiv Detail & Related papers (2020-10-13T08:02:38Z) - Revisiting Initialization of Neural Networks [72.24615341588846]
We propose a rigorous estimation of the global curvature of weights across layers by approximating and controlling the norm of their Hessian matrix.
Our experiments on Word2Vec and the MNIST/CIFAR image classification tasks confirm that tracking the Hessian norm is a useful diagnostic tool.
arXiv Detail & Related papers (2020-04-20T18:12:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.