Learnable Scaled Gradient Descent for Guaranteed Robust Tensor PCA
- URL: http://arxiv.org/abs/2501.04565v2
- Date: Fri, 17 Jan 2025 05:13:06 GMT
- Title: Learnable Scaled Gradient Descent for Guaranteed Robust Tensor PCA
- Authors: Lanlan Feng, Ce Zhu, Yipeng Liu, Saiprasad Ravishankar, Longxiu Huang
- Abstract summary: We propose an efficient scaled gradient descent (SGD) approach within the t-SVD framework for the first time. We show that RTPCA-SGD achieves linear convergence to the true low-rank tensor at a constant rate, independent of the condition number.
- Score: 39.084456109467204
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Robust tensor principal component analysis (RTPCA) aims to separate the low-rank and sparse components from multi-dimensional data, making it an essential technique in the signal processing and computer vision fields. Recently emerging tensor singular value decomposition (t-SVD) has gained considerable attention for its ability to better capture the low-rank structure of tensors compared to traditional matrix SVD. However, existing methods often rely on the computationally expensive tensor nuclear norm (TNN), which limits their scalability for real-world tensors. To address this issue, we explore an efficient scaled gradient descent (SGD) approach within the t-SVD framework for the first time, and propose the RTPCA-SGD method. Theoretically, we rigorously establish the recovery guarantees of RTPCA-SGD under mild assumptions, demonstrating that with appropriate parameter selection, it achieves linear convergence to the true low-rank tensor at a constant rate, independent of the condition number. To enhance its practical applicability, we further propose a learnable self-supervised deep unfolding model, which enables effective parameter learning. Numerical experiments on both synthetic and real-world datasets demonstrate the superior performance of the proposed methods while maintaining competitive computational efficiency, in particular requiring less time than RTPCA-TNN.
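As a rough illustration of the proposed iteration, here is a minimal NumPy sketch of scaled gradient descent for RTPCA under the t-SVD framework. It is not the authors' implementation: the plain soft-thresholding step, the step size eta, the threshold tau, and the simple spectral initialization are simplifying assumptions (the paper uses adaptive thresholding operators and carefully chosen parameters).

```python
import numpy as np

def soft_threshold(X, tau):
    # Entrywise shrinkage, used here to estimate the sparse component.
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def rtpca_sgd(Y, rank, eta=0.5, tau=0.1, iters=100):
    # Observation model: Y = X (tubal rank <= `rank`) + S (sparse).
    # Under the t-SVD framework, an FFT along the third mode turns the
    # t-product into independent matrix products per frontal slice, so the
    # scaled-GD factor updates run slice-wise in the Fourier domain.
    n1, n2, n3 = Y.shape
    S = soft_threshold(Y, tau)                      # crude sparse init
    Xf = np.fft.fft(Y - S, axis=2)
    L = np.empty((n1, rank, n3), dtype=complex)
    R = np.empty((n2, rank, n3), dtype=complex)
    for k in range(n3):                             # spectral initialization
        U, s, Vh = np.linalg.svd(Xf[:, :, k], full_matrices=False)
        L[:, :, k] = U[:, :rank] * np.sqrt(s[:rank])
        R[:, :, k] = Vh[:rank].conj().T * np.sqrt(s[:rank])
    for _ in range(iters):
        Xf = np.stack([L[:, :, k] @ R[:, :, k].conj().T
                       for k in range(n3)], axis=2)
        X = np.real(np.fft.ifft(Xf, axis=2))        # low-rank part, original domain
        S = soft_threshold(Y - X, tau)              # refresh sparse component
        Ef = np.fft.fft(X - (Y - S), axis=2)        # residual, slice-wise
        for k in range(n3):
            Lk = L[:, :, k].copy()
            Rk = R[:, :, k].copy()
            Ek = Ef[:, :, k]
            # The inverse Gram factors precondition the gradients; this
            # scaling is what makes the convergence rate independent of
            # the condition number.
            L[:, :, k] = Lk - eta * Ek @ Rk @ np.linalg.inv(Rk.conj().T @ Rk)
            R[:, :, k] = Rk - eta * Ek.conj().T @ Lk @ np.linalg.inv(Lk.conj().T @ Lk)
    return X, S
```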
Related papers
- Outlier-aware Tensor Robust Principal Component Analysis with Self-guided Data Augmentation [21.981038455329013]
We propose a self-guided data augmentation approach that employs adaptive weighting to suppress outlier influence.
We show improvements in both accuracy and computational efficiency over state-of-the-art methods.
arXiv Detail & Related papers (2025-04-25T13:03:35Z)
- MNT-TNN: Spatiotemporal Traffic Data Imputation via Compact Multimode Nonlinear Transform-based Tensor Nuclear Norm [49.15782245370261]
Imputation of random or non-random missing data is a crucial application for Intelligent Transportation Systems (ITS).
We propose a novel imputation method, the Multimode Nonlinear Transform-based Tensor Nuclear Norm (MNT-TNN), together with its augmented transform-based tensor nuclear norm families (ATTNNs).
Our proposed MNT-TNN and ATTNNs outperform state-of-the-art imputation benchmark methods.
arXiv Detail & Related papers (2025-03-29T02:58:31Z)
- tCURLoRA: Tensor CUR Decomposition Based Low-Rank Parameter Adaptation for Medical Image Segmentation [1.3281936946796913]
Transfer learning, by leveraging knowledge from pre-trained models, has significantly enhanced the performance of target tasks.
As deep neural networks scale up, full fine-tuning introduces substantial computational and storage challenges.
We propose tCURLoRA, a novel fine-tuning method based on tensor CUR decomposition.
arXiv Detail & Related papers (2025-01-04T08:25:32Z) - Variational Bayesian Inference for Tensor Robust Principal Component Analysis [2.6623354466198412]
Current approaches often encounter difficulties in accurately capturing the low-rank properties of tensors.
We introduce a Bayesian framework for TRPCA, which integrates a low-rank tensor nuclear norm prior and a generalized sparsity-inducing prior.
Our method can be efficiently extended to the weighted tensor nuclear norm model.
arXiv Detail & Related papers (2024-12-25T00:29:09Z) - Deep Unfolded Tensor Robust PCA with Self-supervised Learning [21.710932587432396]
We describe a fast and simple self-supervised model for tensor RPCA using deep unfolding.
Our model eliminates the need for ground-truth labels while maintaining competitive or even greater performance.
We demonstrate these claims on a mix of synthetic data and real-world tasks.
arXiv Detail & Related papers (2022-12-21T20:34:42Z)
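As a rough sketch of the deep-unfolding idea in the entry above, written for the matrix case for brevity (a tensor version would replace the SVT step with its t-SVD analogue). The per-layer thresholds taus_L and taus_S stand in for the learnable parameters that a self-supervised reconstruction loss would tune; all names and values here are hypothetical.

```python
import numpy as np

def soft(X, tau):
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(M, tau):
    # Singular value thresholding: proximal operator of the nuclear norm.
    U, s, Vh = np.linalg.svd(M, full_matrices=False)
    return (U * soft(s, tau)) @ Vh

def unfolded_rpca(Y, taus_L, taus_S):
    # Each classical RPCA iteration becomes one "layer"; unfolding makes
    # the per-layer thresholds trainable instead of hand-tuned.
    L, S = np.zeros_like(Y), np.zeros_like(Y)
    for tL, tS in zip(taus_L, taus_S):
        L = svt(Y - S, tL)     # low-rank update of layer k
        S = soft(Y - L, tS)    # sparse update of layer k
    return L, S

Y = np.random.default_rng(0).normal(size=(30, 30))
L, S = unfolded_rpca(Y, taus_L=[1.0, 0.5, 0.2], taus_S=[0.5, 0.3, 0.1])
```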
- FaDIn: Fast Discretized Inference for Hawkes Processes with General Parametric Kernels [82.53569355337586]
This work offers an efficient solution to temporal point process inference using general parametric kernels with finite support.
The method's effectiveness is evaluated by modeling the occurrence of stimuli-induced patterns from brain signals recorded with magnetoencephalography (MEG).
Results show that the proposed approach leads to improved estimation of pattern latency compared with the state of the art.
arXiv Detail & Related papers (2022-10-10T12:35:02Z)
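To make the discretization idea behind FaDIn concrete, here is a toy sketch of a binned Hawkes intensity with a finite-support parametric kernel. The exponential kernel shape, the grid step, the event bins, and all parameter values are illustrative assumptions, not the paper's choices (FaDIn itself fits the kernel parameters through a discretized objective).

```python
import numpy as np

# Events are binned on a grid of step dt; the self-excitation then becomes
# a discrete causal convolution of the binned counts with the sampled kernel.
dt, T = 0.01, 10.0
grid = np.arange(0.0, T, dt)
events = np.zeros_like(grid)
events[[120, 340, 350, 700]] = 1.0        # hypothetical binned event counts

mu, alpha, beta = 0.5, 0.8, 3.0           # baseline and kernel parameters
support = grid[grid < 1.0]                # kernel truncated to finite support [0, 1)
kernel = alpha * beta * np.exp(-beta * support)   # parametric (exponential) shape

# lambda[t] = mu + sum_{t_i < t} phi(t - t_i)  ->  mu + (events * kernel)[t]
excitation = np.convolve(events, kernel)[: len(grid)]
intensity = mu + excitation
print(intensity.max())
```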
- Fast and Provable Tensor Robust Principal Component Analysis via Scaled Gradient Descent [30.299284742925852]
This paper tackles tensor robust principal component analysis (RPCA).
It aims to recover a low-rank tensor from its observations contaminated by sparse corruptions.
We show that the proposed algorithm achieves better and more scalable performance than state-of-the-art matrix and tensor RPCA algorithms.
arXiv Detail & Related papers (2022-06-18T04:01:32Z)
- Truncated tensor Schatten p-norm based approach for spatiotemporal traffic data imputation with complicated missing patterns [77.34726150561087]
We introduce four complicated missing patterns, including random missing and three fiber-like missing cases defined according to the mode-driven fibers.
Despite the nonconvexity of the objective function in our model, we derive the optimal solutions by integrating the alternating direction method of multipliers (ADMM) with the data-imputation step.
arXiv Detail & Related papers (2022-05-19T08:37:56Z)
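For intuition on the optimization in the entry above, here is a sketch of the standard ADMM template for low-rank completion with the plain nuclear norm. The paper's truncated Schatten p-norm changes only the proximal step (a generalized thresholding that leaves the largest singular values untouched); everything here, including rho and the iteration count, is an illustrative simplification.

```python
import numpy as np

def svt(M, tau):
    # Proximal operator of the nuclear norm: shrink singular values by tau.
    U, s, Vh = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vh

def admm_complete(Y, mask, rho=1.0, iters=200):
    # Minimize ||X||_* subject to X = Z and Z matching Y on observed entries.
    X = np.where(mask, Y, 0.0)     # observed entries, zeros elsewhere
    Z = X.copy()
    U = np.zeros_like(Y)           # scaled dual variable
    for _ in range(iters):
        X = svt(Z - U, 1.0 / rho)  # low-rank proximal step
        Z = X + U
        Z[mask] = Y[mask]          # re-impose the observed data
        U = U + X - Z              # dual ascent
    return X
```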
- Fast Distributionally Robust Learning with Variance Reduced Min-Max Optimization [85.84019017587477]
Distributionally robust supervised learning is emerging as a key paradigm for building reliable machine learning systems for real-world applications.
Existing algorithms for solving Wasserstein DRSL involve solving complex subproblems or fail to make use of gradients.
We revisit Wasserstein DRSL through the lens of min-max optimization and derive scalable and efficiently implementable extra-gradient algorithms.
arXiv Detail & Related papers (2021-04-27T16:56:09Z)
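A toy sketch of the extra-gradient template such algorithms build on, shown on the bilinear saddle f(x, y) = xy, where plain simultaneous gradient descent/ascent spirals outward but the lookahead step restores convergence. The step size and iteration count are arbitrary choices for the demo.

```python
import numpy as np

def grad(x, y):
    # Gradients of f(x, y) = x * y: df/dx = y, df/dy = x.
    return y, x

x, y, eta = 1.0, 1.0, 0.1
for _ in range(1000):
    gx, gy = grad(x, y)
    x_half, y_half = x - eta * gx, y + eta * gy   # extrapolation (lookahead) step
    gx, gy = grad(x_half, y_half)
    x, y = x - eta * gx, y + eta * gy             # update with lookahead gradient
print(x, y)   # both approach the saddle point (0, 0)
```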
- Rank-R FNN: A Tensor-Based Learning Model for High-Order Data Classification [69.26747803963907]
Rank-R Feedforward Neural Network (FNN) is a tensor-based nonlinear learning model that imposes a Canonical Polyadic (CP) decomposition on its parameters.
First, it handles inputs as multilinear arrays, bypassing the need for vectorization, and can thus fully exploit the structural information along every data dimension.
We establish the universal approximation and learnability properties of Rank-R FNN, and we validate its performance on real-world hyperspectral datasets.
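A minimal sketch of the key mechanism, assuming a third-order input and a tanh nonlinearity (both illustrative): one hidden unit evaluates its CP-factorized weight tensor against the tensor input without ever materializing the full weight tensor.

```python
import numpy as np

rng = np.random.default_rng(0)
R, (d1, d2, d3) = 4, (8, 8, 16)   # hypothetical CP rank and input shape

# Only the CP factors w1, w2, w3 are stored and learned; the d1 x d2 x d3
# weight tensor sum_r w1_r o w2_r o w3_r is never formed explicitly.
w1 = rng.normal(size=(R, d1))
w2 = rng.normal(size=(R, d2))
w3 = rng.normal(size=(R, d3))

def hidden_unit(X):
    # <X, sum_r w1_r o w2_r o w3_r>: contract each mode in turn via einsum,
    # exploiting the multilinear structure of the input directly.
    z = np.einsum('ijk,ri,rj,rk->', X, w1, w2, w3)
    return np.tanh(z)

X = rng.normal(size=(d1, d2, d3))  # a multilinear (tensor) input sample
print(hidden_unit(X))
```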
arXiv Detail & Related papers (2021-04-11T16:37:32Z)
- Spectral Tensor Train Parameterization of Deep Learning Layers [136.4761580842396]
We study low-rank parameterizations of weight matrices with embedded spectral properties in the Deep Learning context.
We show the effects of compression in the classification setting, and of both compression and improved training stability in the generative adversarial training setting.
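As a hedged sketch of the general idea (not the paper's exact construction, which factorizes the orthogonal factors further into tensor-train cores): a low-rank weight stored as U diag(s) V^T, so the singular values are explicit parameters that can be constrained during training. The clipping rule and all sizes here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, r = 64, 32, 8
U, _ = np.linalg.qr(rng.normal(size=(m, r)))   # orthonormal left factor
V, _ = np.linalg.qr(rng.normal(size=(n, r)))   # orthonormal right factor
s = np.abs(rng.normal(size=r))                 # explicit singular values

def layer(x):
    # Constraining the spectrum (e.g., clipping to [0, 1]) controls the
    # layer's Lipschitz constant, which aids training stability.
    s_clipped = np.clip(s, 0.0, 1.0)
    return U @ (s_clipped * (V.T @ x))

print(layer(rng.normal(size=n)).shape)   # -> (64,)
```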
arXiv Detail & Related papers (2021-03-07T00:15:44Z)
- Efficient Structure-preserving Support Tensor Train Machine [0.0]
We develop the Tensor Train Multi-way Multi-level Kernel (TT-MMK), which combines the simplicity of the Canonical Polyadic decomposition, the classification power of the Dual Structure-preserving Support Vector Machine, and the reliability of the Tensor Train (TT) approximation.
We show by experiments that the TT-MMK method is usually more reliable, less sensitive to tuning parameters, and gives higher prediction accuracy in SVM classification when benchmarked against other state-of-the-art techniques.
arXiv Detail & Related papers (2020-02-12T16:35:10Z)