Learned Robust PCA: A Scalable Deep Unfolding Approach for
High-Dimensional Outlier Detection
- URL: http://arxiv.org/abs/2110.05649v1
- Date: Mon, 11 Oct 2021 23:37:55 GMT
- Title: Learned Robust PCA: A Scalable Deep Unfolding Approach for
High-Dimensional Outlier Detection
- Authors: HanQin Cai, Jialin Liu, Wotao Yin
- Abstract summary: Robust principal component analysis (RPCA) is a critical tool in machine learning that detects outliers in the task of low-rank matrix reconstruction.
In this paper, we propose a scalable and learnable approach for high-dimensional RPCA problems which we call LRPCA.
We show that LRPCA outperforms state-of-the-art RPCA algorithms, such as ScaledGD and AltProj, on both synthetic datasets and real-world applications.
- Score: 23.687598836093333
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Robust principal component analysis (RPCA) is a critical tool in modern
machine learning, which detects outliers in the task of low-rank matrix
reconstruction. In this paper, we propose a scalable and learnable non-convex
approach for high-dimensional RPCA problems, which we call Learned Robust PCA
(LRPCA). LRPCA is highly efficient, and its free parameters can be effectively
learned via deep unfolding. Moreover, we extend deep unfolding from
finite iterations to infinite iterations via a novel
feedforward-recurrent-mixed neural network model. We establish the recovery
guarantee of LRPCA under mild assumptions for RPCA. Numerical experiments show
that LRPCA outperforms the state-of-the-art RPCA algorithms, such as ScaledGD
and AltProj, on both synthetic datasets and real-world applications.
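To make the deep-unfolding idea concrete, the sketch below unrolls a small, generic RPCA iteration in which each layer alternates a soft-thresholding step for the sparse outliers with a low-rank update; the per-layer thresholds and step sizes stand in for the free parameters that LRPCA learns end-to-end. This is an illustrative simplification under stated assumptions, not the authors' architecture: in particular, the per-layer SVD is used here only for clarity, whereas the paper's point is a scalable update.

```python
import numpy as np

def soft_threshold(x, tau):
    """Entrywise soft-thresholding: the proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def unfolded_rpca(M, rank, thresholds, step_sizes):
    """Run a fixed number of unrolled RPCA layers on M = L + S.

    thresholds[k] and step_sizes[k] are the free parameters of layer k;
    in a deep-unfolding setup they would be trained end-to-end rather
    than hand-picked as they are here.
    """
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    for tau, eta in zip(thresholds, step_sizes):
        # Sparse update: threshold the residual to isolate outliers.
        S = soft_threshold(M - L, tau)
        # Low-rank update: gradient step toward M - S, then projection
        # onto rank-`rank` matrices (a plain truncated SVD, used only for
        # clarity; a scalable method would avoid the full SVD).
        G = L + eta * (M - S - L)
        U, s, Vt = np.linalg.svd(G, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    return L, S

# Tiny synthetic test: a rank-2 matrix plus ~5% large sparse outliers.
rng = np.random.default_rng(0)
L_true = rng.standard_normal((50, 2)) @ rng.standard_normal((2, 50))
S_true = np.where(rng.random((50, 50)) < 0.05,
                  10.0 * rng.standard_normal((50, 50)), 0.0)
M = L_true + S_true
L_hat, S_hat = unfolded_rpca(M, rank=2,
                             thresholds=[5.0, 2.0, 1.0, 0.5, 0.25],
                             step_sizes=[1.0] * 5)
print(np.linalg.norm(L_hat - L_true) / np.linalg.norm(L_true))
```

In the learned setting, the loop becomes a network of fixed depth whose parameters are trained on example decompositions; per the abstract, the feedforward-recurrent-mixed model then extends this from finitely many unrolled layers to arbitrarily many iterations.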
Related papers
- WARP-LCA: Efficient Convolutional Sparse Coding with Locally Competitive Algorithm [1.4186974630564675]
We show that WARP-LCA converges faster by orders of magnitude and reaches better minima compared to conventional LCA.
We demonstrate that WARP-LCA exhibits superior properties in terms of reconstruction and denoising quality as well as robustness when applied in deep recognition pipelines.
arXiv Detail & Related papers (2024-10-24T14:47:36Z) - Deep Unrolling for Nonconvex Robust Principal Component Analysis [75.32013242448151]
We design algorithms for Robust Principal Component Analysis (RPCA), which consists in decomposing a matrix into the sum of a low-rank matrix and a sparse matrix.
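For reference, the decomposition referred to here is the standard RPCA model (a generic formulation, not specific to either paper): an observed matrix M is split into a low-rank part L and a sparse part S, typically by solving

```latex
\min_{L,\,S}\ \operatorname{rank}(L) + \lambda\,\|S\|_0
\quad \text{s.t.} \quad M = L + S ,
```

or, in the usual convex relaxation, by replacing the rank with the nuclear norm \(\|L\|_*\) and the \(\ell_0\) penalty with the \(\ell_1\) norm \(\|S\|_1\).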
arXiv Detail & Related papers (2023-07-12T03:48:26Z) - ASR: Attention-alike Structural Re-parameterization [53.019657810468026]
We propose a simple-yet-effective attention-alike structural re-parameterization (ASR) that achieves structural re-parameterization for a given network while enjoying the effectiveness of the attention mechanism.
In this paper, we conduct extensive experiments from a statistical perspective and discover an interesting phenomenon, Stripe Observation, which reveals that channel attention values quickly approach some constant vectors during training.
arXiv Detail & Related papers (2023-04-13T08:52:34Z) - Deep Unfolded Tensor Robust PCA with Self-supervised Learning [21.710932587432396]
We describe a fast and simple self-supervised model for tensor RPCA using deep unfolding.
Our model expunges the need for ground truth labels while maintaining competitive or even greater performance.
We demonstrate these claims on a mix of synthetic data and real-world tasks.
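One generic way to train an unfolded RPCA network without ground-truth labels (an illustrative construction, not necessarily the loss used in this paper) is to penalize how well the network's own low-rank and sparse outputs re-assemble the observed tensor:

```latex
\mathcal{L}_{\text{self}}(\Theta)
= \big\| \mathcal{M} - \mathcal{L}_{\Theta}(\mathcal{M}) - \mathcal{S}_{\Theta}(\mathcal{M}) \big\|_F^2 .
```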
arXiv Detail & Related papers (2022-12-21T20:34:42Z) - An online algorithm for contrastive Principal Component Analysis [9.090031210111919]
We derive an online algorithm for cPCA* and show that it maps onto a neural network with local learning rules, so it can potentially be implemented in energy efficient neuromorphic hardware.
We evaluate the performance of our online algorithm on real datasets and highlight the differences and similarities with the original formulation.
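As background, standard contrastive PCA (of which cPCA* is a modified objective) looks for directions that have high variance in a target dataset but low variance in a background dataset, e.g.

```latex
v^{\star} = \arg\max_{\|v\|_2 = 1}\;
v^{\top} C_{\text{target}}\, v \;-\; \alpha\, v^{\top} C_{\text{background}}\, v ,
```

where \(\alpha \ge 0\) trades off the two covariances; an online algorithm with local learning rules replaces the batch eigen-decomposition with incremental updates.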
arXiv Detail & Related papers (2022-11-14T19:48:48Z) - Stabilizing Q-learning with Linear Architectures for Provably Efficient
Learning [53.17258888552998]
This work proposes an exploration variant of the basic $Q$-learning protocol with linear function approximation.
We show that the performance of the algorithm degrades very gracefully under a novel and more permissive notion of approximation error.
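As background, "linear function approximation" here means representing the action-value function with a feature map \(\phi\) and a weight vector \(w\), updated by the usual temporal-difference rule (the generic scheme, not the exploration variant proposed in the paper):

```latex
Q_w(s,a) = w^{\top}\phi(s,a), \qquad
w \leftarrow w + \eta\,\big(r + \gamma \max_{a'} Q_w(s',a') - Q_w(s,a)\big)\,\phi(s,a) .
```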
arXiv Detail & Related papers (2022-06-01T23:26:51Z) - Large-scale Optimization of Partial AUC in a Range of False Positive
Rates [51.12047280149546]
The area under the ROC curve (AUC) is one of the most widely used performance measures for classification models in machine learning.
We develop an efficient approximated gradient descent method based on recent practical envelope smoothing technique.
Our proposed algorithm can also be used to minimize the sum of some ranked range loss, which also lacks efficient solvers.
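For context, the partial AUC restricts the usual area under the ROC curve to a range of false positive rates \([\alpha, \beta]\):

```latex
\mathrm{pAUC}(\alpha,\beta) \;=\; \frac{1}{\beta-\alpha}\int_{\alpha}^{\beta} \mathrm{TPR}(u)\, du ,
```

where \(\mathrm{TPR}(u)\) is the true positive rate achieved at false positive rate \(u\); the non-smooth, non-decomposable form of this quantity is what makes direct large-scale optimization hard and motivates the smoothing technique mentioned above.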
arXiv Detail & Related papers (2022-03-03T03:46:18Z) - Phase Retrieval using Expectation Consistent Signal Recovery Algorithm
based on Hypernetwork [73.94896986868146]
Phase retrieval is an important component in modern computational imaging systems.
Recent advances in deep learning have opened up a new possibility for robust and fast PR.
We develop a novel framework for deep unfolding to overcome the existing limitations.
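For reference, the phase retrieval (PR) problem is to recover a signal \(x\) from magnitude-only (or intensity) measurements, typically of the form

```latex
y_i = \big\lvert \langle a_i, x \rangle \big\rvert + n_i, \qquad i = 1, \dots, m ,
```

where the \(a_i\) are known sensing vectors and \(n_i\) is noise; the loss of phase information is what makes the problem nonlinear and non-convex.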
arXiv Detail & Related papers (2021-01-12T08:36:23Z) - Angular Embedding: A New Angular Robust Principal Component Analysis [10.120548476934186]
Principal component analysis is highly sensitive to outliers, which is a serious problem in practice.
Existing state-of-the-art RPCA approaches cannot easily remove or tolerate outliers in a non-iterative manner.
This paper proposes Angular Embedding (AE) to formulate a straightforward RPCA approach based on angular density.
Furthermore, a trimmed AE (TAE) is introduced to deal with data with large scale outliers.
arXiv Detail & Related papers (2020-11-22T13:36:56Z) - Deep-RLS: A Model-Inspired Deep Learning Approach to Nonlinear PCA [12.629088975832797]
We propose a task-based deep learning approach, referred to as Deep-RLS, to perform nonlinear PCA.
In particular, we formulate the nonlinear PCA for the blind source separation (BSS) problem and show through numerical analysis that Deep-RLS results in a significant improvement in the accuracy of recovering the source signals.
arXiv Detail & Related papers (2020-11-15T06:05:51Z) - Optimization-driven Deep Reinforcement Learning for Robust Beamforming
in IRS-assisted Wireless Communications [54.610318402371185]
Intelligent reflecting surface (IRS) is a promising technology to assist downlink information transmissions from a multi-antenna access point (AP) to a receiver.
We minimize the AP's transmit power by a joint optimization of the AP's active beamforming and the IRS's passive beamforming.
We propose a deep reinforcement learning (DRL) approach that can adapt the beamforming strategies from past experiences.
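A common single-user formulation of this joint design (a generic statement of the problem, not necessarily the exact model in the paper) is

```latex
\min_{w,\,\Theta}\ \|w\|_2^2
\quad \text{s.t.} \quad
\frac{\big| (h_d^{H} + h_r^{H}\Theta G)\, w \big|^2}{\sigma^2} \ge \gamma,
\qquad \Theta = \mathrm{diag}\!\big(e^{j\theta_1}, \dots, e^{j\theta_N}\big),
```

where \(w\) is the AP's active beamformer, \(\Theta\) collects the IRS phase shifts, \(h_d\), \(h_r\), and \(G\) are the direct, IRS-receiver, and AP-IRS channels, and \(\gamma\) is the receiver's SNR target.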
arXiv Detail & Related papers (2020-05-25T01:42:55Z)
This list is automatically generated from the titles and abstracts of the papers on this site.