Robust Singular Values based on L1-norm PCA
- URL: http://arxiv.org/abs/2210.12097v1
- Date: Fri, 21 Oct 2022 16:42:49 GMT
- Title: Robust Singular Values based on L1-norm PCA
- Authors: Duc Le, Panos P. Markopoulos
- Abstract summary: Singular-Value Decomposition (SVD) is a ubiquitous data analysis method in engineering, science, and statistics.
We present a novel robust non-parametric method for SVD and singular-value estimation based on an L1-norm (sum of absolute values) formulation.
- Score: 11.706222361809374
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Singular-Value Decomposition (SVD) is a ubiquitous data analysis method in
engineering, science, and statistics. Singular-value estimation, in particular,
is of critical importance in an array of engineering applications, such as
channel estimation in communication systems, electromyography signal analysis,
and image compression, to name just a few. Conventional SVD of a data matrix
coincides with standard Principal-Component Analysis (PCA). The L2-norm (sum of
squared values) formulation of PCA promotes peripheral data points and, thus,
makes PCA sensitive to outliers. Naturally, SVD inherits this outlier
sensitivity. In this work, we present a novel robust non-parametric method for
SVD and singular-value estimation based on an L1-norm (sum of absolute values)
formulation, which we name L1-cSVD. Accordingly, the proposed method
demonstrates sturdy resistance to outliers and can facilitate more
reliable data analysis and processing in a wide range of engineering
applications.
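The outlier sensitivity that motivates the paper is easy to reproduce. The sketch below (our illustration of the L2 formulation's weakness, not the paper's L1-cSVD algorithm) corrupts a single entry of a low-rank matrix and compares the resulting singular values:

```python
import numpy as np

rng = np.random.default_rng(0)

# Low-rank data: a rank-1 matrix plus small Gaussian noise.
u = rng.standard_normal(50)
v = rng.standard_normal(20)
X = np.outer(u, v) + 0.01 * rng.standard_normal((50, 20))

s_clean = np.linalg.svd(X, compute_uv=False)

# Corrupt a single entry with a large outlier.
X_out = X.copy()
X_out[0, 0] += 100.0
s_out = np.linalg.svd(X_out, compute_uv=False)

# The leading singular value changes drastically: the L2 (sum of
# squares) objective behind standard SVD/PCA is dominated by the
# single corrupted entry.
print(s_clean[:2])
print(s_out[:2])
```

A single corrupted entry out of 1000 is enough to move the top singular value far from its clean counterpart, which is exactly the failure mode a robust, L1-based formulation targets.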
Related papers
- Geometry-Aware Instrumental Variable Regression [56.16884466478886]
We propose a transport-based IV estimator that takes into account the geometry of the data manifold through data-derivative information.
We provide a simple plug-and-play implementation of our method that performs on par with related estimators in standard settings.
arXiv Detail & Related papers (2024-05-19T17:49:33Z)
- Robust SVD Made Easy: A fast and reliable algorithm for large-scale data analysis [0.0]
Existing robust SVD algorithms often sacrifice speed for robustness or fail in the presence of only a few outliers.
This study introduces an efficient algorithm, called Spherically Normalized SVD, for robust SVD approximation.
The proposed algorithm achieves remarkable speed by utilizing only two applications of a standard reduced-rank SVD algorithm.
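The abstract gives no implementation details, so the sketch below only illustrates the general spherical-normalization idea (rows projected onto the unit sphere before a standard reduced-rank SVD); it should not be read as the paper's exact two-pass algorithm:

```python
import numpy as np

def spherically_normalized_subspace(X, rank):
    # Project each row onto the unit sphere so that no single corrupted
    # row can dominate the L2 fit, then run a standard reduced-rank SVD.
    # Illustrative sketch only -- the cited paper's algorithm may differ.
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    norms[norms == 0] = 1.0          # leave all-zero rows unchanged
    _, _, Vt = np.linalg.svd(X / norms, full_matrices=False)
    return Vt[:rank]                 # estimated right singular subspace

rng = np.random.default_rng(1)
v = rng.standard_normal(10)
X = np.outer(rng.standard_normal(100), v)   # rank-1 ground truth
X[0] += 1e3 * rng.standard_normal(10)       # one grossly corrupted row
V = spherically_normalized_subspace(X, rank=1)
align = abs(V[0] @ v) / np.linalg.norm(v)   # alignment with true direction
```

Because every row contributes at most unit energy after normalization, the single corrupted row cannot tilt the recovered subspace, and `align` stays close to 1.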
arXiv Detail & Related papers (2024-02-15T07:08:11Z)
- On the Noise Sensitivity of the Randomized SVD [8.98526174345299]
The randomized singular value decomposition (R-SVD) is a popular sketching-based algorithm for efficiently computing the partial SVD of a large matrix.
We analyze the R-SVD under a low-rank signal plus noise measurement model.
The singular values produced by the R-SVD are shown to exhibit a BBP-like phase transition.
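For reference, a bare-bones version of the sketch-and-solve R-SVD analyzed in that paper (Gaussian range sketch, QR, small exact SVD) fits in a few lines; the oversampling amount and seed below are illustrative choices:

```python
import numpy as np

def randomized_svd(A, rank, oversample=10, seed=0):
    # Randomized SVD in the Halko-Martinsson-Tropp style: sketch the
    # range of A with a Gaussian test matrix, orthonormalize, then
    # solve a small exact SVD in the reduced space.
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    Omega = rng.standard_normal((n, rank + oversample))
    Y = A @ Omega                       # sample the range of A
    Q, _ = np.linalg.qr(Y)              # orthonormal basis for the sketch
    B = Q.T @ A                         # small (rank+p) x n problem
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :rank], s[:rank], Vt[:rank]

# Low-rank signal plus noise, as in the measurement model above.
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 5)) @ rng.standard_normal((5, 100))
A += 0.01 * rng.standard_normal((200, 100))
U, s, Vt = randomized_svd(A, rank=5)
s_full = np.linalg.svd(A, compute_uv=False)[:5]
```

On a strong low-rank signal the sketched singular values `s` track the full-SVD values `s_full` closely; the paper's point is precisely how this agreement degrades as the noise level grows.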
arXiv Detail & Related papers (2023-05-27T10:15:17Z)
- Numerical Optimizations for Weighted Low-rank Estimation on Language Model [73.12941276331316]
Singular value decomposition (SVD) is one of the most popular compression methods that approximates a target matrix with smaller matrices.
Standard SVD treats the parameters within the matrix with equal importance, which is a simple but unrealistic assumption.
We show that our method can perform better than current SOTA methods in neural-based language models.
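The cited abstract does not spell out its method; one common way to realize row-weighted low-rank approximation, sketched below under that assumption (the importance vector and function name are hypothetical), is to scale rows by the square root of their importance before truncating the SVD:

```python
import numpy as np

def weighted_lowrank(W, importance, rank):
    # Scale rows by sqrt(importance), truncate the SVD of the scaled
    # matrix, then undo the scaling: rows with high importance are
    # reconstructed more faithfully than under plain truncated SVD.
    # Illustrative sketch, not the cited paper's exact algorithm.
    d = np.sqrt(importance)[:, None]
    U, s, Vt = np.linalg.svd(d * W, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank] / d

rng = np.random.default_rng(0)
W = rng.standard_normal((20, 10))
imp = np.ones(20)
imp[0] = 1e4                           # pretend row 0 matters most
W_w = weighted_lowrank(W, imp, rank=3)

# Plain (equal-importance) truncated SVD for comparison.
U, s, Vt = np.linalg.svd(W, full_matrices=False)
W_p = (U[:, :3] * s[:3]) @ Vt[:3]
err_w = np.linalg.norm(W[0] - W_w[0])  # error on the important row
err_p = np.linalg.norm(W[0] - W_p[0])
```

The weighted variant spends its limited rank budget on the rows the weights mark as important, so `err_w` comes out well below `err_p`.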
arXiv Detail & Related papers (2022-11-02T00:58:02Z)
- Local manifold learning and its link to domain-based physics knowledge [53.15471241298841]
In many reacting flow systems, the thermo-chemical state-space is assumed to evolve close to a low-dimensional manifold (LDM).
We show that PCA applied in local clusters of data (local PCA) is capable of detecting the intrinsic parameterization of the thermo-chemical state-space.
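As a concrete (and entirely illustrative) rendering of "PCA in local clusters": partition the data with a small k-means and fit an ordinary centered PCA inside each cluster. The cluster count and iteration budget below are arbitrary choices, not the paper's.

```python
import numpy as np

def local_pca(X, n_clusters, n_components, n_iter=50, seed=0):
    # Plain Lloyd-style k-means to form local clusters.
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), n_clusters, replace=False)].copy()
    for _ in range(n_iter):
        dist = ((X[:, None, :] - centers[None]) ** 2).sum(-1)
        labels = dist.argmin(1)
        for k in range(n_clusters):
            if np.any(labels == k):
                centers[k] = X[labels == k].mean(0)
    # Ordinary centered PCA (via SVD) within each cluster.
    components = []
    for k in range(n_clusters):
        Xk = X[labels == k]
        if len(Xk) == 0:                 # guard against empty clusters
            components.append(np.zeros((n_components, X.shape[1])))
            continue
        Xk = Xk - Xk.mean(0)
        _, _, Vt = np.linalg.svd(Xk, full_matrices=False)
        components.append(Vt[:n_components])
    return labels, components

# Two well-separated segments with different principal directions:
# global PCA would blur them, local PCA parameterizes each separately.
rng = np.random.default_rng(0)
A = np.column_stack([np.linspace(-1, 1, 50), 0.02 * rng.standard_normal(50)])
B = np.column_stack([10 + 0.02 * rng.standard_normal(50),
                     10 + np.linspace(-1, 1, 50)])
labels, comps = local_pca(np.vstack([A, B]), n_clusters=2, n_components=1)
```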
arXiv Detail & Related papers (2022-07-01T09:06:25Z)
- coVariance Neural Networks [119.45320143101381]
Graph neural networks (GNNs) are an effective framework that exploits inter-relationships within graph-structured data for learning.
We propose a GNN architecture, called coVariance neural network (VNN), that operates on sample covariance matrices as graphs.
We show that VNN performance is indeed more stable than PCA-based statistical approaches.
arXiv Detail & Related papers (2022-05-31T15:04:43Z)
- Robust factored principal component analysis for matrix-valued outlier accommodation and detection [4.228971753938522]
Factored PCA (FPCA) is a probabilistic extension of PCA for matrix data.
We propose a robust extension of FPCA (RFPCA) for matrix data.
RFPCA can adaptively down-weight outliers and yield robust estimates.
arXiv Detail & Related papers (2021-12-13T16:12:22Z)
- Differential privacy and robust statistics in high dimensions [49.50869296871643]
High-dimensional Propose-Test-Release (HPTR) builds upon three crucial components: the exponential mechanism, robust statistics, and the Propose-Test-Release mechanism.
We show that HPTR nearly achieves the optimal sample complexity under several scenarios studied in the literature.
arXiv Detail & Related papers (2021-11-12T06:36:40Z)
- Supervised Linear Dimension-Reduction Methods: Review, Extensions, and Comparisons [6.71092092685492]
Principal component analysis (PCA) is a well-known linear dimension-reduction method that has been widely used in data analysis and modeling.
This paper reviews selected techniques, extends some of them, and compares their performance through simulations.
Two of these techniques, partial least squares (PLS) and least-squares PCA (LSPCA), consistently outperform the others in this study.
arXiv Detail & Related papers (2021-09-09T17:57:25Z)
- Regularisation for PCA- and SVD-type matrix factorisations [0.0]
Singular Value Decomposition (SVD) and its close relative, Principal Component Analysis (PCA) are well-known linear matrix decomposition techniques.
In this paper, we take another look at the problem of regularisation and show that different formulations of the minimisation problem lead to qualitatively different solutions.
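To make "qualitatively different solutions" concrete, here is a standard textbook contrast (our example, not necessarily the paper's): Frobenius (ridge) regularisation shrinks every singular value uniformly and preserves rank, while nuclear-norm regularisation soft-thresholds the singular values and yields a genuinely low-rank minimiser.

```python
import numpy as np

# Two regularised reconstructions of the same matrix X:
#   ridge:   min 0.5*||X - B||_F^2 + 0.5*lam*||B||_F^2
#            -> closed form B = X / (1 + lam): all singular values
#               scaled by the same factor, rank unchanged
#   nuclear: min 0.5*||X - B||_F^2 + lam*||B||_*
#            -> singular value soft-thresholding: small singular
#               values are zeroed, giving a low-rank solution
def ridge_shrink(X, lam):
    return X / (1.0 + lam)

def nuclear_shrink(X, lam):
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - lam, 0.0)) @ Vt

rng = np.random.default_rng(0)
X = rng.standard_normal((30, 8))
lam = np.linalg.svd(X, compute_uv=False)[4]   # threshold at 5th value
r_ridge = np.linalg.matrix_rank(ridge_shrink(X, lam))
r_nuclear = np.linalg.matrix_rank(nuclear_shrink(X, lam))
```

The ridge solution keeps full rank 8, while the nuclear-norm solution drops to rank 4: the same data and the same loss, but qualitatively different minimisers depending on how the penalty is formulated.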
arXiv Detail & Related papers (2021-06-24T12:25:12Z)
- Stochastic Approximation for Online Tensorial Independent Component Analysis [98.34292831923335]
Independent component analysis (ICA) has been a popular dimension reduction tool in statistical machine learning and signal processing.
In this paper, we present a by-product online tensorial algorithm that estimates each independent component.
arXiv Detail & Related papers (2020-12-28T18:52:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.