Robust Principal Component Analysis: A Median of Means Approach
- URL: http://arxiv.org/abs/2102.03403v2
- Date: Thu, 20 Jul 2023 05:58:30 GMT
- Title: Robust Principal Component Analysis: A Median of Means Approach
- Authors: Debolina Paul, Saptarshi Chakraborty and Swagatam Das
- Abstract summary: Principal Component Analysis is a tool for data visualization, denoising, and dimensionality reduction.
Following the Median of Means (MoM) philosophy, recent supervised learning methods have shown great success in dealing with outlying observations.
This paper proposes a PCA procedure based on the MoM principle.
- Score: 17.446104539598895
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Principal Component Analysis (PCA) is a fundamental tool for data
visualization, denoising, and dimensionality reduction. It is widely popular in
Statistics, Machine Learning, Computer Vision, and related fields. However, PCA
is well-known to fall prey to outliers and often fails to detect the true
underlying low-dimensional structure within the dataset. Following the Median
of Means (MoM) philosophy, recent supervised learning methods have shown great
success in dealing with outlying observations without much compromise to their
large sample theoretical properties. This paper proposes a PCA procedure based
on the MoM principle. Called the \textbf{M}edian of \textbf{M}eans
\textbf{P}rincipal \textbf{C}omponent \textbf{A}nalysis (MoMPCA), the proposed
method is not only computationally appealing but also achieves optimal
convergence rates under minimal assumptions. In particular, we explore the
non-asymptotic error bounds of the obtained solution via the aid of the
Rademacher complexities while granting absolutely no assumption on the outlying
observations. The derived concentration results are not dependent on the
dimension because the analysis is conducted in a separable Hilbert space, and
the results only depend on the fourth moment of the underlying distribution in
the corresponding norm. The proposal's efficacy is also thoroughly showcased
through simulations and real data applications.
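The MoM principle described above can be illustrated with a toy sketch: split the sample into blocks, form a covariance estimate within each block, and combine the block estimates with an entrywise median before eigendecomposition. This is a minimal illustration of the general MoM idea, not the paper's MoMPCA algorithm; the function name `mom_pca` and its signature are hypothetical.

```python
import numpy as np

def mom_pca(X, n_blocks=10, n_components=2, seed=None):
    """Toy median-of-means PCA: combine per-block covariance
    estimates with an entrywise median, then eigendecompose.
    A sketch of the MoM idea only, not the paper's MoMPCA."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    blocks = np.array_split(rng.permutation(n), n_blocks)
    covs = []
    for b in blocks:
        Xb = X[b] - X[b].mean(axis=0)      # center within the block
        covs.append(Xb.T @ Xb / len(b))    # block covariance estimate
    # The entrywise median is insensitive to a minority of blocks
    # whose estimates are corrupted by outliers.
    C = np.median(np.stack(covs), axis=0)
    eigvals, eigvecs = np.linalg.eigh(C)   # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1][:n_components]
    return eigvecs[:, order], eigvals[order]
```

As long as fewer than half of the blocks contain outliers, the median discards the corrupted block estimates, which is the intuition behind MoM's robustness without assumptions on the outliers themselves.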
Related papers
- Assumption-Lean Post-Integrated Inference with Negative Control Outcomes [0.0]
We introduce a robust post-integrated inference (PII) method that adjusts for latent heterogeneity using negative control outcomes.
Our method extends to projected direct effect estimands, accounting for hidden mediators, confounders, and moderators.
The proposed doubly robust estimators are consistent and efficient under minimal assumptions and potential misspecification.
arXiv Detail & Related papers (2024-10-07T12:52:38Z) - Sparse PCA with Oracle Property [115.72363972222622]
We propose a family of estimators based on the semidefinite relaxation of sparse PCA with novel regularizations.
We prove that an estimator within this family achieves a sharper statistical rate of convergence than the standard semidefinite relaxation of sparse PCA.
arXiv Detail & Related papers (2023-12-28T02:52:54Z) - Support Recovery in Sparse PCA with Non-Random Missing Data [27.3669650952144]
We analyze a practical algorithm for sparse PCA on incomplete and noisy data under a general non-random sampling scheme.
We provide theoretical justification that under certain conditions, we can recover the support of the sparse leading eigenvector with high probability.
We show that our algorithm outperforms several other sparse PCA approaches, especially when the observed entries have good structural properties.
arXiv Detail & Related papers (2023-02-03T04:20:25Z) - Asymptotically Unbiased Instance-wise Regularized Partial AUC
Optimization: Theory and Algorithm [101.44676036551537]
One-way Partial AUC (OPAUC) and Two-way Partial AUC (TPAUC) measure the average performance of a binary classifier over restricted ranges of the ROC curve.
Most existing methods can only optimize the PAUC approximately, leading to inevitable and uncontrollable biases.
We present a simpler reformulation of the PAUC problem via distributionally robust optimization.
arXiv Detail & Related papers (2022-10-08T08:26:22Z) - Support Recovery in Sparse PCA with Incomplete Data [27.3669650952144]
We study a practical algorithm for sparse PCA on incomplete data.
We provide theoretical and experimental evidence that a semidefinite programming (SDP) formulation can exactly recover the support of the sparse leading eigenvector from incomplete data.
We validate our theoretical results on incomplete data and report encouraging empirical results.
arXiv Detail & Related papers (2022-05-30T16:17:39Z) - Learning to Estimate Without Bias [57.82628598276623]
The Gauss-Markov theorem states that the weighted least squares estimator is the linear minimum-variance unbiased estimator (MVUE) in linear models.
In this paper, we take a first step towards extending this result to nonlinear settings via deep learning with bias constraints.
A second motivation for the bias-constrained estimator (BCE) is in applications where multiple estimates of the same unknown are averaged for improved performance.
arXiv Detail & Related papers (2021-10-24T10:23:51Z) - Keep it Tighter -- A Story on Analytical Mean Embeddings [0.6445605125467574]
Kernel techniques are among the most popular and flexible approaches in data science.
The mean embedding gives rise to a divergence measure referred to as the maximum mean discrepancy (MMD).
In this paper we focus on the problem of MMD estimation when the mean embedding of one of the underlying distributions is available analytically.
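As background for the MMD estimation problem above, the standard biased (V-statistic) empirical estimate of squared MMD under an RBF kernel can be sketched as follows. This is a generic sketch only; the paper's setting, where one mean embedding is available analytically, is not reproduced, and the name `mmd2_rbf` is illustrative.

```python
import numpy as np

def mmd2_rbf(X, Y, gamma=1.0):
    """Biased (V-statistic) empirical estimate of squared MMD under
    the RBF kernel k(x, y) = exp(-gamma * ||x - y||^2)."""
    def gram(A, B):
        # Pairwise squared Euclidean distances via the expansion
        # ||a - b||^2 = ||a||^2 + ||b||^2 - 2 a.b
        sq = ((A**2).sum(1)[:, None] + (B**2).sum(1)[None, :]
              - 2.0 * A @ B.T)
        return np.exp(-gamma * sq)
    return gram(X, X).mean() + gram(Y, Y).mean() - 2.0 * gram(X, Y).mean()
```

The estimate is near zero for two samples from the same distribution and grows as the distributions separate.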
arXiv Detail & Related papers (2021-10-15T21:29:27Z) - Entropy Minimizing Matrix Factorization [102.26446204624885]
Nonnegative Matrix Factorization (NMF) is a widely-used data analysis technique, and has yielded impressive results in many real-world tasks.
In this study, an Entropy Minimizing Matrix Factorization framework (EMMF) is developed to tackle the above problem.
Considering that the outliers are usually much less than the normal samples, a new entropy loss function is established for matrix factorization.
arXiv Detail & Related papers (2021-03-24T21:08:43Z) - Continuous Wasserstein-2 Barycenter Estimation without Minimax
Optimization [94.18714844247766]
Wasserstein barycenters provide a geometric notion of the weighted average of probability measures based on optimal transport.
We present a scalable algorithm to compute Wasserstein-2 barycenters given sample access to the input measures.
arXiv Detail & Related papers (2021-02-02T21:01:13Z) - Modal Principal Component Analysis [3.050919759387985]
It has been shown that the robustness of many statistical methods can be improved using mode estimation instead of mean estimation.
This study proposes a modal principal component analysis (MPCA) which is a robust PCA method based on mode estimation.
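The mode-instead-of-mean idea behind MPCA can be sketched by centering each coordinate at a crude kernel-density mode estimate before a standard SVD-based PCA. This illustrates only the robustness of mode-based centering, not the paper's actual MPCA procedure; both function names and the bandwidth choice are hypothetical.

```python
import numpy as np

def kde_mode_1d(x, bandwidth=0.5, grid_size=200):
    """Crude 1-D mode estimate: argmax of a Gaussian KDE on a grid."""
    grid = np.linspace(x.min(), x.max(), grid_size)
    dens = np.exp(
        -0.5 * ((grid[:, None] - x[None, :]) / bandwidth) ** 2
    ).sum(axis=1)
    return grid[np.argmax(dens)]

def modal_center_pca(X, bandwidth=0.5, n_components=1):
    """Center each coordinate at its estimated mode (rather than its
    mean), then run a standard SVD-based PCA.  Illustrates only the
    mode-vs-mean idea, not the paper's full MPCA procedure."""
    center = np.array(
        [kde_mode_1d(X[:, j], bandwidth) for j in range(X.shape[1])]
    )
    Xc = X - center
    Vt = np.linalg.svd(Xc, full_matrices=False)[2]
    return Vt[:n_components], center
```

Unlike the mean, the mode estimate stays near the bulk of the data even when a small cluster of outliers pulls the sample mean away.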
arXiv Detail & Related papers (2020-08-07T23:59:05Z) - A Robust Functional EM Algorithm for Incomplete Panel Count Data [66.07942227228014]
We propose a functional EM algorithm to estimate the counting process mean function under a missing completely at random (MCAR) assumption.
The proposed algorithm wraps several popular panel count inference methods, seamlessly deals with incomplete counts and is robust to misspecification of the Poisson process assumption.
We illustrate the utility of the proposed algorithm through numerical experiments and an analysis of smoking cessation data.
arXiv Detail & Related papers (2020-03-02T20:04:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.