Novel sparse PCA method via Runge Kutta numerical method(s) for face recognition
- URL: http://arxiv.org/abs/2504.01035v1
- Date: Sun, 30 Mar 2025 13:34:06 GMT
- Title: Novel sparse PCA method via Runge Kutta numerical method(s) for face recognition
- Authors: Loc Hoang Tran, Luong Anh Tuan Nguyen
- Abstract summary: This paper explores the implementation of sparse Principal Component Analysis (PCA) using the Proximal Gradient method and the Runge-Kutta numerical methods. Experimental results demonstrate that combining sparse PCA (solved via the Proximal Gradient method or the Runge-Kutta numerical approach) with a classification system yields higher accuracy compared to standard PCA.
- Score: 0.0
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Face recognition is a crucial topic in data science and biometric security, with applications spanning military, finance, and retail industries. This paper explores the implementation of sparse Principal Component Analysis (PCA) using the Proximal Gradient method (also known as ISTA) and the Runge-Kutta numerical methods. To address the face recognition problem, we integrate sparse PCA with either the k-nearest neighbor method or the kernel ridge regression method. Experimental results demonstrate that combining sparse PCA (solved via the Proximal Gradient method or the Runge-Kutta numerical approach) with a classification system yields higher accuracy compared to standard PCA. Additionally, we observe that the Runge-Kutta-based sparse PCA computation consistently outperforms the Proximal Gradient method in terms of speed.
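As a rough illustration of the proximal gradient (ISTA) approach described in the abstract, the sketch below computes one sparse principal component by alternating a gradient step on the smooth term with soft-thresholding. The objective, step size, and unit-ball projection are generic illustrative choices, not the paper's exact formulation.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sparse_pca_ista(S, lam=0.1, step=None, n_iter=500):
    """One sparse principal component via proximal gradient (ISTA).

    Minimizes -x^T S x + lam * ||x||_1 over the unit ball by alternating
    a gradient step on the smooth term with soft-thresholding, then
    projecting back onto ||x|| <= 1. A generic sketch, not the paper's code.
    """
    n = S.shape[0]
    if step is None:
        # 1/L with L = 2 * largest eigenvalue (Lipschitz constant of the gradient)
        step = 1.0 / (2.0 * np.linalg.eigvalsh(S)[-1] + 1e-12)
    x = np.ones(n) / np.sqrt(n)          # feasible starting point
    for _ in range(n_iter):
        grad = -2.0 * S @ x              # gradient of the smooth part
        x = soft_threshold(x - step * grad, step * lam)
        nrm = np.linalg.norm(x)
        if nrm > 1.0:                    # project onto the unit ball
            x /= nrm
    return x
```

On a covariance with one dominant direction, the L1 term drives the remaining coordinates to exact zeros, which is the qualitative behavior the abstract compares against standard PCA.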
Related papers
- Solve sparse PCA problem by employing Hamiltonian system and leapfrog method [0.0]
We propose a novel sparse PCA algorithm that imposes sparsity through a smooth L1 penalty. Experimental evaluations on a face recognition dataset, using both k-nearest neighbor and kernel ridge regression, demonstrate that the proposed sparse PCA methods consistently achieve higher classification accuracy than conventional PCA.
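The Hamiltonian/leapfrog idea can be sketched as follows: integrate leapfrog dynamics for a potential combining the negative Rayleigh quotient with a smooth L1 surrogate, resetting momentum whenever the potential rises so the integrator behaves as a descent scheme. The surrogate, hyperparameters, and unit-ball projection below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def smooth_l1_grad(x, eps=1e-4):
    # Gradient of the smooth L1 surrogate sum_i sqrt(x_i^2 + eps)
    return x / np.sqrt(x * x + eps)

def sparse_pca_leapfrog(S, lam=0.5, step=0.02, n_iter=2000, eps=1e-4):
    """Sparse PCA via leapfrog integration of a Hamiltonian system (sketch).

    Potential: f(x) = -x^T S x + lam * sum_i sqrt(x_i^2 + eps), a smooth
    L1 surrogate. Leapfrog alternates half momentum / full position
    updates; momentum is reset whenever the potential stops decreasing.
    Hyperparameters are illustrative, not from the paper.
    """
    n = S.shape[0]
    grad = lambda z: -2.0 * S @ z + lam * smooth_l1_grad(z, eps)
    f = lambda z: -z @ S @ z + lam * np.sum(np.sqrt(z * z + eps))
    x = np.ones(n) / np.sqrt(n)
    p = np.zeros(n)
    for _ in range(n_iter):
        p -= 0.5 * step * grad(x)          # half-step momentum
        x_new = x + step * p               # full-step position
        p -= 0.5 * step * grad(x_new)      # half-step momentum
        if f(x_new) > f(x):                # reset momentum if energy rose
            p[:] = 0.0
        x = x_new
        x /= max(1.0, np.linalg.norm(x))   # keep the iterate in the unit ball
    return x
```

The momentum reset is one of several ways to turn a symplectic integrator into an optimizer; the paper may use a different damping mechanism.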
arXiv Detail & Related papers (2025-03-30T06:39:11Z)
- Achieving $\widetilde{\mathcal{O}}(\sqrt{T})$ Regret in Average-Reward POMDPs with Known Observation Models [56.92178753201331]
We tackle average-reward infinite-horizon POMDPs with an unknown transition model.
We present a novel and simple estimator that overcomes this barrier.
arXiv Detail & Related papers (2025-01-30T22:29:41Z)
- Provably Efficient UCB-type Algorithms For Learning Predictive State Representations [55.00359893021461]
The sequential decision-making problem is statistically learnable if it admits a low-rank structure modeled by predictive state representations (PSRs).
This paper proposes the first known UCB-type approach for PSRs, featuring a novel bonus term that upper bounds the total variation distance between the estimated and true models.
In contrast to existing approaches for PSRs, our UCB-type algorithms enjoy computational tractability, last-iterate guaranteed near-optimal policy, and guaranteed model accuracy.
arXiv Detail & Related papers (2023-07-01T18:35:21Z)
- Efficient fair PCA for fair representation learning [21.990310743597174]
We propose a conceptually simple approach that allows for an analytic solution similar to standard PCA and can be kernelized.
Our methods have the same complexity as standard PCA, or kernel PCA, and run much faster than existing methods for fair PCA based on semidefinite programming or manifold optimization.
arXiv Detail & Related papers (2023-02-26T13:34:43Z)
- Policy Gradient for Rectangular Robust Markov Decision Processes [62.397882389472564]
We introduce robust policy gradient (RPG), a policy-based method that efficiently solves rectangular robust Markov decision processes (MDPs).
Our resulting RPG can be estimated from data with the same time complexity as its non-robust equivalent.
arXiv Detail & Related papers (2023-01-31T12:40:50Z)
- Posterior and Computational Uncertainty in Gaussian Processes [52.26904059556759]
Gaussian processes scale prohibitively with the size of the dataset.
Many approximation methods have been developed, which inevitably introduce approximation error.
This additional source of uncertainty, due to limited computation, is entirely ignored when using the approximate posterior.
We develop a new class of methods that provides consistent estimation of the combined uncertainty arising from both the finite number of data observed and the finite amount of computation expended.
arXiv Detail & Related papers (2022-05-30T22:16:25Z)
- Stochastic and Private Nonconvex Outlier-Robust PCA [11.688030627514532]
Outlier-robust PCA seeks an underlying low-dimensional linear subspace from a dataset corrupted with outliers.
Our methods involve a geodesic descent and come with a novel convergence analysis.
The main application is a differentially private algorithm for outlier-robust PCA.
arXiv Detail & Related papers (2022-03-17T12:00:47Z)
- Improved sparse PCA method for face and image recognition [0.2964978357715083]
The accuracy of sparse PCA combined with a given classification system may be lower than that of standard PCA combined with the same system.
We observe that computing the sparse PCA algorithm with the FISTA method is consistently faster than computing it with the proximal gradient method.
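The FISTA-versus-proximal-gradient comparison above can be sketched generically: FISTA is ISTA plus the standard Nesterov momentum schedule of Beck and Teboulle, applied here to a problem of the form min f(x) + lam*||x||_1. This is a textbook sketch, not the paper's code.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fista(grad_f, L, x0, lam, n_iter=200):
    """FISTA for min f(x) + lam*||x||_1 with L-Lipschitz grad f.

    Identical to ISTA except the prox-gradient step is taken at an
    extrapolated point y, which yields the accelerated O(1/k^2) rate
    responsible for the speedup noted in the abstract.
    """
    x_prev = x0.copy()
    y = x0.copy()
    t = 1.0
    for _ in range(n_iter):
        x = soft_threshold(y - grad_f(y) / L, lam / L)           # prox-gradient step
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0        # momentum schedule
        y = x + ((t - 1.0) / t_next) * (x - x_prev)              # extrapolation
        x_prev, t = x, t_next
    return x_prev
```

With f(x) = ||x - b||^2 / 2 the minimizer is the soft-thresholded b, which gives a quick sanity check of the implementation.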
arXiv Detail & Related papers (2021-12-01T01:11:04Z)
- Kernel PCA with the Nyström method [0.0]
We derive kernel PCA with the Nyström method and study its accuracy.
We present kernel principal component regression with the Nyström method.
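A minimal sketch of the Nyström construction: approximate the n x n kernel matrix by C W^+ C^T using m landmark points, form the explicit Nyström feature map, and run ordinary PCA on those features. Kernel choice, landmark sampling, and the omission of kernel centering are simplifying assumptions, not details from the paper.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Gaussian (RBF) kernel matrix between the rows of X and Y
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def nystrom_kpca(X, m, n_components, gamma=1.0, seed=0):
    """Kernel PCA with the Nystrom method (sketch).

    Uses m uniformly sampled landmarks: C is the n x m cross-kernel
    block, W the m x m landmark block, and phi = C W^{-1/2} the
    explicit feature map with K ~ phi @ phi.T. PCA on phi then gives
    approximate kernel principal components at O(n m^2) cost.
    """
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=m, replace=False)
    C = rbf_kernel(X, X[idx], gamma)          # n x m cross-kernel block
    W = C[idx]                                # m x m landmark block
    evals, evecs = np.linalg.eigh(W)
    keep = evals > 1e-10                      # drop numerically null directions
    phi = C @ (evecs[:, keep] / np.sqrt(evals[keep]))
    phi -= phi.mean(axis=0)                   # center the explicit features
    _, _, Vt = np.linalg.svd(phi, full_matrices=False)
    return phi @ Vt[:n_components].T          # n x n_components projection
```

The point of the construction is that only the m x m block is eigendecomposed, avoiding the O(n^3) cost of exact kernel PCA.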
arXiv Detail & Related papers (2021-09-12T18:08:31Z)
- Improving Metric Dimensionality Reduction with Distributed Topology [68.8204255655161]
DIPOLE is a dimensionality-reduction post-processing step that corrects an initial embedding by minimizing a loss functional with both a local, metric term and a global, topological term.
We observe that DIPOLE outperforms popular methods like UMAP, t-SNE, and Isomap on a number of popular datasets.
arXiv Detail & Related papers (2021-06-14T17:19:44Z)
- Approximation Algorithms for Sparse Principal Component Analysis [57.5357874512594]
Principal component analysis (PCA) is a widely used dimension reduction technique in machine learning and statistics.
Various approaches to obtain sparse principal direction loadings have been proposed, which are termed Sparse Principal Component Analysis.
We present thresholding as a provably accurate, polynomial-time approximation algorithm for the SPCA problem.
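A generic baseline in the spirit of such thresholding algorithms can be sketched in a few lines: take the leading eigenvector of the covariance, keep its k largest-magnitude entries, zero the rest, and renormalize. This is a common simple scheme for illustration, not the paper's exact procedure or its approximation guarantee.

```python
import numpy as np

def spca_threshold(S, k):
    """Thresholding sketch for sparse PCA.

    Computes the leading eigenvector of the covariance S, keeps only
    its k largest-magnitude entries, and renormalizes, producing a
    unit vector with at most k nonzeros.
    """
    evals, evecs = np.linalg.eigh(S)
    v = evecs[:, -1]                         # leading eigenvector (eigh: ascending order)
    support = np.argsort(np.abs(v))[-k:]     # indices of the k largest entries
    x = np.zeros_like(v)
    x[support] = v[support]
    return x / np.linalg.norm(x)
```

Its appeal is the cost profile: one dense eigendecomposition plus a sort, as opposed to the combinatorial search over supports that exact SPCA requires.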
arXiv Detail & Related papers (2020-06-23T04:25:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.