Fast Sparse PCA via Positive Semidefinite Projection for Unsupervised
Feature Selection
- URL: http://arxiv.org/abs/2309.06202v1
- Date: Tue, 12 Sep 2023 13:10:06 GMT
- Title: Fast Sparse PCA via Positive Semidefinite Projection for Unsupervised
Feature Selection
- Authors: Junjing Zheng, Xinyu Zhang, Yongxiang Liu, Weidong Jiang, Kai Huo, Li
Liu
- Abstract summary: It is proved that the optimal solution to a convex SPCA model falls onto the Positive Semidefinite (PSD) cone.
A two-step fast optimization algorithm via PSD projection is presented to solve the proposed model.
- Score: 17.631455813775705
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the field of unsupervised feature selection, sparse principal component analysis (SPCA) methods have attracted increasing attention in recent years. Compared to spectral-based methods, SPCA methods do not rely on the construction of a similarity matrix and show better feature selection ability on real-world data. The original SPCA formulates a nonconvex optimization problem. Existing convex SPCA methods reformulate SPCA as a convex model by regarding the reconstruction matrix as an optimization variable. However, they lack constraints equivalent to the orthogonality restriction in SPCA, which leads to a larger solution space. In this paper, it is proved that the optimal solution to a convex SPCA model falls onto the Positive Semidefinite (PSD) cone. A standard convex SPCA-based model with a PSD constraint for unsupervised feature selection is proposed, and a two-step fast optimization algorithm via PSD projection is presented to solve it. Two other existing convex SPCA-based models are also proven to attain their optimal solutions on the PSD cone, so PSD versions of these two models are proposed to accelerate their convergence as well. We also provide a regularization parameter setting strategy for the proposed method. Experiments on synthetic and real-world datasets demonstrate the effectiveness and efficiency of the proposed methods.
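The abstract's key computational ingredients are a projection onto the PSD cone and feature scoring based on the reconstruction matrix. Below is a minimal, hedged sketch in Python/NumPy of what such a projection and row-norm scoring step could look like; the function names, the row-norm scoring rule, and the toy usage are illustrative assumptions, not the authors' exact two-step algorithm.

```python
import numpy as np

def project_psd(W):
    """Project a square matrix onto the PSD cone by symmetrizing and
    clipping negative eigenvalues to zero (standard Frobenius projection)."""
    W_sym = (W + W.T) / 2.0                    # symmetrize first
    eigvals, eigvecs = np.linalg.eigh(W_sym)   # eigendecomposition
    eigvals = np.clip(eigvals, 0.0, None)      # drop the negative part
    return (eigvecs * eigvals) @ eigvecs.T     # V diag(lambda) V^T

def feature_scores(W):
    """Score each feature by the l2 norm of its row in the reconstruction
    matrix; larger norm suggests a more important feature (assumed rule)."""
    return np.linalg.norm(W, axis=1)

# Toy usage: project an indefinite symmetric matrix, then rank features.
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
W_psd = project_psd((A + A.T) / 2.0)
ranking = np.argsort(-feature_scores(W_psd))
print("feature ranking:", ranking)
```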
Related papers
- Sparse Tensor PCA via Tensor Decomposition for Unsupervised Feature Selection [8.391109286933856]
We develop two Sparse Tensor Principal Component Analysis (STPCA) models that utilize the projection directions in the factor matrices to perform unsupervised feature selection.
For both models, we prove that the optimal solution of each subproblem falls onto the Hermitian Positive Semidefinite (HPSD) cone.
According to the experimental results, the two proposed methods are suitable for handling different data tensor scenarios and outperform the state-of-the-art UFS methods.
arXiv Detail & Related papers (2024-07-24T04:04:56Z) - Latent Semantic Consensus For Deterministic Geometric Model Fitting [109.44565542031384]
We propose an effective method called Latent Semantic Consensus (LSC)
LSC formulates the model fitting problem into two latent semantic spaces based on data points and model hypotheses.
LSC is able to provide consistent and reliable solutions within only a few milliseconds for general multi-structural model fitting.
arXiv Detail & Related papers (2024-03-11T05:35:38Z) - The Convex Landscape of Neural Networks: Characterizing Global Optima
and Stationary Points via Lasso Models [75.33431791218302]
This paper examines convex reformulations of Deep Neural Network (DNN) training.
We show that the stationary points of the nonconvex training objective can be characterized as global optima of subsampled convex programs.
arXiv Detail & Related papers (2023-12-19T23:04:56Z) - Constrained Bayesian Optimization Under Partial Observations: Balanced
Improvements and Provable Convergence [6.461785985849886]
We endeavor to design an efficient and provable method for expensive partially observable constrained optimization problems (POCOPs) under the framework of constrained Bayesian optimization.
We present an improved design of the acquisition functions that introduces balanced exploration during optimization.
We propose a Gaussian process embedding different likelihoods as the surrogate model for a partially observable constraint.
arXiv Detail & Related papers (2023-12-06T01:00:07Z) - Provably Efficient UCB-type Algorithms For Learning Predictive State
Representations [55.00359893021461]
The sequential decision-making problem is statistically learnable if it admits a low-rank structure modeled by predictive state representations (PSRs)
This paper proposes the first known UCB-type approach for PSRs, featuring a novel bonus term that upper bounds the total variation distance between the estimated and true models.
In contrast to existing approaches for PSRs, our UCB-type algorithms enjoy computational tractability, last-iterate guaranteed near-optimal policy, and guaranteed model accuracy.
arXiv Detail & Related papers (2023-07-01T18:35:21Z) - An Optimization-based Deep Equilibrium Model for Hyperspectral Image
Deconvolution with Convergence Guarantees [71.57324258813675]
We propose a novel methodology for addressing the hyperspectral image deconvolution problem.
A new optimization problem is formulated, leveraging a learnable regularizer in the form of a neural network.
The derived iterative solver is then expressed as a fixed-point calculation problem within the Deep Equilibrium framework (a generic fixed-point iteration sketch appears after this list).
arXiv Detail & Related papers (2023-06-10T08:25:16Z) - Sparse high-dimensional linear regression with a partitioned empirical
Bayes ECM algorithm [62.997667081978825]
We propose a computationally efficient and powerful Bayesian approach for sparse high-dimensional linear regression.
Minimal prior assumptions on the parameters are made through plug-in empirical Bayes estimates.
The proposed approach is implemented in the R package probe.
arXiv Detail & Related papers (2022-09-16T19:15:50Z) - A Provably Efficient Model-Free Posterior Sampling Method for Episodic
Reinforcement Learning [50.910152564914405]
Existing posterior sampling methods for reinforcement learning are limited by being model-based or by lacking worst-case theoretical guarantees beyond linear MDPs.
This paper proposes a new model-free formulation of posterior sampling that applies to more general episodic reinforcement learning problems with theoretical guarantees.
arXiv Detail & Related papers (2022-08-23T12:21:01Z) - Fast Convex Optimization for Two-Layer ReLU Networks: Equivalent Model
Classes and Cone Decompositions [41.337814204665364]
We develop algorithms for convex optimization of two-layer neural networks with ReLU activation functions.
We show that convex gated ReLU models obtain data-dependent approximation bounds for the ReLU training problem.
arXiv Detail & Related papers (2022-02-02T23:50:53Z) - Spike and slab Bayesian sparse principal component analysis [0.6599344783327054]
We propose a novel parameter-expanded coordinate ascent variational inference (PX-CAVI) algorithm.
We demonstrate that the PX-CAVI algorithm outperforms two popular SPCA approaches.
The algorithm is then applied to study a lung cancer gene expression dataset.
arXiv Detail & Related papers (2021-01-30T20:28:30Z)
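For the Deep Equilibrium entry above, the core computation is solving a fixed-point equation z = f(z, x). The sketch below is a generic damped fixed-point iteration offered only to illustrate that framing; the contractive update map, damping factor, and tolerances are assumptions, not the paper's learned solver.

```python
import numpy as np

def fixed_point_solve(f, x, z0, alpha=0.5, tol=1e-8, max_iter=500):
    """Damped fixed-point iteration z <- (1 - alpha) * z + alpha * f(z, x),
    stopping once successive iterates are sufficiently close."""
    z = z0
    for _ in range(max_iter):
        z_next = (1.0 - alpha) * z + alpha * f(z, x)
        if np.linalg.norm(z_next - z) < tol:
            return z_next
        z = z_next
    return z

# Toy contractive map standing in for a learned iteration (spectral radius < 1).
A = np.array([[0.3, 0.1], [0.0, 0.2]])
f = lambda z, x: A @ z + x
z_star = fixed_point_solve(f, x=np.array([1.0, -1.0]), z0=np.zeros(2))
print(z_star)  # approximately satisfies z = A z + x
```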