An Inverse-free Truncated Rayleigh-Ritz Method for Sparse Generalized
Eigenvalue Problem
- URL: http://arxiv.org/abs/2003.10897v1
- Date: Tue, 24 Mar 2020 14:53:55 GMT
- Title: An Inverse-free Truncated Rayleigh-Ritz Method for Sparse Generalized
Eigenvalue Problem
- Authors: Yunfeng Cai and Ping Li
- Abstract summary: The inverse-free truncated Rayleigh-Ritz method (IFTRR) is developed to efficiently solve the sparse generalized eigenvalue problem.
A new truncation strategy is proposed, which effectively finds the support set of the leading eigenvector.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper considers the sparse generalized eigenvalue problem (SGEP), which
aims to find the leading eigenvector with at most $k$ nonzero entries. SGEP
naturally arises in many applications in machine learning, statistics, and
scientific computing, for example, the sparse principal component analysis
(SPCA), the sparse discriminant analysis (SDA), and the sparse canonical
correlation analysis (SCCA). In this paper, we focus on the development of a
three-stage algorithm named {\em inverse-free truncated Rayleigh-Ritz method}
({\em IFTRR}) to efficiently solve SGEP. In each iteration of IFTRR, only a
small number of matrix-vector products is required. This makes IFTRR
well-suited for large scale problems. Particularly, a new truncation strategy
is proposed, which is able to find the support set of the leading eigenvector
effectively. Theoretical results are developed to explain why IFTRR works well.
Numerical simulations demonstrate the merits of IFTRR.
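The paper specifies IFTRR as a three-stage algorithm; as a rough illustration of the underlying ideas (inverse-free iteration using only matrix-vector products, Rayleigh-Ritz on a small subspace, and truncation to $k$ nonzero entries), the following sketch is a simplification of our own, not the authors' exact IFTRR:

```python
import numpy as np
from scipy.linalg import eigh, qr

def iftrr_sketch(A, B, k, m=6, iters=50, seed=0):
    """Illustrative inverse-free truncated Rayleigh-Ritz iteration.

    Approximates the leading eigenvector of A x = lambda B x with at most
    k nonzero entries, using only matrix-vector products with A and B
    (B is never inverted or factorized).  A simplified sketch in the
    spirit of IFTRR, not the exact algorithm of the paper.
    """
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n)
    x /= np.linalg.norm(x)
    for _ in range(iters):
        rho = (x @ A @ x) / (x @ B @ x)        # Rayleigh quotient
        # Build a small Krylov-type subspace of (A - rho*B) around x.
        V = np.empty((n, m))
        V[:, 0] = x
        for j in range(1, m):
            V[:, j] = A @ V[:, j - 1] - rho * (B @ V[:, j - 1])
        V, _ = qr(V, mode='economic')          # orthonormal basis
        # Rayleigh-Ritz: solve the small projected generalized problem.
        Am, Bm = V.T @ A @ V, V.T @ B @ V
        _, U = eigh(Am, Bm)
        y = V @ U[:, -1]                       # leading Ritz vector
        # Truncation: zero out all but the k largest-magnitude entries.
        y[np.argsort(np.abs(y))[:-k]] = 0.0
        x = y / np.linalg.norm(y)
    return x
```

With k equal to the dimension the truncation step is inactive and the sketch reduces to a plain inverse-free Rayleigh-Ritz iteration for the dense pencil, which is a useful sanity check.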
Related papers
- Entrywise error bounds for low-rank approximations of kernel matrices [55.524284152242096]
We derive entrywise error bounds for low-rank approximations of kernel matrices obtained using the truncated eigen-decomposition.
A key technical innovation is a delocalisation result for the eigenvectors of the kernel matrix corresponding to small eigenvalues.
We validate our theory with an empirical study of a collection of synthetic and real-world datasets.
arXiv Detail & Related papers (2024-05-23T12:26:25Z) - Multi-Irreducible Spectral Synchronization for Robust Rotation Averaging [1.2289361708127877]
We show how to estimate a set of unknown orientations $R_1, \ldots, R_N \in SO(3)$, given noisy measurements.
Results indicate how to devise measurement networks that are guaranteed to achieve accurate estimation.
arXiv Detail & Related papers (2023-11-28T06:25:26Z) - Error Bounds for Learning with Vector-Valued Random Features [2.375038919274297]
This paper provides a comprehensive error analysis of learning with vector-valued random features (RF).
The theory is developed for RF ridge regression in a fully general infinite-dimensional input-output setting.
arXiv Detail & Related papers (2023-05-26T18:00:08Z) - Exploring the Algorithm-Dependent Generalization of AUPRC Optimization
with List Stability [107.65337427333064]
Optimization of the Area Under the Precision-Recall Curve (AUPRC) is a crucial problem for machine learning.
In this work, we present the first trial in the algorithm-dependent generalization of AUPRC optimization.
Experiments on three image retrieval datasets speak to the effectiveness and soundness of our framework.
arXiv Detail & Related papers (2022-09-27T09:06:37Z) - Partial Least Square Regression via Three-factor SVD-type Manifold
Optimization for EEG Decoding [4.0204191666595595]
We propose a new method to solve partial least square regression, named PLSR via optimization on bi-Grassmann manifold (PLSRbiGr).
PLSRbiGr is validated with a variety of experiments for decoding EEG signals in motor imagery (MI) and steady-state visual evoked potential (SSVEP) tasks.
arXiv Detail & Related papers (2022-08-09T11:57:02Z) - Entrywise Recovery Guarantees for Sparse PCA via Sparsistent Algorithms [9.112172220055431]
We provide entrywise $\ell_{2,\infty}$ bounds for Sparse PCA under a general high-dimensional subgaussian design.
Our results hold for any algorithm that selects the correct support with high probability, i.e., those that are sparsistent.
arXiv Detail & Related papers (2022-02-08T18:50:35Z) - Unfolding Projection-free SDP Relaxation of Binary Graph Classifier via
GDPA Linearization [59.87663954467815]
Algorithm unfolding creates an interpretable and parsimonious neural network architecture by implementing each iteration of a model-based algorithm as a neural layer.
In this paper, leveraging a recent linear algebraic theorem called Gershgorin disc perfect alignment (GDPA), we unroll a projection-free algorithm for the semi-definite programming relaxation (SDR) of a binary graph classifier.
Experimental results show that our unrolled network outperformed pure model-based graph classifiers, and achieved comparable performance to pure data-driven networks but using far fewer parameters.
arXiv Detail & Related papers (2021-09-10T07:01:15Z) - Minimax Estimation of Linear Functions of Eigenvectors in the Face of
Small Eigen-Gaps [95.62172085878132]
Eigenvector perturbation analysis plays a vital role in various statistical data science applications.
We develop a suite of statistical theory that characterizes the perturbation of arbitrary linear functions of an unknown eigenvector.
In order to mitigate a non-negligible bias issue inherent to the natural "plug-in" estimator, we develop de-biased estimators.
arXiv Detail & Related papers (2021-04-07T17:55:10Z) - Analysis of Truncated Orthogonal Iteration for Sparse Eigenvector
Problems [78.95866278697777]
We propose two variants of the Truncated Orthogonal Iteration to compute multiple leading eigenvectors with sparsity constraints simultaneously.
We then apply our algorithms to solve the sparse principal component analysis problem for a wide range of test datasets.
arXiv Detail & Related papers (2021-03-24T23:11:32Z) - Proximal and Federated Random Reshuffling [11.83842808044211]
We propose two new algorithms for Random Reshuffling.
ProxRR and FedRR solve composite convex finite-sum minimization problems.
ProxRR is faster than algorithms that evaluate the proximal operator in every iteration.
arXiv Detail & Related papers (2021-02-12T18:59:24Z) - Approximation Algorithms for Sparse Principal Component Analysis [57.5357874512594]
Principal component analysis (PCA) is a widely used dimension reduction technique in machine learning and statistics.
Various approaches to obtain sparse principal direction loadings have been proposed, which are termed Sparse Principal Component Analysis.
We present thresholding as a provably accurate, polynomial time, approximation algorithm for the SPCA problem.
arXiv Detail & Related papers (2020-06-23T04:25:36Z)
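The thresholding idea above can be illustrated with a minimal sketch: compute the leading eigenvector of the (sample covariance) matrix, keep only its $k$ largest-magnitude entries, and renormalize. This shows the general scheme only; the cited paper's actual algorithm and accuracy guarantees differ in detail.

```python
import numpy as np

def spca_thresholding(A, k):
    """Simple thresholding heuristic for sparse PCA.

    Computes the leading eigenvector of the symmetric matrix A, keeps
    its k largest-magnitude entries, zeroes the rest, and renormalizes.
    """
    _, V = np.linalg.eigh(A)           # eigenvalues in ascending order
    v = V[:, -1]                       # leading eigenvector
    keep = np.argsort(np.abs(v))[-k:]  # indices of k largest entries
    x = np.zeros_like(v)
    x[keep] = v[keep]
    return x / np.linalg.norm(x)
```

When the leading eigenvector is already concentrated on a few coordinates, the truncated vector captures nearly all of the explained variance while being exactly $k$-sparse.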
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences of its use.