Partial Least Square Regression via Three-factor SVD-type Manifold
Optimization for EEG Decoding
- URL: http://arxiv.org/abs/2208.04324v1
- Date: Tue, 9 Aug 2022 11:57:02 GMT
- Title: Partial Least Square Regression via Three-factor SVD-type Manifold
Optimization for EEG Decoding
- Authors: Wanguang Yin, Zhichao Liang, Jianguo Zhang, Quanying Liu
- Abstract summary: We propose a new method for solving partial least square regression, named PLSR via optimization on the bi-Grassmann manifold (PLSRbiGr).
PLSRbiGr is validated with a variety of experiments for decoding EEG signals in motor imagery (MI) and steady-state visual evoked potential (SSVEP) tasks.
- Score: 4.0204191666595595
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Partial least square regression (PLSR) is a widely used statistical model
that reveals the linear relationships between latent factors derived from the
independent and dependent variables. However, traditional methods for solving
PLSR models are usually based on Euclidean space and easily get stuck in local
minima. To this end, we propose a new method for solving partial least square
regression, named PLSR via optimization on the bi-Grassmann manifold
(PLSRbiGr). Specifically, we first leverage a three-factor SVD-type
decomposition of the cross-covariance matrix defined on the bi-Grassmann
manifold, converting the orthogonality-constrained optimization problem into an
unconstrained optimization problem on the bi-Grassmann manifold, and then
incorporate Riemannian preconditioning via matrix scaling to regulate the
Riemannian metric at each iteration. PLSRbiGr is validated with a variety of
experiments for decoding EEG signals in motor imagery (MI) and steady-state
visual evoked potential (SSVEP) tasks. Experimental results demonstrate that
PLSRbiGr outperforms competing algorithms in multiple EEG decoding tasks, which
will greatly facilitate learning from small-sample data.
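For intuition, here is a minimal sketch of the three-factor setup: the
cross-covariance matrix C = X^T Y is approximated as U S V^T, with U and V
living on Grassmann manifolds (the bi-Grassmann geometry) and S an
unconstrained k x k middle factor, optimized with an off-the-shelf Riemannian
conjugate-gradient solver from the pymanopt library. The random data and sizes
are illustrative assumptions, and the plain solver omits the paper's Riemannian
preconditioning (matrix scaling of the metric), so this shows the problem setup
rather than the PLSRbiGr algorithm itself.

```python
# Sketch: three-factor SVD-type fit C ~= U S V^T over Grassmann x Grassmann x R^{k x k}.
# Generic Riemannian CG (no Riemannian preconditioning), for illustration only.
import autograd.numpy as anp
import numpy as np
import pymanopt
from pymanopt.manifolds import Euclidean, Grassmann, Product
from pymanopt.optimizers import ConjugateGradient

rng = np.random.default_rng(0)
p, q, k = 32, 8, 4                      # feature dim, target dim, latent factors
X = rng.standard_normal((200, p))       # stand-in for EEG features
Y = rng.standard_normal((200, q))       # stand-in for labels/targets
C = X.T @ Y                             # cross-covariance matrix

manifold = Product([Grassmann(p, k), Grassmann(q, k), Euclidean(k, k)])

@pymanopt.function.autograd(manifold)
def cost(U, V, S):
    # Frobenius-norm misfit of the three-factor decomposition.
    return anp.sum((C - U @ S @ V.T) ** 2)

problem = pymanopt.Problem(manifold, cost)
result = ConjugateGradient(verbosity=0).run(problem)
U_hat, V_hat, S_hat = result.point
print("residual:", np.linalg.norm(C - U_hat @ S_hat @ V_hat.T))
```

Since the middle factor S absorbs rotations of the bases, only the column
spaces of U and V matter, which is what motivates optimizing over Grassmann
rather than Stiefel manifolds.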
Related papers
- Variational Learning of Gaussian Process Latent Variable Models through Stochastic Gradient Annealed Importance Sampling [22.256068524699472]
In this work, we propose an Annealed Importance Sampling (AIS) approach to address these issues.
We combine the strengths of Sequential Monte Carlo samplers and VI to explore a wider range of posterior distributions and gradually approach the target distribution.
Experimental results on both toy and image datasets demonstrate that our method outperforms state-of-the-art methods in terms of tighter variational bounds, higher log-likelihoods, and more robust convergence.
arXiv Detail & Related papers (2024-08-13T08:09:05Z) - Sparse Tensor PCA via Tensor Decomposition for Unsupervised Feature Selection [8.391109286933856]
We develop two Sparse Principal Component Analysis (STPCA) models that utilize the projection directions in the factor matrices to perform unsupervised feature selection.
For both models, we prove that the optimal solution of each subproblem falls on the Hermitian Positive Semidefinite (HPSD) cone.
According to the experimental results, the two proposed methods are suitable for handling different data tensor scenarios and outperform the state-of-the-art UFS methods.
arXiv Detail & Related papers (2024-07-24T04:04:56Z) - Variable Substitution and Bilinear Programming for Aligning Partially Overlapping Point Sets [48.1015832267945]
This research presents a method to meet these requirements by minimizing the objective function of the RPM algorithm.
A branch-and-bound (BnB) algorithm is devised, which branches solely over the parameters, thereby boosting the convergence rate.
Empirical evaluations demonstrate better robustness of the proposed methodology against non-rigid deformation, positional noise, and outliers, when compared with prevailing state-of-the-art approaches.
arXiv Detail & Related papers (2024-05-14T13:28:57Z) - An adaptive shortest-solution guided decimation approach to sparse
high-dimensional linear regression [2.3759847811293766]
The approach is adapted from the shortest-solution guided decimation algorithm and is referred to as ASSD.
ASSD is especially suitable for linear regression problems with highly correlated measurement matrices encountered in real-world applications.
arXiv Detail & Related papers (2022-11-28T04:29:57Z) - Vector-Valued Least-Squares Regression under Output Regularity
Assumptions [73.99064151691597]
We propose and analyse a reduced-rank method for solving least-squares regression problems with infinite dimensional output.
We derive learning bounds for our method, and study under which settings statistical performance is improved in comparison to the full-rank method.
arXiv Detail & Related papers (2022-11-16T15:07:00Z) - Numerical Optimizations for Weighted Low-rank Estimation on Language
Model [73.12941276331316]
Singular value decomposition (SVD) is one of the most popular compression methods that approximates a target matrix with smaller matrices.
Standard SVD treats the parameters within the matrix with equal importance, which is a simple but unrealistic assumption.
We show that our method can perform better than current SOTA methods in neural-based language models.
arXiv Detail & Related papers (2022-11-02T00:58:02Z) - Sparse high-dimensional linear regression with a partitioned empirical
Bayes ECM algorithm [62.997667081978825]
We propose a computationally efficient and powerful Bayesian approach for sparse high-dimensional linear regression.
Minimal prior assumptions on the parameters are made through the use of plug-in empirical Bayes estimates.
The proposed approach is implemented in the R package probe.
arXiv Detail & Related papers (2022-09-16T19:15:50Z) - Improving the Sample-Complexity of Deep Classification Networks with
Invariant Integration [77.99182201815763]
Leveraging prior knowledge on intraclass variance due to transformations is a powerful method to improve the sample complexity of deep neural networks.
We propose a novel monomial selection algorithm based on pruning methods to allow an application to more complex problems.
We demonstrate the improved sample complexity on the Rotated-MNIST, SVHN and CIFAR-10 datasets.
arXiv Detail & Related papers (2022-02-08T16:16:11Z) - Reduction of the Number of Variables in Parametric Constrained
Least-Squares Problems [0.20305676256390928]
This paper proposes techniques for reducing the number of involved optimization variables.
We show the good performance of the proposed techniques in numerical tests and in a linearized MPC problem of a nonlinear benchmark process.
arXiv Detail & Related papers (2020-12-18T18:26:40Z) - Effective Dimension Adaptive Sketching Methods for Faster Regularized
Least-Squares Optimization [56.05635751529922]
We propose a new randomized algorithm for solving L2-regularized least-squares problems based on sketching.
We consider two of the most popular random embeddings, namely, Gaussian embeddings and the Subsampled Randomized Hadamard Transform (SRHT).
arXiv Detail & Related papers (2020-06-10T15:00:09Z)