Structural Effect and Spectral Enhancement of High-Dimensional Regularized Linear Discriminant Analysis
- URL: http://arxiv.org/abs/2507.16682v1
- Date: Tue, 22 Jul 2025 15:16:48 GMT
- Title: Structural Effect and Spectral Enhancement of High-Dimensional Regularized Linear Discriminant Analysis
- Authors: Yonghan Zhang, Zhangni Pu, Lu Yan, Jiang Hu
- Abstract summary: Regularized linear discriminant analysis (RLDA) is a widely used tool for classification and dimensionality reduction. Existing theoretical analyses of RLDA often lack clear insight into how data structure affects classification performance. We propose the Spectral Enhanced Discriminant Analysis (SEDA) algorithm, which achieves higher classification accuracy and more effective dimensionality reduction.
- Score: 3.0517619877113358
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Regularized linear discriminant analysis (RLDA) is a widely used tool for classification and dimensionality reduction, but its performance in high-dimensional scenarios is inconsistent. Existing theoretical analyses of RLDA often lack clear insight into how data structure affects classification performance. To address this issue, we derive a non-asymptotic approximation of the misclassification rate and thus analyze the structural effect and structural adjustment strategies of RLDA. Based on this, we propose the Spectral Enhanced Discriminant Analysis (SEDA) algorithm, which optimizes the data structure by adjusting the spiked eigenvalues of the population covariance matrix. By developing a new theoretical result on eigenvectors in random matrix theory, we derive an asymptotic approximation of the misclassification rate of SEDA. The bias correction algorithm and parameter selection strategy are then obtained. Experiments on synthetic and real datasets show that SEDA achieves higher classification accuracy and more effective dimensionality reduction than existing LDA methods.
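The RLDA classifier at the heart of the abstract can be sketched in a few lines: the pooled sample covariance is shrunk toward the identity by a ridge term before inverting. This is a minimal illustrative sketch of the general RLDA form, not the paper's exact estimator; the function names and the simple ridge regularizer are assumptions.

```python
import numpy as np

def rlda_fit(X0, X1, lam):
    """Fit a minimal regularized LDA classifier (illustrative sketch).

    X0, X1: (n0, p) and (n1, p) samples of the two classes.
    lam: ridge term added to the pooled covariance before inversion.
    Returns the discriminant direction w and intercept b.
    """
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    n0, n1 = len(X0), len(X1)
    p = X0.shape[1]
    # Pooled sample covariance, regularized toward the identity.
    S = (np.cov(X0, rowvar=False) * (n0 - 1)
         + np.cov(X1, rowvar=False) * (n1 - 1)) / (n0 + n1 - 2)
    w = np.linalg.solve(S + lam * np.eye(p), mu1 - mu0)
    b = -0.5 * w @ (mu0 + mu1)
    return w, b

def rlda_predict(X, w, b):
    # Assign class 1 iff the linear score is positive.
    return (X @ w + b > 0).astype(int)
```

In high dimensions the choice of `lam` and the spectrum of `S` interact in the way the paper's non-asymptotic analysis makes precise; SEDA goes further by adjusting the spiked eigenvalues themselves rather than shrinking uniformly.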
Related papers
- Nonparametric Linear Discriminant Analysis for High Dimensional Matrix-Valued Data [0.0]
We propose a novel extension of Fisher's Linear Discriminant Analysis (LDA) tailored for matrix-valued observations. We adopt a nonparametric empirical Bayes framework based on nonparametric maximum likelihood estimation (NPMLE). Our method generalizes effectively to the matrix setting, thereby improving classification performance.
arXiv Detail & Related papers (2025-07-25T07:30:24Z) - Linear Discriminant Analysis with Gradient Optimization on Covariance Inverse [4.872570541276082]
Linear discriminant analysis (LDA) is a fundamental method in statistical pattern recognition and classification. In this work, we propose LDA with gradient optimization (LDA-GO), a new approach that directly optimizes the inverse covariance matrix via gradient descent. The algorithm parametrizes the inverse covariance matrix through Cholesky factorization, incorporates a low-rank extension to reduce computational complexity, and employs a multiple-initialization strategy.
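The Cholesky parametrization mentioned in the summary keeps the estimated precision matrix positive definite by construction, since Omega = L Lᵀ for any invertible L. LDA-GO's classification-driven objective is not reproduced here; as a hedged stand-in, this sketch does gradient descent on the Gaussian likelihood term, whose minimizer is the ordinary inverse covariance, purely to illustrate the parametrization.

```python
import numpy as np

def precision_via_cholesky_gd(S, steps=500, lr=0.01):
    """Gradient descent on a Cholesky-style factor L so that
    Omega = L @ L.T is positive definite at every iterate.

    Illustrative objective (an assumption, not LDA-GO's):
        f(L) = tr(S @ L @ L.T) - 2 * log|det L|,
    whose minimizer satisfies L @ L.T = inv(S).
    """
    p = S.shape[0]
    L = np.eye(p)
    for _ in range(steps):
        # d/dL tr(S L L^T) = 2 S L;  d/dL log|det L| = inv(L).T
        grad = 2 * S @ L - 2 * np.linalg.inv(L).T
        L -= lr * grad
    return L @ L.T
```

Swapping the objective for a classification loss (as LDA-GO does) leaves the positive-definiteness guarantee intact, which is the point of the factorized parametrization.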
arXiv Detail & Related papers (2025-06-07T15:50:43Z) - Induced Covariance for Causal Discovery in Linear Sparse Structures [55.2480439325792]
Causal models seek to unravel the cause-effect relationships among variables from observed data.
This paper introduces a novel causal discovery algorithm designed for settings in which variables exhibit linearly sparse relationships.
arXiv Detail & Related papers (2024-10-02T04:01:38Z) - Synergistic eigenanalysis of covariance and Hessian matrices for enhanced binary classification [72.77513633290056]
We present a novel approach that combines the eigenanalysis of a covariance matrix evaluated on a training set with a Hessian matrix evaluated on a deep learning model.
Our method captures intricate patterns and relationships, enhancing classification performance.
arXiv Detail & Related papers (2024-02-14T16:10:42Z) - Regularized Linear Discriminant Analysis Using a Nonlinear Covariance Matrix Estimator [11.887333567383239]
Linear discriminant analysis (LDA) is a widely used technique for data classification.
LDA becomes inefficient when the data covariance matrix is ill-conditioned.
Regularized LDA methods have been proposed to cope with such a situation.
arXiv Detail & Related papers (2024-01-31T11:37:14Z) - A Bi-level Nonlinear Eigenvector Algorithm for Wasserstein Discriminant Analysis [3.4806267677524896]
Wasserstein discriminant analysis (WDA) is a linear dimensionality reduction method.
WDA can account for both global and local interconnections between data classes.
A bi-level nonlinear eigenvector algorithm (WDA-nepv) is presented.
arXiv Detail & Related papers (2022-11-21T22:40:43Z) - Spectrally-Corrected and Regularized Linear Discriminant Analysis for Spiked Covariance Model [2.517838307493912]
This paper proposes an improved linear discriminant analysis called spectrally-corrected and regularized LDA (SRLDA).
It is proved that SRLDA has a linear classification global optimal solution under the spiked model assumption.
Experiments on different data sets show that the SRLDA algorithm performs better in classification and dimensionality reduction than currently used tools.
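The spiked-covariance correction behind SRLDA (and behind SEDA's eigenvalue adjustment) rests on a standard random-matrix fact: when p/n → c, a population spike l > 1 + √c (unit noise) produces a sample eigenvalue biased upward to l·(1 + c/(l−1)). The sketch below inverts that forward map to debias the top eigenvalues; it is the textbook correction, not either paper's exact estimator.

```python
import numpy as np

def debias_spike(lam_hat, c):
    """Invert lam_hat = l * (1 + c / (l - 1)) for the population spike l
    (unit-noise spiked model, valid when l > 1 + sqrt(c)).
    Solves the quadratic l**2 + (c - 1 - lam_hat) * l + lam_hat = 0,
    taking the larger root.
    """
    b = lam_hat + 1 - c
    return (b + np.sqrt(b * b - 4 * lam_hat)) / 2

def spectrally_correct(S, n, k):
    """Replace the top-k sample eigenvalues of S (p x p, from n samples)
    with their debiased counterparts and rebuild the matrix."""
    p = S.shape[0]
    c = p / n
    vals, vecs = np.linalg.eigh(S)  # eigenvalues in ascending order
    vals = vals.copy()
    for i in range(1, k + 1):
        vals[-i] = debias_spike(vals[-i], c)
    return vecs @ np.diag(vals) @ vecs.T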
arXiv Detail & Related papers (2022-10-08T00:47:50Z) - Weight Vector Tuning and Asymptotic Analysis of Binary Linear Classifiers [82.5915112474988]
This paper proposes weight vector tuning of a generic binary linear classifier by parameterizing a decomposition of the discriminant with a scalar.
It is also found that weight vector tuning significantly improves the performance of Linear Discriminant Analysis (LDA) under high estimation noise.
arXiv Detail & Related papers (2021-10-01T17:50:46Z) - Benign Overfitting of Constant-Stepsize SGD for Linear Regression [122.70478935214128]
Inductive biases are central to preventing overfitting in practice.
This work considers this issue in arguably the most basic setting: constant-stepsize SGD for linear regression.
We reflect on a number of notable differences between the algorithmic regularization afforded by (unregularized) SGD in comparison to ordinary least squares.
arXiv Detail & Related papers (2021-03-23T17:15:53Z) - Understanding Implicit Regularization in Over-Parameterized Single Index Model [55.41685740015095]
We design regularization-free algorithms for the high-dimensional single index model.
We provide theoretical guarantees for the induced implicit regularization phenomenon.
arXiv Detail & Related papers (2020-07-16T13:27:47Z) - Asymptotic Analysis of an Ensemble of Randomly Projected Linear Discriminants [94.46276668068327]
In [1], an ensemble of randomly projected linear discriminants is used to classify datasets.
We develop a consistent estimator of the misclassification probability as an alternative to the computationally-costly cross-validation estimator.
We also demonstrate the use of our estimator for tuning the projection dimension on both real and synthetic data.
arXiv Detail & Related papers (2020-04-17T12:47:04Z)
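The randomly projected ensemble in the last entry sidesteps the ill-conditioned high-dimensional covariance by fitting plain LDA in many random low-dimensional projections and voting. A minimal sketch of that scheme (function name, defaults, and majority voting are illustrative assumptions; the paper's estimator and tuning procedure are not reproduced here):

```python
import numpy as np

def rp_lda_ensemble_predict(X0, X1, X_test, m=50, d=5, seed=0):
    """Majority vote over m LDA classifiers, each fit after a random
    Gaussian projection from p down to d dimensions (sketch)."""
    rng = np.random.default_rng(seed)
    p = X0.shape[1]
    votes = np.zeros(len(X_test))
    for _ in range(m):
        R = rng.normal(size=(p, d)) / np.sqrt(d)  # random projection
        Y0, Y1, Yt = X0 @ R, X1 @ R, X_test @ R
        mu0, mu1 = Y0.mean(axis=0), Y1.mean(axis=0)
        # Pooled covariance in the projected (well-conditioned) space.
        S = (np.cov(Y0, rowvar=False) * (len(Y0) - 1)
             + np.cov(Y1, rowvar=False) * (len(Y1) - 1)) / (len(Y0) + len(Y1) - 2)
        w = np.linalg.solve(S, mu1 - mu0)
        votes += (Yt @ w - 0.5 * w @ (mu0 + mu1) > 0)
    return (votes > m / 2).astype(int)
```

Because each member works in d dimensions with d much smaller than n, no regularization of the covariance is needed inside the loop; the projection dimension d plays the role that the ridge parameter plays in RLDA, which is why the paper's consistent misclassification estimator is useful for tuning it.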
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.