GO-LDA: Generalised Optimal Linear Discriminant Analysis
- URL: http://arxiv.org/abs/2305.14568v1
- Date: Tue, 23 May 2023 23:11:05 GMT
- Title: GO-LDA: Generalised Optimal Linear Discriminant Analysis
- Authors: Jiahui Liu, Xiaohao Cai, and Mahesan Niranjan
- Abstract summary: Linear discriminant analysis has been a useful tool in pattern recognition and data analysis research and practice.
We show that the generalised eigenanalysis solution to multiclass LDA neither yields orthogonal discriminant directions nor maximises discrimination of the projected data along them.
- Score: 6.644357197885522
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Linear discriminant analysis (LDA) has been a useful tool in pattern
recognition and data analysis research and practice. While linearity of class
boundaries cannot always be expected, nonlinear projections through pre-trained
deep neural networks have been used to map complex data onto feature spaces in
which linear discrimination works well. The solution to binary LDA is
obtained by eigenvalue analysis of within-class and between-class scatter
matrices. It is well known that the multiclass LDA is solved by an extension to
the binary LDA, a generalised eigenvalue problem, from which the largest
subspace that can be extracted is of dimension one lower than the number of
classes in the given problem. In this paper, we show that, apart from the first
of the discriminant directions, the generalised eigenanalysis solution to
multiclass LDA neither yields orthogonal discriminant directions nor
maximises discrimination of the projected data along them. Surprisingly, to the best
of our knowledge, this has not been noted in decades of literature on LDA. To
overcome this drawback, we present a derivation with strict theoretical
support for sequentially obtaining discriminant directions that are orthogonal
to previously computed ones and maximise in each step the Fisher criterion. We
show distributions of projections along these axes and demonstrate that
discrimination of data projected onto these discriminant directions achieves optimal
separation, much higher than that obtained from the generalised eigenvectors
of the multiclass LDA. Using a wide range of benchmark tasks, we present a
comprehensive empirical demonstration that on a number of pattern recognition
and classification problems, the optimal discriminant subspaces obtained by the
proposed method, referred to as GO-LDA (Generalised Optimal LDA), can offer
superior accuracy.
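The contrast described in the abstract can be made concrete in a few lines of linear algebra: classical multiclass LDA takes the top generalised eigenvectors of the between- and within-class scatter pair, which are Sw-conjugate but generally not mutually orthogonal, while a GO-LDA-style procedure re-maximises the Fisher criterion at each step inside the orthogonal complement of the directions already found. The sketch below illustrates that contrast only; it is not the authors' reference implementation, and the function names and the ridge term `eps` are illustrative assumptions.

```python
# Minimal sketch contrasting classical multiclass LDA with a GO-LDA-style
# sequential procedure; not the authors' reference implementation.
import numpy as np
from scipy.linalg import eigh, null_space

def scatter_matrices(X, y, eps=1e-6):
    """Within-class (Sw) and between-class (Sb) scatter matrices.
    A small ridge `eps` (an assumption here) keeps Sw positive definite."""
    d = X.shape[1]
    mean = X.mean(axis=0)
    Sw = eps * np.eye(d)
    Sb = np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        diff = (mc - mean)[:, None]
        Sb += len(Xc) * (diff @ diff.T)
    return Sw, Sb

def classical_lda(Sw, Sb, n_dirs):
    """Classical multiclass LDA: top generalised eigenvectors of (Sb, Sw).
    These are Sw-conjugate but, beyond the first, generally NOT mutually
    orthogonal -- the drawback highlighted in the abstract."""
    _, V = eigh(Sb, Sw)              # eigenvalues come in ascending order
    return V[:, ::-1][:, :n_dirs]    # keep the n_dirs largest

def go_lda_directions(Sw, Sb, n_dirs):
    """GO-LDA-style sketch: at each step maximise the Fisher criterion
    (w' Sb w) / (w' Sw w) over the orthogonal complement of the
    directions already found, yielding an orthonormal set."""
    d = Sw.shape[0]
    W = np.empty((d, 0))
    for _ in range(n_dirs):
        # Orthonormal basis Q of the complement of span(W).
        Q = null_space(W.T) if W.shape[1] else np.eye(d)
        # Reduced generalised eigenproblem restricted to that complement.
        _, Z = eigh(Q.T @ Sb @ Q, Q.T @ Sw @ Q)
        w = Q @ Z[:, -1]             # top eigenvector maximises the ratio
        W = np.column_stack([W, w / np.linalg.norm(w)])
    return W
```

As a quick check under these assumptions, the columns returned by `classical_lda` generally fail the orthogonality test `W.T @ W ≈ I`, while those from `go_lda_directions` satisfy it by construction; projecting data as `X @ W` then gives the discriminant features whose separation the paper compares.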
Related papers
- Synergistic eigenanalysis of covariance and Hessian matrices for enhanced binary classification [72.77513633290056]
We present a novel approach that combines the eigenanalysis of a covariance matrix evaluated on a training set with that of a Hessian matrix evaluated on a deep learning model.
Our method captures intricate patterns and relationships, enhancing classification performance.
arXiv Detail & Related papers (2024-02-14T16:10:42Z)
- Minimally Informed Linear Discriminant Analysis: training an LDA model with unlabelled data [51.673443581397954]
We show that it is possible to compute the exact projection vector from LDA models based on unlabelled data.
We show that the MILDA projection vector can be computed in a closed form with a computational cost comparable to LDA.
arXiv Detail & Related papers (2023-10-17T09:50:31Z)
- A Bi-level Nonlinear Eigenvector Algorithm for Wasserstein Discriminant Analysis [3.4806267677524896]
Wasserstein discriminant analysis (WDA) is a linear dimensionality reduction method.
WDA can account for both global and local interconnections between data classes.
A bi-level nonlinear eigenvector algorithm (WDA-nepv) is presented.
arXiv Detail & Related papers (2022-11-21T22:40:43Z)
- Spectrally-Corrected and Regularized Linear Discriminant Analysis for Spiked Covariance Model [2.517838307493912]
This paper proposes an improved linear discriminant analysis method called spectrally-corrected and regularized LDA (SRLDA).
It is proved that SRLDA achieves the globally optimal solution for linear classification under the spiked model assumption.
Experiments on different data sets show that the SRLDA algorithm performs better in classification and dimensionality reduction than currently used tools.
arXiv Detail & Related papers (2022-10-08T00:47:50Z)
- Reusing the Task-specific Classifier as a Discriminator: Discriminator-free Adversarial Domain Adaptation [55.27563366506407]
We introduce a discriminator-free adversarial learning network (DALN) for unsupervised domain adaptation (UDA).
DALN achieves explicit domain alignment and category discrimination through a unified objective.
DALN compares favorably against the existing state-of-the-art (SOTA) methods on a variety of public datasets.
arXiv Detail & Related papers (2022-04-08T04:40:18Z)
- Regularized Deep Linear Discriminant Analysis [26.08062442399418]
As a non-linear extension of the classic Linear Discriminant Analysis (LDA), Deep Linear Discriminant Analysis (DLDA) replaces the original Categorical Cross Entropy (CCE) loss function with an LDA-based objective.
A regularization method on the within-class scatter matrix is proposed to strengthen the discriminative ability of each dimension (a generic sketch of such scatter regularization appears after this list).
arXiv Detail & Related papers (2021-05-15T03:54:32Z)
- Riemannian-based Discriminant Analysis for Feature Extraction and Classification [2.1485350418225244]
Discriminant analysis is a widely used approach in machine learning to extract low-dimensional features from high-dimensional data.
Traditional Euclidean-based algorithms for discriminant analysis easily converge to spurious local minima.
We propose a novel method named Riemannian-based Discriminant Analysis (RDA), which transforms the traditional Euclidean-based methods to the Riemannian manifold space.
arXiv Detail & Related papers (2021-01-20T09:13:34Z)
- Self-Weighted Robust LDA for Multiclass Classification with Edge Classes [111.5515086563592]
A novel self-weighted robust LDA with an l2,1-norm based between-class distance criterion, called SWRLDA, is proposed for multi-class classification.
The proposed SWRLDA is easy to implement, and converges fast in practice.
arXiv Detail & Related papers (2020-09-24T12:32:55Z)
- High-Dimensional Quadratic Discriminant Analysis under Spiked Covariance Model [101.74172837046382]
We propose a novel quadratic classification technique, the parameters of which are chosen such that the Fisher discriminant ratio is maximized.
Numerical simulations show that the proposed classifier not only outperforms the classical R-QDA on both synthetic and real data but also has lower computational complexity.
arXiv Detail & Related papers (2020-06-25T12:00:26Z)
- Saliency-based Weighted Multi-label Linear Discriminant Analysis [101.12909759844946]
We propose a new variant of Linear Discriminant Analysis (LDA) to solve multi-label classification tasks.
The proposed method is based on a probabilistic model for defining the weights of individual samples.
The Saliency-based weighted Multi-label LDA approach is shown to lead to performance improvements in various multi-label classification problems.
arXiv Detail & Related papers (2020-04-08T19:40:53Z)
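A recurring ingredient in the related work above, e.g. the within-class scatter regularization mentioned under Regularized Deep Linear Discriminant Analysis, is shrinking Sw towards a scaled identity before any eigenanalysis. The snippet below is a generic illustration of that idea only, not the specific scheme of any listed paper; the shrinkage weight `gamma` is an assumption.

```python
import numpy as np

def shrink_within_scatter(Sw, gamma=0.1):
    """Generic shrinkage of a within-class scatter matrix towards a scaled
    identity: Sw_reg = (1 - gamma) * Sw + gamma * (tr(Sw) / d) * I.
    This keeps Sw_reg well conditioned, so inverting it (or solving a
    generalised eigenproblem against it) stays numerically stable."""
    d = Sw.shape[0]
    return (1.0 - gamma) * Sw + gamma * (np.trace(Sw) / d) * np.eye(d)
```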
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.