Two-dimensional Bhattacharyya bound linear discriminant analysis with
its applications
- URL: http://arxiv.org/abs/2011.05507v1
- Date: Wed, 11 Nov 2020 01:56:42 GMT
- Title: Two-dimensional Bhattacharyya bound linear discriminant analysis with
its applications
- Authors: Yan-Ru Guo, Yan-Qin Bai, Chun-Na Li, Lan Bai, Yuan-Hai Shao
- Abstract summary: The recently proposed L2-norm linear discriminant analysis criterion via Bhattacharyya error bound estimation (L2BLDA) is an effective improvement of linear discriminant analysis (LDA) for feature extraction.
We extend L2BLDA to a two-dimensional Bhattacharyya bound linear discriminant analysis (2DBLDA).
By construction, 2DBLDA avoids the small sample size problem, possesses robustness, and can be solved through a simple standard eigenvalue decomposition.
- Score: 8.392689203476955
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The recently proposed L2-norm linear discriminant analysis criterion
via the Bhattacharyya error bound estimation (L2BLDA) is an effective
improvement of linear discriminant analysis (LDA) for feature extraction.
However, L2BLDA was proposed only for vector input samples. When faced with
two-dimensional (2D) inputs such as images, it loses useful information, since
it does not consider the intrinsic structure of images. In this
paper, we extend L2BLDA to a two-dimensional Bhattacharyya bound linear
discriminant analysis (2DBLDA). 2DBLDA maximizes the matrix-based between-class
distance, measured by the weighted pairwise distances of class means, while
simultaneously minimizing the matrix-based within-class distance. The weighting
constant between the between-class and within-class terms is determined by the
involved data, which makes the proposed 2DBLDA adaptive. In addition, the
criterion of 2DBLDA is equivalent to optimizing an upper bound of the
Bhattacharyya error. By construction, 2DBLDA avoids the small sample size
problem, possesses robustness, and can be solved through a simple standard
eigenvalue decomposition. The experimental results on image
recognition and face image reconstruction demonstrate the effectiveness of the
proposed methods.
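Taken at face value, the abstract pins down the computational skeleton: build the two matrix-valued scatter terms, combine them with the data-determined weighting constant, and keep the leading eigenvectors of the resulting symmetric matrix. Below is a minimal numpy sketch of that skeleton; the sqrt(p_c * p_d) pairwise weights and the fallback choice of `delta` are illustrative assumptions, not the constants derived in the paper.

```python
import numpy as np

def two_d_blda(X, y, k, delta=None):
    """Sketch of a 2DBLDA-style left projection for 2D samples.

    X : (N, m, n) array of image samples; y : (N,) class labels;
    k : number of discriminant directions to keep.
    The sqrt(p_c * p_d) weights and the default `delta` are
    illustrative stand-ins for the paper's derived constants.
    """
    classes, counts = np.unique(y, return_counts=True)
    N, m, _ = X.shape
    priors = counts / N
    means = np.stack([X[y == c].mean(axis=0) for c in classes])

    # Matrix-based between-class scatter: weighted pairwise
    # distances of class means.
    Sb = np.zeros((m, m))
    for i in range(len(classes)):
        for j in range(i + 1, len(classes)):
            D = means[i] - means[j]
            Sb += np.sqrt(priors[i] * priors[j]) * D @ D.T

    # Matrix-based within-class scatter.
    Sw = np.zeros((m, m))
    for c, Mc in zip(classes, means):
        for Xi in X[y == c]:
            D = Xi - Mc
            Sw += D @ D.T
    Sw /= N

    # Data-determined trade-off between the two terms.
    if delta is None:
        delta = np.trace(Sb) / max(np.trace(Sw), 1e-12)

    # A *standard* symmetric eigenproblem: no inversion of Sw is
    # needed, which is how the construction sidesteps the small
    # sample size problem.
    vals, vecs = np.linalg.eigh(Sb - delta * Sw)
    return vecs[:, np.argsort(vals)[::-1][:k]]
```

A right-side projection can be obtained the same way from the transposed samples, and because the criterion is a weighted difference rather than a ratio of scatter matrices, no inverse of the within-class term is ever required.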
Related papers
- A Convex formulation for linear discriminant analysis [1.3124513975412255]
We present a supervised dimensionality reduction technique called Convex Linear Discriminant Analysis (ConvexLDA).
We show that ConvexLDA outperforms several popular linear discriminant analysis (LDA)-based methods on a range of high-dimensional biological data, image data sets, etc.
arXiv Detail & Related papers (2025-03-17T18:17:49Z)
- Dimension reduction via score ratio matching [0.9012198585960441]
We propose a framework, derived from score-matching, to extend gradient-based dimension reduction to problems where gradients are unavailable.
We show that our approach outperforms standard score-matching for problems with low-dimensional structure.
arXiv Detail & Related papers (2024-10-25T22:21:03Z)
- Synergistic eigenanalysis of covariance and Hessian matrices for enhanced binary classification [72.77513633290056]
We present a novel approach that combines the eigenanalysis of a covariance matrix evaluated on a training set with a Hessian matrix evaluated on a deep learning model.
Our method captures intricate patterns and relationships, enhancing classification performance.
arXiv Detail & Related papers (2024-02-14T16:10:42Z)
- Minimally Informed Linear Discriminant Analysis: training an LDA model with unlabelled data [51.673443581397954]
We show that it is possible to compute the exact LDA projection vector from unlabelled data.
We show that the MILDA projection vector can be computed in a closed form with a computational cost comparable to LDA.
arXiv Detail & Related papers (2023-10-17T09:50:31Z)
- GO-LDA: Generalised Optimal Linear Discriminant Analysis [6.644357197885522]
Linear discriminant analysis has been a useful tool in pattern recognition and data analysis research and practice.
We show that the generalised eigenanalysis solution to multiclass LDA yields neither orthogonal discriminant directions nor maximal discrimination of the projected data along them.
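That observation is easy to reproduce: the classical multiclass solution solves the generalised symmetric eigenproblem Sb w = lambda Sw w, and such solvers return Sw-orthogonal rather than Euclidean-orthogonal directions. A small self-contained check (the synthetic data and the ridge on Sw are illustrative choices):

```python
import numpy as np
from scipy.linalg import eigh

def lda_directions(X, y, k):
    """Classical multiclass LDA via Sb w = lambda Sw w."""
    classes = np.unique(y)
    mu = X.mean(axis=0)
    d = X.shape[1]
    Sb, Sw = np.zeros((d, d)), np.zeros((d, d))
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sb += len(Xc) * np.outer(mc - mu, mc - mu)
        Sw += (Xc - mc).T @ (Xc - mc)
    # Generalised symmetric eigensolver; eigenvectors come out
    # Sw-orthonormal, not orthonormal in the Euclidean sense.
    vals, vecs = eigh(Sb, Sw + 1e-8 * np.eye(d))
    return vecs[:, np.argsort(vals)[::-1][:k]]

rng = np.random.default_rng(0)
y = rng.integers(0, 3, size=300)
X = rng.normal(size=(300, 5))
X[:, :3] += np.eye(3)[y] * 3.0     # separate the three class means
W = lda_directions(X, y, k=2)
print(W.T @ W)  # off-diagonal entries are generally non-zero
```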
arXiv Detail & Related papers (2023-05-23T23:11:05Z)
- A Bi-level Nonlinear Eigenvector Algorithm for Wasserstein Discriminant Analysis [3.4806267677524896]
Wasserstein discriminant analysis (WDA) is a linear dimensionality reduction method.
WDA can account for both global and local interconnections between data classes.
A bi-level nonlinear eigenvector algorithm (WDA-nepv) is presented.
arXiv Detail & Related papers (2022-11-21T22:40:43Z)
- High-Dimensional Sparse Bayesian Learning without Covariance Matrices [66.60078365202867]
We introduce a new inference scheme that avoids explicit construction of the covariance matrix.
Our approach couples a little-known diagonal estimation result from numerical linear algebra with the conjugate gradient algorithm.
On several simulations, our method scales better than existing approaches in computation time and memory.
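The two named ingredients are enough to sketch the likely mechanics: a stochastic probe estimator for the diagonal of a matrix inverse (in the spirit of the Bekas-Kokiopoulou-Saad estimator) paired with matrix-free conjugate-gradient solves. Whether this matches the paper's exact scheme is an assumption; the probe count and the example precision matrix below are illustrative.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def inverse_diagonal(matvec, n, num_probes=64, seed=0):
    """Estimate diag(A^{-1}) without forming A: for Rademacher
    probes v, E[v * (A^{-1} v)] equals that diagonal, and each
    solve A x = v is done matrix-free with conjugate gradients."""
    A = LinearOperator((n, n), matvec=matvec)
    rng = np.random.default_rng(seed)
    acc = np.zeros(n)
    for _ in range(num_probes):
        v = rng.choice([-1.0, 1.0], size=n)  # Rademacher probe
        x, _ = cg(A, v)                      # matrix-free solve
        acc += v * x
    return acc / num_probes

# Usage on an SBL-shaped precision matrix A = Phi^T Phi + diag(alpha),
# applied without ever materialising the n-by-n covariance.
rng = np.random.default_rng(1)
Phi = rng.normal(size=(200, 500))
alpha = np.ones(500)
d = inverse_diagonal(lambda x: Phi.T @ (Phi @ x) + alpha * x, 500)
```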
arXiv Detail & Related papers (2022-02-25T16:35:26Z)
- Unfolding Projection-free SDP Relaxation of Binary Graph Classifier via GDPA Linearization [59.87663954467815]
Algorithm unfolding creates an interpretable and parsimonious neural network architecture by implementing each iteration of a model-based algorithm as a neural layer.
In this paper, leveraging a recent linear algebraic theorem called Gershgorin disc perfect alignment (GDPA), we unroll a projection-free algorithm for the semi-definite programming relaxation (SDR) of a binary graph classifier.
Experimental results show that our unrolled network outperformed pure model-based graph classifiers and achieved performance comparable to pure data-driven networks while using far fewer parameters.
arXiv Detail & Related papers (2021-09-10T07:01:15Z)
- Covariance-Free Sparse Bayesian Learning [62.24008859844098]
We introduce a new SBL inference algorithm that avoids explicit inversions of the covariance matrix.
Our method can be up to thousands of times faster than existing baselines.
We showcase how our new algorithm enables SBL to tractably tackle high-dimensional signal recovery problems.
arXiv Detail & Related papers (2021-05-21T16:20:07Z)
- A Doubly Regularized Linear Discriminant Analysis Classifier with Automatic Parameter Selection [24.027886914804775]
Linear discriminant analysis (LDA) based classifiers tend to falter in many practical settings where the training data size is smaller than, or comparable to, the number of features.
We propose a doubly regularized LDA classifier that we denote as R2LDA.
Results obtained from both synthetic and real data demonstrate the consistency and effectiveness of the proposed R2LDA approach.
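For orientation, the single-shrinkage regularised LDA baseline that a doubly regularised classifier extends fits in a few lines. The sketch below is that baseline only; R2LDA's second regulariser and its automatic selection of both parameters are the paper's contribution and are not reproduced here, so `gamma` is a hand-picked illustrative value.

```python
import numpy as np

def rlda(X, y, gamma=0.1):
    """Shrinkage-regularised LDA classifier: the pooled covariance
    is blended with a scaled identity so it stays well conditioned
    when the sample count is comparable to the dimension."""
    classes = np.unique(y)
    d = X.shape[1]
    means = {c: X[y == c].mean(axis=0) for c in classes}
    priors = {c: float(np.mean(y == c)) for c in classes}
    Sw = sum((X[y == c] - means[c]).T @ (X[y == c] - means[c])
             for c in classes) / len(X)
    S = (1.0 - gamma) * Sw + gamma * (np.trace(Sw) / d) * np.eye(d)
    P = np.linalg.inv(S)

    def predict(x):
        # Standard LDA discriminant score per class.
        score = {c: x @ P @ means[c] - 0.5 * means[c] @ P @ means[c]
                    + np.log(priors[c]) for c in classes}
        return max(score, key=score.get)
    return predict
```

Calling `rlda(X_train, y_train)` returns a per-sample `predict` function; the shrinkage target trace(Sw)/d * I keeps S invertible even when the training set is smaller than the feature dimension.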
arXiv Detail & Related papers (2020-04-28T07:09:22Z)
- Semiparametric Nonlinear Bipartite Graph Representation Learning with Provable Guarantees [106.91654068632882]
We consider the bipartite graph and formalize its representation learning problem as a statistical estimation problem of parameters in a semiparametric exponential family distribution.
We show that the proposed objective is strongly convex in a neighborhood around the ground truth, so that a gradient descent-based method achieves linear convergence rate.
Our estimator is robust to any model misspecification within the exponential family, which is validated in extensive experiments.
arXiv Detail & Related papers (2020-03-02T16:40:36Z)