Riemannian-based Discriminant Analysis for Feature Extraction and
Classification
- URL: http://arxiv.org/abs/2101.08032v2
- Date: Tue, 26 Jan 2021 07:17:29 GMT
- Title: Riemannian-based Discriminant Analysis for Feature Extraction and
Classification
- Authors: Wanguang Yin, Zhengming Ma, Quanying Liu
- Abstract summary: Discriminant analysis is a widely used approach in machine learning for extracting low-dimensional features from high-dimensional data.
Traditional Euclidean-based algorithms for discriminant analysis easily converge to spurious local minima.
We propose a novel method named Riemannian-based Discriminant Analysis (RDA), which transforms the traditional Euclidean-based methods to the Riemannian manifold space.
- Score: 2.1485350418225244
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Discriminant analysis, a widely used approach in machine learning to
extract low-dimensional features from high-dimensional data, applies the
Fisher discriminant criterion to find an orthogonal discriminant projection
subspace. However, most Euclidean-based algorithms for discriminant analysis
easily converge to spurious local minima and rarely obtain a unique
solution. To address this problem, in this study we propose a novel method
named Riemannian-based Discriminant Analysis (RDA), which transforms the
traditional Euclidean-based methods to the Riemannian manifold space. In RDA,
the second-order geometry of trust-region methods is utilized to learn the
discriminant bases. To validate the efficiency and effectiveness of RDA, we
conduct a variety of experiments on image classification tasks. The numerical
results suggest that RDA can extract statistically significant features and
robustly outperform state-of-the-art algorithms in classification tasks.
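As a rough illustration of the idea in the abstract, the sketch below maximises the trace-ratio Fisher criterion tr(W' Sb W) / tr(W' Sw W) over the Stiefel manifold of orthonormal projections. It uses first-order Riemannian gradient ascent with a QR retraction for brevity, whereas the paper's RDA employs second-order trust-region optimisation; all function names and hyperparameters here are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def scatter_matrices(X, y):
    """Between-class (Sb) and within-class (Sw) scatter matrices."""
    mu = X.mean(axis=0)
    d = X.shape[1]
    Sb = np.zeros((d, d))
    Sw = np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        diff = (mc - mu)[:, None]
        Sb += len(Xc) * diff @ diff.T
        Sw += (Xc - mc).T @ (Xc - mc)
    return Sb, Sw

def riemannian_fisher(X, y, k, steps=200, lr=1e-2, seed=0):
    """Maximise tr(W' Sb W) / tr(W' Sw W) over the Stiefel manifold.

    First-order Riemannian gradient ascent with a QR retraction;
    the paper's RDA uses second-order trust-region optimisation instead.
    """
    Sb, Sw = scatter_matrices(X, y)
    rng = np.random.default_rng(seed)
    # Random orthonormal starting point on the Stiefel manifold.
    W, _ = np.linalg.qr(rng.standard_normal((X.shape[1], k)))
    for _ in range(steps):
        a = np.trace(W.T @ Sb @ W)
        b = np.trace(W.T @ Sw @ W)
        G = (2.0 / b) * (Sb @ W - (a / b) * Sw @ W)  # Euclidean gradient of the ratio
        sym = (W.T @ G + G.T @ W) / 2.0
        xi = G - W @ sym                             # project onto the tangent space
        W, _ = np.linalg.qr(W + lr * xi)             # QR retraction back to the manifold
    return W
```

Because every iterate is retracted back onto the Stiefel manifold, the learned discriminant bases stay exactly orthonormal by construction, which is the property the Euclidean formulations struggle to maintain.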
Related papers
- Embedding Trajectory for Out-of-Distribution Detection in Mathematical Reasoning [50.84938730450622]
We propose a trajectory-based method, TV score, which uses trajectory volatility for OOD detection in mathematical reasoning.
Our method outperforms all traditional algorithms on GLMs under mathematical reasoning scenarios.
Our method can be extended to more applications with high-density features in output spaces, such as multiple-choice questions.
arXiv Detail & Related papers (2024-05-22T22:22:25Z)
- Synergistic eigenanalysis of covariance and Hessian matrices for enhanced binary classification [72.77513633290056]
We present a novel approach that combines the eigenanalysis of a covariance matrix evaluated on a training set with a Hessian matrix evaluated on a deep learning model.
Our method captures intricate patterns and relationships, enhancing classification performance.
arXiv Detail & Related papers (2024-02-14T16:10:42Z)
- Stability and Generalization of the Decentralized Stochastic Gradient Descent Ascent Algorithm [80.94861441583275]
We investigate the generalization bound of the decentralized stochastic gradient descent ascent (D-SGDA) algorithm.
Our results analyze the impact of different factors on the generalization of D-SGDA.
We also balance the optimization error with the generalization error to obtain an optimal trade-off in the convex-concave setting.
arXiv Detail & Related papers (2023-10-31T11:27:01Z)
- GO-LDA: Generalised Optimal Linear Discriminant Analysis [6.644357197885522]
Linear discriminant analysis has been a useful tool in pattern recognition and data analysis research and practice.
We show that the generalised eigenanalysis solution to multiclass LDA neither yields orthogonal discriminant directions nor maximises discrimination of projected data along them.
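The GO-LDA claim above is easy to verify numerically. The following self-contained numpy sketch (the synthetic data and all parameter values are illustrative assumptions, not taken from that paper) solves the generalised eigenproblem Sb v = lambda Sw v for a 3-class problem and checks that the leading discriminant directions are not mutually orthogonal:

```python
import numpy as np

# Synthetic 3-class data with anisotropic within-class noise (all values illustrative).
rng = np.random.default_rng(0)
means = np.array([[0.0, 0, 0, 0], [4, 0, 0, 0], [0, 4, 0, 0]])
scale = np.array([1.0, 3.0, 1.0, 1.0])           # makes Sw far from isotropic
X = np.vstack([rng.standard_normal((40, 4)) * scale + m for m in means])
y = np.repeat([0, 1, 2], 40)

# Between-class (Sb) and within-class (Sw) scatter matrices.
mu = X.mean(axis=0)
Sb = np.zeros((4, 4))
Sw = np.zeros((4, 4))
for c in range(3):
    Xc = X[y == c]
    d = Xc.mean(axis=0) - mu
    Sb += len(Xc) * np.outer(d, d)
    Sw += (Xc - Xc.mean(axis=0)).T @ (Xc - Xc.mean(axis=0))

# Generalised eigenanalysis Sb v = lambda Sw v, solved via inv(Sw) @ Sb.
vals, vecs = np.linalg.eig(np.linalg.inv(Sw) @ Sb)
order = np.argsort(vals.real)[::-1]
W = vecs.real[:, order[:2]]                      # top-2 discriminant directions
W /= np.linalg.norm(W, axis=0)

# The directions are Sw-conjugate but, in general, NOT mutually orthogonal.
cosine = abs(W[:, 0] @ W[:, 1])
```

The generalised eigenvectors are conjugate with respect to Sw rather than orthogonal in the Euclidean sense, so `cosine` comes out clearly nonzero whenever Sw is anisotropic.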
arXiv Detail & Related papers (2023-05-23T23:11:05Z)
- Reusing the Task-specific Classifier as a Discriminator: Discriminator-free Adversarial Domain Adaptation [55.27563366506407]
We introduce a discriminator-free adversarial learning network (DALN) for unsupervised domain adaptation (UDA)
DALN achieves explicit domain alignment and category distinguishment through a unified objective.
DALN compares favorably against the existing state-of-the-art (SOTA) methods on a variety of public datasets.
arXiv Detail & Related papers (2022-04-08T04:40:18Z)
- Distributed Sparse Multicategory Discriminant Analysis [1.7223564681760166]
This paper proposes a convex formulation for sparse multicategory linear discriminant analysis and then extends it to the distributed setting, where data are stored across multiple sites.
Theoretically, we establish statistical properties ensuring that the distributed sparse multicategory linear discriminant analysis performs as well as the centralized version after a few rounds of communication.
arXiv Detail & Related papers (2022-02-22T14:23:33Z)
- Regularized Deep Linear Discriminant Analysis [26.08062442399418]
As a non-linear extension of the classic Linear Discriminant Analysis (LDA), Deep Linear Discriminant Analysis (DLDA) replaces the original Categorical Cross-Entropy (CCE) loss function with an LDA-based objective.
A regularization method on the within-class scatter matrix is proposed to strengthen the discriminative ability of each dimension.
arXiv Detail & Related papers (2021-05-15T03:54:32Z)
- High-Dimensional Quadratic Discriminant Analysis under Spiked Covariance Model [101.74172837046382]
We propose a novel quadratic classification technique whose parameters are chosen such that the Fisher discriminant ratio is maximized.
Numerical simulations show that the proposed classifier not only outperforms the classical R-QDA for both synthetic and real data but also requires lower computational complexity.
arXiv Detail & Related papers (2020-06-25T12:00:26Z)
- A Compressive Classification Framework for High-Dimensional Data [12.284934135116515]
We propose a compressive classification framework for settings where the data dimensionality is significantly higher than the sample size.
The proposed method, referred to as compressive regularized discriminant analysis (CRDA), is based on linear discriminant analysis.
It has the ability to select significant features by using joint-sparsity promoting hard thresholding in the discriminant rule.
arXiv Detail & Related papers (2020-05-09T06:55:00Z)
- Saliency-based Weighted Multi-label Linear Discriminant Analysis [101.12909759844946]
We propose a new variant of Linear Discriminant Analysis (LDA) to solve multi-label classification tasks.
The proposed method is based on a probabilistic model for defining the weights of individual samples.
The Saliency-based weighted Multi-label LDA approach is shown to lead to performance improvements in various multi-label classification problems.
arXiv Detail & Related papers (2020-04-08T19:40:53Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.