A New Forward Discriminant Analysis Framework Based On Pillai's Trace and ULDA
- URL: http://arxiv.org/abs/2409.03136v1
- Date: Thu, 5 Sep 2024 00:12:15 GMT
- Title: A New Forward Discriminant Analysis Framework Based On Pillai's Trace and ULDA
- Authors: Siyu Wang
- Abstract summary: This paper introduces a novel forward discriminant analysis framework that integrates Pillai's trace with Uncorrelated Linear Discriminant Analysis (ULDA) to address these challenges.
Through simulations and real-world datasets, the new framework demonstrates effective control of Type I error rates and improved classification accuracy.
- Score: 6.087464679182875
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Linear discriminant analysis (LDA), a traditional classification tool, suffers from limitations such as sensitivity to noise and computational challenges when dealing with non-invertible within-class scatter matrices. Traditional stepwise LDA frameworks, which iteratively select the most informative features, often exacerbate these issues by relying heavily on Wilks' $\Lambda$, potentially causing premature stopping of the selection process. This paper introduces a novel forward discriminant analysis framework that integrates Pillai's trace with Uncorrelated Linear Discriminant Analysis (ULDA) to address these challenges, and offers a unified and stand-alone classifier. Through simulations and real-world datasets, the new framework demonstrates effective control of Type I error rates and improved classification accuracy, particularly in cases involving perfect group separations. The results highlight the potential of this approach as a robust alternative to the traditional stepwise LDA framework.
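The abstract describes the statistic only at a high level; as a rough illustration of the quantity involved, below is a minimal sketch of a Pillai's-trace MANOVA test of the kind a forward-selection step might use to score candidate feature subsets. The function name, structure, and selection logic are assumptions for illustration, not the paper's implementation.

```python
# Minimal sketch (assumed interface, not the paper's code): Pillai's trace and its
# standard approximate F-test for a one-way MANOVA of features X on class labels y.
import numpy as np
from scipy.stats import f as f_dist

def pillai_trace_test(X, y):
    """Return Pillai's trace V, an approximate F statistic, and its p-value."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)
    classes, counts = np.unique(y, return_counts=True)
    n, p = X.shape
    g = len(classes)

    grand_mean = X.mean(axis=0)
    H = np.zeros((p, p))  # between-class (hypothesis) scatter
    E = np.zeros((p, p))  # within-class (error) scatter
    for cls, n_k in zip(classes, counts):
        Xk = X[y == cls]
        dk = Xk.mean(axis=0) - grand_mean
        H += n_k * np.outer(dk, dk)
        Xc = Xk - Xk.mean(axis=0)
        E += Xc.T @ Xc

    # Pillai's trace: V = tr(H (H + E)^{-1}); pinv guards against a singular H + E,
    # which is the non-invertible scatter situation the abstract mentions.
    V = np.trace(H @ np.linalg.pinv(H + E))

    # Standard F approximation for the Pillai-Bartlett trace.
    s = min(p, g - 1)
    m = (abs(p - (g - 1)) - 1) / 2.0
    nn = (n - g - p - 1) / 2.0
    df1 = s * (2 * m + s + 1)
    df2 = s * (2 * nn + s + 1)
    # Note: as V approaches s the groups are perfectly separated; this simple
    # approximation breaks down there, which is the case the paper's framework targets.
    F = (df2 / df1) * (V / (s - V))
    return V, F, f_dist.sf(F, df1, df2)
```

In a forward scheme one would, presumably, compare the trace with and without each candidate feature and stop adding features once no candidate remains significant at the chosen level; the exact stopping rule here is an assumption, not a description of the paper's procedure.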
Related papers
- Directly Handling Missing Data in Linear Discriminant Analysis for Enhancing Classification Accuracy and Interpretability [1.4840867281815378]
We introduce a novel and robust classification method, termed weighted missing Linear Discriminant Analysis (WLDA).
WLDA extends Linear Discriminant Analysis (LDA) to handle datasets with missing values without the need for imputation.
We conduct an in-depth theoretical analysis to establish the properties of WLDA and thoroughly evaluate its explainability.
arXiv Detail & Related papers (2024-06-30T14:21:32Z) - Tackling Diverse Minorities in Imbalanced Classification [80.78227787608714]
Imbalanced datasets are commonly observed in various real-world applications, presenting significant challenges in training classifiers.
We propose generating synthetic samples iteratively by mixing data samples from both minority and majority classes.
We demonstrate the effectiveness of our proposed framework through extensive experiments conducted on seven publicly available benchmark datasets.
arXiv Detail & Related papers (2023-08-28T18:48:34Z) - Last Layer Marginal Likelihood for Invariance Learning [12.00078928875924]
We introduce a new lower bound to the marginal likelihood, which allows us to perform inference for a larger class of likelihood functions.
We work towards bringing this approach to neural networks by using an architecture with a Gaussian process in the last layer.
arXiv Detail & Related papers (2021-06-14T15:40:51Z) - Semi-Supervised Learning with Meta-Gradient [123.26748223837802]
We propose a simple yet effective meta-learning algorithm in semi-supervised learning.
We find that the proposed algorithm performs favorably against state-of-the-art methods.
arXiv Detail & Related papers (2020-07-08T08:48:56Z) - Differentiable Causal Discovery from Interventional Data [141.41931444927184]
We propose a theoretically-grounded method based on neural networks that can leverage interventional data.
We show that our approach compares favorably to the state of the art in a variety of settings.
arXiv Detail & Related papers (2020-07-03T15:19:17Z) - Sparse Methods for Automatic Relevance Determination [0.0]
We first review automatic relevance determination (ARD) and analytically demonstrate the need for additional regularization or thresholding to achieve sparse models.
We then discuss two classes of methods, regularization based and thresholding based, which build on ARD to learn parsimonious solutions to linear problems.
arXiv Detail & Related papers (2020-05-18T14:08:49Z) - Saliency-based Weighted Multi-label Linear Discriminant Analysis [101.12909759844946]
We propose a new variant of Linear Discriminant Analysis (LDA) to solve multi-label classification tasks.
The proposed method is based on a probabilistic model for defining the weights of individual samples.
The Saliency-based weighted Multi-label LDA approach is shown to lead to performance improvements in various multi-label classification problems.
arXiv Detail & Related papers (2020-04-08T19:40:53Z) - Progressive Identification of True Labels for Partial-Label Learning [112.94467491335611]
Partial-label learning (PLL) is a typical weakly supervised learning problem, where each training instance is equipped with a set of candidate labels among which only one is the true label.
Most existing methods are elaborately designed as constrained optimizations that must be solved in specific manners, making their computational complexity a bottleneck for scaling up to big data.
This paper proposes a novel framework of classifier with flexibility on the model and optimization algorithm.
arXiv Detail & Related papers (2020-02-19T08:35:15Z) - Multiscale Non-stationary Stochastic Bandits [83.48992319018147]
We propose a novel multiscale changepoint detection method for the non-stationary linear bandit problems, called Multiscale-LinUCB.
Experimental results show that our proposed Multiscale-LinUCB algorithm outperforms other state-of-the-art algorithms in non-stationary contextual environments.
arXiv Detail & Related papers (2020-02-13T00:24:17Z) - An Efficient Framework for Automated Screening of Clinically Significant Macular Edema [0.41998444721319206]
The present study proposes a new approach to automated screening of Clinically Significant Macular Edema (CSME).
The proposed approach combines a pre-trained deep neural network with meta-heuristic feature selection.
A feature space over-sampling technique is used to overcome the effects of skewed datasets.
arXiv Detail & Related papers (2020-01-20T07:34:13Z)