Self-Weighted Robust LDA for Multiclass Classification with Edge Classes
- URL: http://arxiv.org/abs/2009.12362v1
- Date: Thu, 24 Sep 2020 12:32:55 GMT
- Title: Self-Weighted Robust LDA for Multiclass Classification with Edge Classes
- Authors: Caixia Yan, Xiaojun Chang, Minnan Luo, Qinghua Zheng, Xiaoqin Zhang,
Zhihui Li and Feiping Nie
- Abstract summary: A novel self-weighted robust LDA with an l21-norm based pairwise between-class distance criterion, called SWRLDA, is proposed for multi-class classification.
The proposed SWRLDA is easy to implement, and converges fast in practice.
- Score: 111.5515086563592
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Linear discriminant analysis (LDA) is a popular technique for learning the most
discriminative features for multi-class classification. The vast majority of existing LDA
algorithms are prone to being dominated by a class that deviates strongly from the others,
i.e., an edge class, which occurs frequently in multi-class classification. First, the
existence of edge classes often biases the total mean used in the calculation of the
between-class scatter matrix. Second, the l2-norm based between-class distance criterion
magnifies the extremely large distances associated with edge classes. To address these
issues, a novel self-weighted robust LDA with an l21-norm based pairwise between-class
distance criterion, called SWRLDA, is proposed for multi-class classification, especially
in the presence of edge classes. SWRLDA automatically avoids computing the optimal mean
and simultaneously learns adaptive weights for each class pair without requiring any
additional parameter. An efficient re-weighted algorithm is employed to derive the global
optimum of the challenging l21-norm maximization problem. The proposed SWRLDA is easy to
implement and converges quickly in practice. Extensive experiments demonstrate that SWRLDA
performs favorably against competing methods on both synthetic and real-world datasets
while offering superior computational efficiency.
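To make the re-weighting idea concrete, below is a minimal, hypothetical sketch (NumPy/SciPy) of a re-weighted pairwise LDA: each class pair receives a weight inversely proportional to its current projected between-class distance, so an edge class with extremely large pairwise distances cannot dominate the projection. The use of the within-class scatter as a whitening matrix, the weight update, and all names are illustrative assumptions; this is not the authors' exact SWRLDA formulation.

```python
import numpy as np
from itertools import combinations
from scipy.linalg import eigh

def reweighted_pairwise_lda(X, y, n_components, n_iter=20, eps=1e-8):
    """Illustrative re-weighted pairwise LDA (not the exact SWRLDA algorithm).

    Alternates between (a) building a weighted pairwise between-class scatter,
    (b) solving a generalized eigenproblem against the within-class scatter,
    and (c) re-weighting each class pair by the inverse of its projected
    distance, the usual surrogate for an l21-style (non-squared) criterion.
    """
    classes = np.unique(y)
    d = X.shape[1]
    means = {c: X[y == c].mean(axis=0) for c in classes}

    # Regularized within-class scatter (an assumed whitening choice).
    Sw = eps * np.eye(d)
    for c in classes:
        Xc = X[y == c] - means[c]
        Sw += Xc.T @ Xc

    pairs = list(combinations(classes, 2))
    weights = {p: 1.0 for p in pairs}  # adaptive weights, one per class pair

    for _ in range(n_iter):
        # (a) Weighted pairwise between-class scatter: no total mean involved.
        Sb = np.zeros((d, d))
        for k, l in pairs:
            diff = (means[k] - means[l]).reshape(-1, 1)
            Sb += weights[(k, l)] * (diff @ diff.T)

        # (b) Projection = top generalized eigenvectors of (Sb, Sw).
        evals, evecs = eigh(Sb, Sw)
        W = evecs[:, np.argsort(evals)[::-1][:n_components]]

        # (c) Re-weight: pairs that are already far apart in the projected
        #     space get small weights, so edge classes are downweighted.
        for k, l in pairs:
            dist = np.linalg.norm(W.T @ (means[k] - means[l])) + eps
            weights[(k, l)] = 1.0 / (2.0 * dist)

    return W
```

A toy usage on synthetic data with one far-away "edge" class might look like:

```python
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc, 1.0, size=(50, 5))
               for loc in (0.0, 2.0, 30.0)])  # the third class acts as an edge class
y = np.repeat([0, 1, 2], 50)
W = reweighted_pairwise_lda(X, y, n_components=2)
```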
Related papers
- Synergistic eigenanalysis of covariance and Hessian matrices for enhanced binary classification [72.77513633290056]
We present a novel approach that combines the eigenanalysis of a covariance matrix evaluated on a training set with that of a Hessian matrix evaluated on a deep learning model.
Our method captures intricate patterns and relationships, enhancing classification performance.
arXiv Detail & Related papers (2024-02-14T16:10:42Z)
- Regularized Linear Discriminant Analysis Using a Nonlinear Covariance Matrix Estimator [11.887333567383239]
Linear discriminant analysis (LDA) is a widely used technique for data classification.
LDA becomes inefficient when the data covariance matrix is ill-conditioned.
Regularized LDA methods have been proposed to cope with such a situation; a generic shrinkage-style illustration is sketched after this list.
arXiv Detail & Related papers (2024-01-31T11:37:14Z)
- An Optimal Transport Approach for Computing Adversarial Training Lower Bounds in Multiclass Classification [3.447848701446988]
A popular paradigm to enforce robustness is adversarial training (AT); however, it introduces many computational and theoretical difficulties.
Recent works have developed a connection between AT in the multiclass classification setting and multimarginal optimal transport (MOT), unlocking a new set of tools to study this problem.
We propose computationally tractable numerical algorithms for computing universal lower bounds on the optimal adversarial risk.
arXiv Detail & Related papers (2024-01-17T13:03:47Z)
- Characterizing the Optimal 0-1 Loss for Multi-class Classification with a Test-time Attacker [57.49330031751386]
We find achievable information-theoretic lower bounds on loss in the presence of a test-time attacker for multi-class classifiers on any discrete dataset.
We provide a general framework for finding the optimal 0-1 loss that revolves around the construction of a conflict hypergraph from the data and adversarial constraints.
arXiv Detail & Related papers (2023-02-21T15:17:13Z)
- Divide-and-Conquer Hard-thresholding Rules in High-dimensional Imbalanced Classification [1.0312968200748118]
We study the impact of imbalanced class sizes on linear discriminant analysis (LDA) in high dimensions.
We show that, due to data scarcity in one class, referred to as the minority class, LDA ignores the minority class, yielding a maximum misclassification rate.
We propose a new construction of a hard-thresholding rule based on a divide-and-conquer technique that reduces the large difference between the misclassification rates.
arXiv Detail & Related papers (2021-11-05T07:44:28Z)
- Theoretical Insights Into Multiclass Classification: A High-dimensional Asymptotic View [82.80085730891126]
We provide the first modern, precise analysis of linear multiclass classification.
Our analysis reveals that the classification accuracy is highly distribution-dependent.
The insights gained may pave the way for a precise understanding of other classification algorithms.
arXiv Detail & Related papers (2020-11-16T05:17:29Z)
- High-Dimensional Quadratic Discriminant Analysis under Spiked Covariance Model [101.74172837046382]
We propose a novel quadratic classification technique, the parameters of which are chosen such that the Fisher discriminant ratio is maximized.
Numerical simulations show that the proposed classifier not only outperforms the classical R-QDA for both synthetic and real data but also requires lower computational complexity.
arXiv Detail & Related papers (2020-06-25T12:00:26Z)
- Saliency-based Weighted Multi-label Linear Discriminant Analysis [101.12909759844946]
We propose a new variant of Linear Discriminant Analysis (LDA) to solve multi-label classification tasks.
The proposed method is based on a probabilistic model for defining the weights of individual samples.
The Saliency-based weighted Multi-label LDA approach is shown to lead to performance improvements in various multi-label classification problems.
arXiv Detail & Related papers (2020-04-08T19:40:53Z)
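As referenced in the "Regularized Linear Discriminant Analysis Using a Nonlinear Covariance Matrix Estimator" entry above, the sketch below illustrates why regularizing an ill-conditioned covariance matrix helps in LDA. It uses plain linear shrinkage toward a scaled identity, not the nonlinear estimator proposed in that paper; the shrinkage target, the parameter alpha, and the function name are assumptions made for illustration only.

```python
import numpy as np

def shrinkage_lda_direction(X0, X1, alpha=0.1):
    """Two-class LDA direction with linear shrinkage of the pooled covariance.

    Shrinking toward (tr(Sigma)/d) * I keeps the pooled covariance invertible
    even when it is ill-conditioned, e.g. when the number of features
    approaches or exceeds the number of samples.
    """
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    n0, n1 = X0.shape[0], X1.shape[0]
    pooled = (np.cov(X0, rowvar=False) * (n0 - 1) +
              np.cov(X1, rowvar=False) * (n1 - 1)) / (n0 + n1 - 2)

    d = pooled.shape[0]
    target = (np.trace(pooled) / d) * np.eye(d)   # scaled identity target
    reg = (1.0 - alpha) * pooled + alpha * target

    # Fisher discriminant direction w = Sigma^{-1} (mu1 - mu0).
    return np.linalg.solve(reg, mu1 - mu0)
```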