Saliency-based Weighted Multi-label Linear Discriminant Analysis
- URL: http://arxiv.org/abs/2004.04221v1
- Date: Wed, 8 Apr 2020 19:40:53 GMT
- Title: Saliency-based Weighted Multi-label Linear Discriminant Analysis
- Authors: Lei Xu, Jenni Raitoharju, Alexandros Iosifidis, Moncef Gabbouj
- Abstract summary: We propose a new variant of Linear Discriminant Analysis (LDA) to solve multi-label classification tasks.
The proposed method is based on a probabilistic model for defining the weights of individual samples.
The Saliency-based weighted Multi-label LDA approach is shown to lead to performance improvements in various multi-label classification problems.
- Score: 101.12909759844946
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we propose a new variant of Linear Discriminant Analysis (LDA)
to solve multi-label classification tasks. The proposed method is based on a
probabilistic model for defining the weights of individual samples in a
weighted multi-label LDA approach. Linear Discriminant Analysis is a classical
statistical machine learning method, which aims to find a linear data
transformation increasing class discrimination in an optimal discriminant
subspace. Traditional LDA sets assumptions related to Gaussian class
distributions and single-label data annotations. To employ the LDA technique in
multi-label classification problems, we exploit intuitions coming from a
probabilistic interpretation of class saliency to redefine the between-class
and within-class scatter matrices. The saliency-based weights, derived from
various kinds of affinity-encoding prior information, reveal the probability
that each instance is salient for each of its classes in the multi-label
problem at hand. The proposed Saliency-based weighted Multi-label
LDA approach is shown to lead to performance improvements in various
multi-label classification problems.
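The weighted scatter construction described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the per-sample, per-class weight matrix `W` is taken as a given input, whereas the paper derives it from a probabilistic saliency model over affinity-encoded prior information; the function name and the regularization term are assumptions for the sketch.

```python
import numpy as np

def weighted_multilabel_lda(X, Y, W, n_components=2, reg=1e-6):
    """Sketch of weighted multi-label LDA.

    X : (n, d) data matrix
    Y : (n, C) binary multi-label indicator matrix
    W : (n, C) per-sample, per-class saliency weights (hypothetical input;
        the paper obtains these from a probabilistic affinity model)
    """
    n, d = X.shape
    C = Y.shape[1]
    WY = W * Y                        # zero out weights for absent labels
    total_mean = X.mean(axis=0)
    Sb = np.zeros((d, d))
    Sw = np.zeros((d, d))
    for c in range(C):
        wc = WY[:, c]
        if wc.sum() == 0:
            continue
        mc = (wc[:, None] * X).sum(axis=0) / wc.sum()  # weighted class mean
        diff = (mc - total_mean)[:, None]
        Sb += wc.sum() * diff @ diff.T                 # between-class scatter
        Xc = X - mc
        Sw += (wc[:, None] * Xc).T @ Xc                # within-class scatter
    # solve the generalised eigenproblem Sb v = lambda (Sw + reg I) v
    evals, evecs = np.linalg.eig(np.linalg.solve(Sw + reg * np.eye(d), Sb))
    order = np.argsort(-evals.real)
    return evecs.real[:, order[:n_components]]

# toy usage with random multi-label data
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
Y = (rng.random((100, 3)) < 0.4).astype(float)
W = rng.random((100, 3))
P = weighted_multilabel_lda(X, Y, W)
print(P.shape)  # (5, 2)
```

The key departure from single-label LDA is that one sample contributes to the scatter of every class it is labeled with, scaled by its saliency weight for that class.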
Related papers
- A New Forward Discriminant Analysis Framework Based On Pillai's Trace and ULDA [6.087464679182875]
This paper introduces a novel forward discriminant analysis framework that integrates Pillai's trace with Uncorrelated Linear Discriminant Analysis (ULDA) to address these challenges.
Through simulations and real-world datasets, the new framework demonstrates effective control of Type I error rates and improved classification accuracy.
arXiv Detail & Related papers (2024-09-05T00:12:15Z)
- Synergistic eigenanalysis of covariance and Hessian matrices for enhanced binary classification [72.77513633290056]
We present a novel approach that combines the eigenanalysis of a covariance matrix evaluated on a training set with a Hessian matrix evaluated on a deep learning model.
Our method captures intricate patterns and relationships, enhancing classification performance.
arXiv Detail & Related papers (2024-02-14T16:10:42Z)
- Minimally Informed Linear Discriminant Analysis: training an LDA model with unlabelled data [51.673443581397954]
We show that it is possible to compute the exact projection vector from LDA models based on unlabelled data.
We show that the MILDA projection vector can be computed in a closed form with a computational cost comparable to LDA.
arXiv Detail & Related papers (2023-10-17T09:50:31Z)
- GO-LDA: Generalised Optimal Linear Discriminant Analysis [6.644357197885522]
Linear discriminant analysis has been a useful tool in pattern recognition and data analysis research and practice.
We show that the generalised eigenanalysis solution to multiclass LDA neither yields orthogonal discriminant directions nor maximises the discrimination of projected data along them.
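The non-orthogonality observed by GO-LDA can be verified numerically: the generalized eigenvectors of the pair (Sb, Sw) are Sw-orthogonal, not mutually orthogonal in the ordinary sense. A small demo on made-up 3-class data, using the standard scatter definitions (not the GO-LDA algorithm itself):

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(1)
# toy 3-class data in 4 dimensions (synthetic, for illustration only)
X = np.vstack([rng.normal(loc=m, size=(30, 4))
               for m in ([0, 0, 0, 0], [2, 1, 0, 0], [0, 2, 1, 1])])
y = np.repeat([0, 1, 2], 30)

mean = X.mean(axis=0)
Sb = np.zeros((4, 4))
Sw = np.zeros((4, 4))
for c in range(3):
    Xc = X[y == c]
    mc = Xc.mean(axis=0)
    Sb += len(Xc) * np.outer(mc - mean, mc - mean)  # between-class scatter
    Sw += (Xc - mc).T @ (Xc - mc)                   # within-class scatter

# generalised symmetric eigenproblem Sb v = lambda Sw v
evals, V = eigh(Sb, Sw)
v1, v2 = V[:, -1], V[:, -2]        # top two discriminant directions
print("v1 . v2    =", v1 @ v2)      # generally nonzero: not orthogonal
print("v1' Sw v2  =", v1 @ Sw @ v2) # ~0: the directions are Sw-orthogonal
```

`scipy.linalg.eigh(Sb, Sw)` normalizes eigenvectors so that `V.T @ Sw @ V = I`, which is exactly why ordinary orthogonality fails in general.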
arXiv Detail & Related papers (2023-05-23T23:11:05Z)
- Varying Coefficient Linear Discriminant Analysis for Dynamic Data [5.228711636020666]
This paper investigates the varying coefficient LDA model for dynamic data.
By deriving a new discriminant direction function parallel with Bayes' direction, we propose a least-square estimation procedure.
In the high-dimensional regime, the corresponding data-driven discriminant rule is more computationally efficient than the existing dynamic linear programming rule.
arXiv Detail & Related papers (2022-03-12T07:32:19Z)
- Distributed Sparse Multicategory Discriminant Analysis [1.7223564681760166]
This paper proposes a convex formulation for sparse multicategory linear discriminant analysis and then extends it to the distributed setting, where data are stored across multiple sites.
Theoretically, we establish statistical properties ensuring that distributed sparse multicategory linear discriminant analysis performs as well as the centralized version after a few rounds of communication.
arXiv Detail & Related papers (2022-02-22T14:23:33Z)
- Theoretical Insights Into Multiclass Classification: A High-dimensional Asymptotic View [82.80085730891126]
We provide the first modern, precise analysis of linear multiclass classification.
Our analysis reveals that the classification accuracy is highly distribution-dependent.
The insights gained may pave the way for a precise understanding of other classification algorithms.
arXiv Detail & Related papers (2020-11-16T05:17:29Z)
- High-Dimensional Quadratic Discriminant Analysis under Spiked Covariance Model [101.74172837046382]
We propose a novel quadratic classification technique whose parameters are chosen to maximize the Fisher discriminant ratio.
Numerical simulations show that the proposed classifier not only outperforms the classical R-QDA for both synthetic and real data but also requires lower computational complexity.
arXiv Detail & Related papers (2020-06-25T12:00:26Z)
- Sparse Methods for Automatic Relevance Determination [0.0]
We first review automatic relevance determination (ARD) and analytically demonstrate the need for additional regularization or thresholding to achieve sparse models.
We then discuss two classes of methods, regularization based and thresholding based, which build on ARD to learn parsimonious solutions to linear problems.
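The thresholding route can be illustrated with a small sketch. This uses scikit-learn's `ARDRegression` as a stand-in ARD estimator; the threshold value and the synthetic data are assumptions for illustration, not the paper's method.

```python
import numpy as np
from sklearn.linear_model import ARDRegression

rng = np.random.default_rng(0)
n, d = 200, 10
X = rng.normal(size=(n, d))
true_w = np.zeros(d)
true_w[[0, 3]] = [2.0, -1.5]            # only two relevant features
y = X @ true_w + 0.1 * rng.normal(size=n)

ard = ARDRegression().fit(X, y)
# ARD shrinks irrelevant coefficients strongly but rarely to exactly zero;
# a post-hoc threshold (hypothetical value 1e-2) yields a truly sparse model
w_sparse = np.where(np.abs(ard.coef_) > 1e-2, ard.coef_, 0.0)
print(np.nonzero(w_sparse)[0])          # indices of surviving features
```

This is the behavior the review highlights: ARD alone gives near-sparse solutions, and an explicit thresholding (or regularization) step is needed to obtain exact zeros.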
arXiv Detail & Related papers (2020-05-18T14:08:49Z)
- Asymptotic Analysis of an Ensemble of Randomly Projected Linear Discriminants [94.46276668068327]
In [1], an ensemble of randomly projected linear discriminants is used to classify datasets.
We develop a consistent estimator of the misclassification probability as an alternative to the computationally-costly cross-validation estimator.
We also demonstrate the use of our estimator for tuning the projection dimension on both real and synthetic data.
arXiv Detail & Related papers (2020-04-17T12:47:04Z)
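The randomly projected ensemble idea above can be sketched as follows. This is a generic majority-vote sketch using Gaussian random projections and scikit-learn's LDA; the function name, projection scheme, and ensemble size are assumptions, not the exact construction analyzed in the paper.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def rp_lda_ensemble_predict(Xtr, ytr, Xte, n_members=25, proj_dim=3, seed=0):
    """Majority vote over LDA classifiers fit in random low-dim projections."""
    rng = np.random.default_rng(seed)
    votes = []
    for _ in range(n_members):
        # Gaussian random projection to proj_dim dimensions
        R = rng.normal(size=(Xtr.shape[1], proj_dim)) / np.sqrt(proj_dim)
        clf = LinearDiscriminantAnalysis().fit(Xtr @ R, ytr)
        votes.append(clf.predict(Xte @ R))
    votes = np.stack(votes)                      # (n_members, n_test)
    # majority vote across ensemble members for each test point
    return np.array([np.bincount(col).argmax() for col in votes.T])

# toy two-class usage
rng = np.random.default_rng(1)
X0 = rng.normal(loc=0.0, size=(100, 8))
X1 = rng.normal(loc=1.5, size=(100, 8))
Xtr = np.vstack([X0[:80], X1[:80]])
ytr = np.array([0] * 80 + [1] * 80)
Xte = np.vstack([X0[80:], X1[80:]])
yte = np.array([0] * 20 + [1] * 20)
pred = rp_lda_ensemble_predict(Xtr, ytr, Xte)
print((pred == yte).mean())
```

Tuning `proj_dim` on held-out data is exactly the use case the paper targets with its consistent misclassification-probability estimator, which avoids running cross-validation over the whole ensemble.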
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.