FEMDA: a unified framework for discriminant analysis
- URL: http://arxiv.org/abs/2311.07518v1
- Date: Mon, 13 Nov 2023 17:59:37 GMT
- Title: FEMDA: a unified framework for discriminant analysis
- Authors: Pierre Houdouin, Matthieu Jonckheere, Frederic Pascal
- Abstract summary: We present a novel approach to deal with non-Gaussian datasets.
The model considered is an arbitrary Elliptically Symmetrical (ES) distribution per cluster with its own arbitrary scale parameter.
By deriving a new decision rule, we demonstrate that maximum-likelihood parameter estimation and classification are simple, efficient, and robust compared to state-of-the-art methods.
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Although linear and quadratic discriminant analysis are widely recognized
classical methods, they can encounter significant challenges when dealing with
non-Gaussian distributions or contaminated datasets. This is primarily due to
their reliance on the Gaussian assumption, which lacks robustness. We first
explain and review the classical methods to address this limitation and then
present a novel approach that overcomes these issues. In this new approach, the
model considered is an arbitrary Elliptically Symmetrical (ES) distribution per
cluster with its own arbitrary scale parameter. This flexible model allows for
potentially diverse and independent samples that may not follow identical
distributions. By deriving a new decision rule, we demonstrate that
maximum-likelihood parameter estimation and classification are simple,
efficient, and robust compared to state-of-the-art methods.
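To make the abstract's decision rule concrete, here is a minimal sketch of a scale-invariant classifier in this spirit. It is an illustration, not the paper's exact method: it uses plug-in Gaussian-style estimators (rather than the paper's maximum-likelihood estimators) and an assumed Mahalanobis-type criterion in which each cluster's scatter matrix is normalized to unit determinant so that per-sample scale factors cancel.

```python
import numpy as np

def fit_clusters(X, y):
    """Plug-in estimates per class: mean and a scatter matrix normalized
    to unit determinant, so arbitrary per-sample scale factors cancel in
    the decision rule. (Illustrative estimators, not the paper's.)"""
    params = {}
    for k in np.unique(y):
        Xk = X[y == k]
        mu = Xk.mean(axis=0)
        S = np.cov(Xk, rowvar=False)
        S = S / np.linalg.det(S) ** (1.0 / X.shape[1])  # det(S) == 1
        params[k] = (mu, np.linalg.inv(S))
    return params

def classify(x, params):
    """Assign x to the class minimizing the log Mahalanobis distance,
    a scale-invariant analogue of the QDA discriminant (assumed form)."""
    m = x.shape[0]
    def score(mu, Sinv):
        d2 = (x - mu) @ Sinv @ (x - mu)
        return (m / 2.0) * np.log(d2)
    return min(params, key=lambda k: score(*params[k]))
```

Because every class scatter has unit determinant, the usual log-determinant term of the QDA discriminant vanishes and only the distance term remains.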
Related papers
- Cycles of Thought: Measuring LLM Confidence through Stable Explanations [53.15438489398938]
Large language models (LLMs) can reach and even surpass human-level accuracy on a variety of benchmarks, but their overconfidence in incorrect responses is still a well-documented failure mode.
We propose a framework for measuring an LLM's uncertainty with respect to the distribution of generated explanations for an answer.
arXiv Detail & Related papers (2024-06-05T16:35:30Z) - Anomaly Detection Under Uncertainty Using Distributionally Robust
Optimization Approach [0.9217021281095907]
Anomaly detection is defined as the problem of finding data points that do not follow the patterns of the majority.
The one-class Support Vector Machines (SVM) method aims to find a decision boundary to distinguish between normal data points and anomalies.
A distributionally robust chance-constrained model is proposed in which the probability of misclassification is low.
arXiv Detail & Related papers (2023-12-03T06:13:22Z) - Aggregation Weighting of Federated Learning via Generalization Bound
Estimation [65.8630966842025]
Federated Learning (FL) typically aggregates client model parameters using a weighting approach determined by sample proportions.
We replace the aforementioned weighting method with a new strategy that considers the generalization bounds of each local model.
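The baseline weighting that this paper replaces can be sketched in a few lines: FedAvg-style aggregation averages client parameter vectors with weights proportional to client sample counts. The bound-based weights proposed in the paper are not shown here; the parameter values below are hypothetical placeholders.

```python
import numpy as np

def aggregate(client_params, weights):
    """Weighted average of client parameter vectors; weights are
    normalized to sum to 1 before averaging."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    return sum(w * p for w, p in zip(weights, client_params))

# Baseline: weights proportional to client sample counts (FedAvg-style).
# The paper's strategy would substitute generalization-bound-based weights.
params = [np.array([1.0, 2.0]), np.array([3.0, 4.0])]
n_samples = [100, 300]
global_params = aggregate(params, n_samples)  # weights 0.25 and 0.75
```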
arXiv Detail & Related papers (2023-11-10T08:50:28Z) - FEMDA: Une méthode de classification robuste et flexible [0.8594140167290096]
This paper studies the robustness of a new discriminant analysis technique to scale changes in the data.
The new decision rule derived is simple, fast, and robust to scale changes in the data compared to other state-of-the-art methods.
arXiv Detail & Related papers (2023-07-04T23:15:31Z) - Robust classification with flexible discriminant analysis in
heterogeneous data [0.7646713951724009]
This paper presents a new robust discriminant analysis where each data point is drawn by its own arbitrary scale parameter.
It is shown that maximum-likelihood parameter estimation and classification are very simple, fast and robust compared to state-of-the-art methods.
arXiv Detail & Related papers (2022-01-09T09:22:56Z) - Scalable Marginal Likelihood Estimation for Model Selection in Deep
Learning [78.83598532168256]
Marginal-likelihood based model-selection is rarely used in deep learning due to estimation difficulties.
Our work shows that marginal likelihoods can improve generalization and be useful when validation data is unavailable.
arXiv Detail & Related papers (2021-04-11T09:50:24Z) - Identification of Probability weighted ARX models with arbitrary domains [75.91002178647165]
PieceWise Affine models guarantee universal approximation, local linearity, and equivalence to other classes of hybrid systems.
In this work, we focus on the identification of PieceWise Auto Regressive with eXogenous input models with arbitrary regions (NPWARX).
The architecture is conceived following the Mixture of Expert concept, developed within the machine learning field.
arXiv Detail & Related papers (2020-09-29T12:50:33Z) - Good Classifiers are Abundant in the Interpolating Regime [64.72044662855612]
We develop a methodology to compute precisely the full distribution of test errors among interpolating classifiers.
We find that test errors tend to concentrate around a small typical value $\varepsilon^*$, which deviates substantially from the test error of the worst-case interpolating model.
Our results show that the usual style of analysis in statistical learning theory may not be fine-grained enough to capture the good generalization performance observed in practice.
arXiv Detail & Related papers (2020-06-22T21:12:31Z) - Sparse Methods for Automatic Relevance Determination [0.0]
We first review automatic relevance determination (ARD) and analytically demonstrate the need for additional regularization or thresholding to achieve sparse models.
We then discuss two classes of methods, regularization-based and thresholding-based, which build on ARD to learn parsimonious solutions to linear problems.
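The thresholding idea can be sketched briefly: solve a regularized linear problem, then zero out small coefficients to obtain a parsimonious solution. This is a generic illustration of thresholding on a ridge estimate, not the paper's specific ARD-based procedure; the data and threshold below are hypothetical.

```python
import numpy as np

def ridge(X, y, lam=1e-2):
    """Regularized least squares: solve (X^T X + lam I) w = X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def hard_threshold(w, tau):
    """Thresholding step: zero out coefficients with magnitude below tau,
    yielding a sparse (parsimonious) solution."""
    w = w.copy()
    w[np.abs(w) < tau] = 0.0
    return w

# Hypothetical usage: recover a sparse weight vector from noisy data.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
w_true = np.array([1.0, 0.0, 0.0, 2.0, 0.0])
y = X @ w_true + 0.01 * rng.normal(size=100)
w_sparse = hard_threshold(ridge(X, y), tau=0.1)
```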
arXiv Detail & Related papers (2020-05-18T14:08:49Z) - Saliency-based Weighted Multi-label Linear Discriminant Analysis [101.12909759844946]
We propose a new variant of Linear Discriminant Analysis (LDA) to solve multi-label classification tasks.
The proposed method is based on a probabilistic model for defining the weights of individual samples.
The Saliency-based weighted Multi-label LDA approach is shown to lead to performance improvements in various multi-label classification problems.
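Per-sample weighting in LDA can be sketched as follows. This is a generic two-class weighted LDA direction (weighted class means and weighted within-class scatter), not the paper's saliency-based probabilistic weighting model or its multi-label extension; the weights here are simply supplied by the caller.

```python
import numpy as np

def weighted_lda_direction(X, y, w):
    """Two-class LDA projection direction with per-sample weights:
    compute weighted class means and weighted within-class scatter S_w,
    then return S_w^{-1} (mu_1 - mu_0)."""
    d = X.shape[1]
    mus, Sw = {}, np.zeros((d, d))
    for k in (0, 1):
        Xk, wk = X[y == k], w[y == k]
        mu = wk @ Xk / wk.sum()
        mus[k] = mu
        C = Xk - mu
        Sw += (C * wk[:, None]).T @ C
    return np.linalg.solve(Sw, mus[1] - mus[0])
```

With uniform weights this reduces to the classical two-class LDA direction; non-uniform weights let influential samples dominate the means and scatter.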
arXiv Detail & Related papers (2020-04-08T19:40:53Z) - On Contrastive Learning for Likelihood-free Inference [20.49671736540948]
Likelihood-free methods perform parameter inference in simulator models where evaluating the likelihood is intractable.
One class of methods for this likelihood-free problem uses a classifier to distinguish between pairs of parameter-observation samples.
Another popular class of methods fits a conditional distribution to the parameter posterior directly, and a particular recent variant allows for the use of flexible neural density estimators.
arXiv Detail & Related papers (2020-02-10T13:14:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.