A new class of generative classifiers based on staged tree models
- URL: http://arxiv.org/abs/2012.13798v1
- Date: Sat, 26 Dec 2020 19:30:35 GMT
- Title: A new class of generative classifiers based on staged tree models
- Authors: Federico Carli, Manuele Leonelli, Gherardo Varando
- Abstract summary: Generative models for classification use the joint probability distribution of the class variable and the features to construct a decision rule.
Here we introduce a new class of generative classifiers, called staged tree classifiers, which formally account for context-specific independence.
An applied analysis to predict the fate of the passengers of the Titanic highlights the insights that the new class of generative classifiers can give.
- Score: 2.66269503676104
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Generative models for classification use the joint probability distribution
of the class variable and the features to construct a decision rule. Among
generative models, Bayesian networks and naive Bayes classifiers are the most
commonly used and provide a clear graphical representation of the relationship
among all variables. However, they have the disadvantage of highly restricting
the types of relationships that can be represented, since they do not allow for
context-specific independences. Here we introduce a new class of generative
classifiers, called staged tree classifiers, which formally account for
context-specific independence. They are constructed by a partitioning of the
vertices of an event tree from which conditional independence can be formally
read. The naive staged tree classifier is also defined, which extends the
classic naive Bayes classifier whilst retaining the same complexity. An
extensive simulation study shows that the classification accuracy of staged
tree classifiers is competitive with those of state-of-the-art classifiers. An
applied analysis to predict the fate of the passengers of the Titanic
highlights the insights that the new class of generative classifiers can give.
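The abstract describes staged tree classifiers as generative models in which the vertices of an event tree are partitioned into "stages" that share the same conditional distribution, so that context-specific independences can be read from the partition. The following is a minimal illustrative sketch, not the authors' implementation: the variables, the hand-fixed stage partition, and the toy data are all hypothetical, and in the paper the staging itself would be learned from data.

```python
from collections import defaultdict

# Toy staged tree classifier over a binary class c and binary features x1, x2.
# A "stage" groups event-tree vertices (contexts) that share the same
# conditional distribution for the next variable. Merging the contexts
# (c=1, x1=0) and (c=1, x1=1) into one stage encodes the context-specific
# independence "x2 independent of x1 given c=1", while under c=0 the two
# contexts stay in separate stages, so x2 may still depend on x1 there.
STAGES = {
    # context (class, x1) -> stage id governing the distribution of x2
    (0, 0): "s0",
    (0, 1): "s1",   # under c=0, x2 depends on x1 (two distinct stages)
    (1, 0): "s2",
    (1, 1): "s2",   # under c=1, x2 is independent of x1 (one shared stage)
}

def fit(data):
    """Estimate the class prior, P(x1 | c), and per-stage P(x2 | stage)."""
    class_counts = defaultdict(int)
    x1_counts = defaultdict(int)
    stage_counts = defaultdict(int)
    for c, x1, x2 in data:
        class_counts[c] += 1
        x1_counts[(c, x1)] += 1
        stage_counts[(STAGES[(c, x1)], x2)] += 1
    n = len(data)
    prior = {c: class_counts[c] / n for c in class_counts}
    p_x1 = {(c, x1): x1_counts[(c, x1)] / class_counts[c]
            for (c, x1) in x1_counts}
    p_x2 = {}
    for s in set(STAGES.values()):
        tot = stage_counts[(s, 0)] + stage_counts[(s, 1)]
        p_x2[s] = {x2: stage_counts[(s, x2)] / tot for x2 in (0, 1)}
    return prior, p_x1, p_x2

def predict(x1, x2, prior, p_x1, p_x2):
    """Generative rule: argmax_c  P(c) * P(x1 | c) * P(x2 | stage(c, x1))."""
    scores = {c: prior[c]
                 * p_x1.get((c, x1), 0.0)
                 * p_x2[STAGES[(c, x1)]][x2]
              for c in prior}
    return max(scores, key=scores.get)
```

Collapsing s0 and s1 into a single stage as well would recover the classic naive Bayes factorization, which illustrates how the naive staged tree classifier mentioned in the abstract can generalize naive Bayes while keeping the same complexity.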
Related papers
- Context-Specific Refinements of Bayesian Network Classifiers [1.9136291802656262]
We study the relationship between our novel classes of classifiers and Bayesian networks.
We introduce and implement data-driven learning routines for our models.
The study demonstrates that models embedding asymmetric information can enhance classification accuracy.
arXiv Detail & Related papers (2024-05-28T15:50:50Z)
- Generative Multi-modal Models are Good Class-Incremental Learners [51.5648732517187]
We propose a novel generative multi-modal model (GMM) framework for class-incremental learning.
Our approach directly generates labels for images using an adapted generative model.
Under the Few-shot CIL setting, we improve accuracy by at least 14% over all current state-of-the-art methods with significantly less forgetting.
arXiv Detail & Related papers (2024-03-27T09:21:07Z)
- Mitigating Word Bias in Zero-shot Prompt-based Classifiers [55.60306377044225]
We show that matching class priors correlates strongly with the oracle upper bound performance.
We also demonstrate large consistent performance gains for prompt settings over a range of NLP tasks.
arXiv Detail & Related papers (2023-09-10T10:57:41Z)
- Optimal partition of feature using Bayesian classifier [0.0]
In Naive Bayes, features are treated as independent, having no conditional correlation or dependency with one another when predicting a classification.
We propose a novel technique called the Comonotone-Independence Bayes classifier (CIBer), which is able to overcome the challenges posed by the Naive Bayes method.
arXiv Detail & Related papers (2023-04-27T21:19:06Z)
- Anomaly Detection using Ensemble Classification and Evidence Theory [62.997667081978825]
We present a novel approach to anomaly detection using ensemble classification and evidence theory.
A pool selection strategy is presented to build a solid ensemble classifier.
We use uncertainty for the anomaly detection approach.
arXiv Detail & Related papers (2022-12-23T00:50:41Z)
- Structure of Classifier Boundaries: Case Study for a Naive Bayes Classifier [1.1485218363676564]
We show that the boundary is both large and complicated in structure.
We create a new measure of uncertainty, called Neighbor Similarity, that compares the result for a point to the distribution of results for its neighbors.
arXiv Detail & Related papers (2022-12-08T16:23:42Z)
- Distance Based Image Classification: A solution to generative classification's conundrum? [70.43638559782597]
We argue that discriminative boundaries are counter-intuitive as they define semantics by what-they-are-not.
We propose a new generative model in which semantic factors are accommodated by shell theory's hierarchical generative process.
We use the model to develop a classification scheme which suppresses the impact of noise while preserving semantic cues.
arXiv Detail & Related papers (2022-10-04T03:35:13Z)
- Learning and Evaluating Representations for Deep One-class Classification [59.095144932794646]
We present a two-stage framework for deep one-class classification.
We first learn self-supervised representations from one-class data, and then build one-class classifiers on learned representations.
In experiments, we demonstrate state-of-the-art performance on visual domain one-class classification benchmarks.
arXiv Detail & Related papers (2020-11-04T23:33:41Z)
- A Systematic Evaluation: Fine-Grained CNN vs. Traditional CNN Classifiers [54.996358399108566]
We investigate the performance of landmark general-purpose CNN classifiers, which achieved top results on large-scale classification datasets, and compare them against state-of-the-art fine-grained classifiers.
We present an extensive evaluation on six datasets to determine whether the fine-grained classifiers are able to elevate the baseline in their experiments.
arXiv Detail & Related papers (2020-03-24T23:49:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.