Multi-Classification using One-versus-One Deep Learning Strategy with
Joint Probability Estimates
- URL: http://arxiv.org/abs/2306.09668v1
- Date: Fri, 16 Jun 2023 07:54:15 GMT
- Title: Multi-Classification using One-versus-One Deep Learning Strategy with
Joint Probability Estimates
- Authors: Anthony Hei-Long Chan, Raymond HonFu Chan, Lingjia Dai
- Abstract summary: Numerical experiments in different applications show that the proposed model achieves generally higher classification accuracy than other state-of-the-art models.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The One-versus-One (OvO) strategy is an approach to multi-class
classification that trains a binary classifier for each pair of classes. While
the OvO strategy benefits from balanced training data, its classification
accuracy is usually hindered by the voting mechanism used to combine the
binary classifiers. In this paper, a novel OvO multi-classification model
incorporating a joint probability measure is proposed under the deep learning
framework. In the proposed model, a two-stage algorithm is developed to
estimate the class probability from the pairwise binary classifiers. Given the
binary classifiers, the pairwise probability estimate is calibrated by a
distance measure on the separating feature hyperplane. From that, the class
probability of the subject is estimated by solving a joint probability-based
distance minimization problem. Numerical experiments in different applications
show that the proposed model achieves generally higher classification accuracy
than other state-of-the-art models.
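The second stage admits a concrete reading as a pairwise-coupling problem. Below is a minimal sketch that recovers class probabilities from pairwise estimates r[i, j] ≈ P(class i | class i or j) by an equality-constrained least-squares fit, following the classic formulation of Wu, Lin and Weng (2004); the paper's hyperplane-distance calibration and its exact minimization objective are not reproduced here, so treat this as an assumption-laden stand-in rather than the authors' algorithm.
```python
import numpy as np

def couple_pairwise(r):
    """Joint class-probability estimate from pairwise OvO outputs.

    r[i, j] ~ P(class i | class i or j), with r[i, j] + r[j, i] = 1.
    Solves  min_p  sum_{i != j} (r[j, i] * p_i - r[i, j] * p_j)^2
    subject to sum(p) = 1 (pairwise coupling a la Wu, Lin & Weng 2004;
    an assumption standing in for the paper's own objective).
    """
    k = r.shape[0]
    Q = np.zeros((k, k))
    for i in range(k):
        Q[i, i] = sum(r[j, i] ** 2 for j in range(k) if j != i)
        for j in range(k):
            if j != i:
                Q[i, j] = -r[j, i] * r[i, j]
    # KKT system of the equality-constrained quadratic program.
    A = np.zeros((k + 1, k + 1))
    A[:k, :k] = Q
    A[:k, k] = 1.0
    A[k, :k] = 1.0
    b = np.zeros(k + 1)
    b[k] = 1.0
    p = np.linalg.solve(A, b)[:k]
    p = np.clip(p, 0.0, None)          # guard against tiny negatives
    return p / p.sum()

# Example with 3 classes (pairwise estimates need not be mutually consistent).
r = np.array([[0.0, 0.8, 0.6],
              [0.2, 0.0, 0.7],
              [0.4, 0.3, 0.0]])
print(couple_pairwise(r))  # class probabilities summing to 1
```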
Related papers
- Probabilistic Contrastive Learning for Long-Tailed Visual Recognition [78.70453964041718]
Long-tailed distributions frequently emerge in real-world data, where a large number of minority categories contain a limited number of samples.
Recent investigations have revealed that supervised contrastive learning exhibits promising potential in alleviating the data imbalance.
We propose a novel probabilistic contrastive (ProCo) learning algorithm that estimates the data distribution of the samples from each class in the feature space.
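ProCo models each class's feature distribution on the (L2-normalized) embedding sphere; a von Mises-Fisher fit is one standard choice there, sketched below with the Banerjee et al. (2005) concentration approximation. The distributional form and estimator here are assumptions about the details, not a reproduction of the cited paper.
```python
import numpy as np

def fit_vmf(features):
    """Estimate vMF mean direction and concentration for one class.

    `features` is an (n, d) array of embeddings. The kappa formula is the
    standard Banerjee et al. (2005) approximation; the cited paper's
    exact estimator may differ.
    """
    x = features / np.linalg.norm(features, axis=1, keepdims=True)
    mean = x.mean(axis=0)
    r_bar = np.linalg.norm(mean)          # mean resultant length
    mu = mean / r_bar                     # mean direction on the sphere
    d = x.shape[1]
    kappa = r_bar * (d - r_bar ** 2) / (1.0 - r_bar ** 2)
    return mu, kappa
```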
arXiv Detail & Related papers (2024-03-11T13:44:49Z) - Unified Classification and Rejection: A One-versus-All Framework [47.58109235690227]
We build a unified framework for open set classification that handles both closed-set classification and OOD rejection.
By decomposing the $K$-class problem into $K$ one-versus-all (OVA) binary classification tasks, we show that combining the scores of OVA classifiers can give $(K+1)$-class posterior probabilities.
Experiments on popular OSR and OOD detection datasets demonstrate that the proposed framework, using a single multi-class classifier, yields competitive performance.
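One plausible construction behind "OVA scores give $(K+1)$-class posteriors" treats the $K$ sigmoid heads as independent and reserves the extra class for the event that no head fires (reject/OOD). The independence assumption below is illustrative, not necessarily the paper's exact combination rule.
```python
import numpy as np

def ova_to_k_plus_1(scores):
    """Map K one-vs-all sigmoid scores to a (K+1)-class posterior.

    probs[i] ~ head i positive, all other heads negative;
    probs[K] ~ no head positive, i.e. reject / out-of-distribution.
    Assumes independent heads -- an illustrative simplification.
    """
    s = np.asarray(scores, dtype=float)
    k = s.size
    neg = 1.0 - s
    probs = np.empty(k + 1)
    for i in range(k):
        probs[i] = s[i] * np.prod(np.delete(neg, i))
    probs[k] = np.prod(neg)
    return probs / probs.sum()

print(ova_to_k_plus_1([0.9, 0.05, 0.1]))  # mass concentrates on class 0
print(ova_to_k_plus_1([0.1, 0.05, 0.1]))  # mass shifts to the reject class
```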
arXiv Detail & Related papers (2023-11-22T12:47:12Z) - Anomaly Detection using Ensemble Classification and Evidence Theory [62.997667081978825]
We present a novel approach to anomaly detection using ensemble classification and evidence theory.
A pool selection strategy is presented to build a strong ensemble classifier.
Uncertainty estimates derived from the ensemble are used to flag anomalies.
arXiv Detail & Related papers (2022-12-23T00:50:41Z) - Beyond Adult and COMPAS: Fairness in Multi-Class Prediction [8.405162568925405]
We formulate this problem in terms of "projecting" a pre-trained (and potentially unfair) classifier onto the set of models that satisfy target group-fairness requirements.
We provide a parallelizable iterative algorithm for computing the projected classifier and derive both sample complexity and convergence guarantees.
We also evaluate our method at scale on an open dataset with multiple classes, multiple intersectional protected groups, and over 1M samples.
arXiv Detail & Related papers (2022-06-15T20:29:33Z) - Ensemble Classifier Design Tuned to Dataset Characteristics for Network
Intrusion Detection [0.0]
Two new algorithms are proposed to address the class overlap issue in the dataset.
The proposed design is evaluated for both binary and multi-category classification.
arXiv Detail & Related papers (2022-05-08T21:06:42Z) - pRSL: Interpretable Multi-label Stacking by Learning Probabilistic Rules [0.0]
We present probabilistic rule stacking (pRSL), which uses probabilistic propositional logic rules and belief propagation to combine the predictions of several underlying classifiers.
We derive algorithms for exact and approximate inference and learning, and show that pRSL reaches state-of-the-art performance on various benchmark datasets.
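To convey the flavor of combining classifier outputs with a probabilistic rule, the toy below scores a two-label space by exact enumeration (standing in for belief propagation, which scales to larger label sets); the rule, its confidence, and the base marginals are invented for the example.
```python
import itertools

# Base classifier marginals: P(label = 1) for labels A and B (made up).
p_a, p_b = 0.7, 0.4
# Soft rule "A -> B" holding with confidence 0.9 (made up).
rule_conf = 0.9

posterior = {}
for a, b in itertools.product([0, 1], repeat=2):
    prior = (p_a if a else 1 - p_a) * (p_b if b else 1 - p_b)
    satisfied = (not a) or b            # truth value of A -> B
    posterior[(a, b)] = prior * (rule_conf if satisfied else 1 - rule_conf)

z = sum(posterior.values())
for key in posterior:
    posterior[key] /= z
# The rule pulls P(B=1 | A=1) upward relative to the independent baseline.
print(posterior)
```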
arXiv Detail & Related papers (2021-05-28T14:06:21Z) - Scalable Personalised Item Ranking through Parametric Density Estimation [53.44830012414444]
Learning from implicit feedback is challenging because it is inherently a one-class problem.
Most conventional methods use a pairwise ranking approach and negative samplers to cope with the one-class problem.
We propose a learning-to-rank approach, which achieves convergence speed comparable to the pointwise counterpart.
arXiv Detail & Related papers (2021-05-11T03:38:16Z) - SetConv: A New Approach for Learning from Imbalanced Data [29.366843553056594]
We propose a set convolution operation and an episodic training strategy to extract a single representative for each class.
We prove that the proposed algorithm is invariant to the permutation of its inputs, as illustrated by the sketch below.
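A generic illustration of such permutation invariance is DeepSets-style mean pooling, sketched below; this is not the paper's SetConv operator, and the weights are arbitrary placeholders.
```python
import numpy as np

rng = np.random.default_rng(0)
W_phi = rng.normal(size=(5, 8))   # per-element map (hypothetical weights)
W_rho = rng.normal(size=(8, 3))   # post-pooling map

def set_embed(X):
    """Permutation-invariant set embedding: rho(mean(phi(x_i)))."""
    h = np.tanh(X @ W_phi)        # phi applied to each set element
    pooled = h.mean(axis=0)       # mean pooling erases input order
    return pooled @ W_rho

X = rng.normal(size=(7, 5))       # a set of 7 elements, 5 features each
perm = rng.permutation(7)
assert np.allclose(set_embed(X), set_embed(X[perm]))  # order-invariant
```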
arXiv Detail & Related papers (2021-04-03T22:33:30Z) - Binary Classification from Multiple Unlabeled Datasets via Surrogate Set
Classification [94.55805516167369]
We propose a new approach for binary classification from $m$ sets of unlabeled data (U-sets) for $m \ge 2$.
Our key idea is to consider an auxiliary classification task called surrogate set classification (SSC).
arXiv Detail & Related papers (2021-02-01T07:36:38Z) - Learning and Evaluating Representations for Deep One-class
Classification [59.095144932794646]
We present a two-stage framework for deep one-class classification.
We first learn self-supervised representations from one-class data, and then build one-class classifiers on learned representations.
In experiments, we demonstrate state-of-the-art performance on visual domain one-class classification benchmarks.
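A minimal sketch of the two-stage recipe, with stage one abstracted behind a hypothetical frozen `encode` function (any pretrained self-supervised encoder would do) and stage two a classical one-class model from scikit-learn:
```python
import numpy as np
from sklearn.svm import OneClassSVM

def fit_one_class(train_data, encode):
    """Stage 2: fit a one-class model on frozen stage-1 features."""
    feats = np.stack([encode(x) for x in train_data])
    return OneClassSVM(kernel="rbf", gamma="scale", nu=0.1).fit(feats)

def anomaly_score(model, x, encode):
    # Lower decision_function values mean more anomalous, so negate.
    return -model.decision_function(encode(x)[None, :])[0]
```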
arXiv Detail & Related papers (2020-11-04T23:33:41Z) - Towards Model-Agnostic Post-Hoc Adjustment for Balancing Ranking
Fairness and Algorithm Utility [54.179859639868646]
Bipartite ranking aims to learn a scoring function that ranks positive individuals higher than negative ones from labeled data.
There have been rising concerns on whether the learned scoring function can cause systematic disparity across different protected groups.
We propose a model post-processing framework for balancing them in the bipartite ranking scenario.
arXiv Detail & Related papers (2020-06-15T10:08:39Z)