Class Binarization to NeuroEvolution for Multiclass Classification
- URL: http://arxiv.org/abs/2308.13876v1
- Date: Sat, 26 Aug 2023 13:26:13 GMT
- Title: Class Binarization to NeuroEvolution for Multiclass Classification
- Authors: Gongjin Lan, Zhenyu Gao, Lingyao Tong, Ting Liu
- Abstract summary: Multiclass classification is a fundamental and challenging task in machine learning.
Decomposing multiclass classification into a set of binary classifications is called class binarization.
We propose a new method that applies Error-Correcting Output Codes (ECOC) to design class binarization strategies for neuroevolution in multiclass classification.
- Score: 9.179400849826216
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Multiclass classification is a fundamental and challenging task in machine
learning. Existing multiclass classification techniques can be categorized as
(i) decomposition into binary classification, (ii) extension from binary
classification, and (iii) hierarchical classification. Decomposing multiclass
classification into a set of binary classifications that can be efficiently
solved by binary classifiers is called class binarization; it is a popular
technique for multiclass classification. Neuroevolution, a general and powerful technique for
evolving the structure and weights of neural networks, has been successfully
applied to binary classification. In this paper, we apply class binarization
techniques to a neuroevolution algorithm, NeuroEvolution of Augmenting
Topologies (NEAT), that is used to generate neural networks for multiclass
classification. We propose a new method that applies Error-Correcting Output
Codes (ECOC) to design class binarization strategies for neuroevolution in
multiclass classification. The ECOC strategies are compared with the
One-vs-One and One-vs-All class binarization strategies on three well-known
datasets: Digit, Satellite, and Ecoli. We analyse their performance from four
aspects: multiclass classification degradation, accuracy, evolutionary
efficiency, and robustness. The results show that NEAT with ECOC achieves high
accuracy with low variance. In particular, it offers a flexible number of
binary classifiers and strong robustness.
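The class binarization idea above can be sketched with a minimal ECOC decoder. This is purely illustrative and not the paper's NEAT implementation: the codebook, class names, and codeword length are invented for the example. Each row of the codebook is a class codeword, each column corresponds to one binary classifier, and the predicted class is the one whose codeword is nearest in Hamming distance to the classifiers' outputs.

```python
# Hypothetical ECOC codebook for 3 classes and 5 binary classifiers.
# Minimum pairwise Hamming distance is 3, so any single wrong
# binary classifier can still be corrected at decoding time.
CODEBOOK = {
    "class_0": (+1, +1, +1, +1, +1),
    "class_1": (-1, -1, -1, +1, +1),
    "class_2": (+1, -1, -1, -1, -1),
}

def hamming(a, b):
    """Number of positions where two codewords disagree."""
    return sum(x != y for x, y in zip(a, b))

def ecoc_decode(outputs):
    """Map the binary classifiers' outputs to the nearest class codeword."""
    return min(CODEBOOK, key=lambda c: hamming(CODEBOOK[c], outputs))

# class_1's codeword with its first bit flipped (one classifier wrong)
# still decodes to class_1:
print(ecoc_decode((+1, -1, -1, +1, +1)))  # -> class_1
```

In the paper's setting, each column of such a codebook would define one binary task for a NEAT-evolved network; the decoding step is what gives ECOC its robustness to individual classifier errors.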
Related papers
- A Multi-Class SWAP-Test Classifier [0.0]
This work presents the first multi-class SWAP-Test classifier inspired by its binary predecessor and the use of label states in recent work.
In contrast to previous work, the number of qubits required, the measurement strategy, and the topology of the circuits used are invariant to the number of classes.
Both analytical results and numerical simulations show that this classifier is not only effective when applied to diverse classification problems but also robust to certain conditions of noise.
arXiv Detail & Related papers (2023-02-06T18:31:43Z) - Bi-directional Feature Reconstruction Network for Fine-Grained Few-Shot
Image Classification [61.411869453639845]
We introduce a bi-reconstruction mechanism that can simultaneously accommodate inter-class and intra-class variations.
This design effectively helps the model to explore more subtle and discriminative features.
Experimental results on three widely used fine-grained image classification datasets consistently show considerable improvements.
arXiv Detail & Related papers (2022-11-30T16:55:14Z) - Binary and Multinomial Classification through Evolutionary Symbolic
Regression [0.0]
We present three evolutionary symbolic regression-based classification algorithms for binary and multinomial datasets: GPClf, CartesianClf, and ClaSyCo.
Tested over 162 datasets and compared to three state-of-the-art machine learning algorithms -- XGBoost, LightGBM, and a deep neural network -- we find our algorithms to be competitive.
arXiv Detail & Related papers (2022-06-25T18:38:40Z) - Do We Really Need a Learnable Classifier at the End of Deep Neural
Network? [118.18554882199676]
We study the potential of learning a neural network for classification with the classifier randomly initialized as an equiangular tight frame (ETF) and fixed during training.
Our experimental results show that our method is able to achieve similar performances on image classification for balanced datasets.
arXiv Detail & Related papers (2022-03-17T04:34:28Z) - Self-Supervised Class Incremental Learning [51.62542103481908]
Existing Class Incremental Learning (CIL) methods are based on a supervised classification framework sensitive to data labels.
When updating them based on the new class data, they suffer from catastrophic forgetting: the model cannot discern old class data clearly from the new.
In this paper, we explore the performance of Self-Supervised representation learning in Class Incremental Learning (SSCIL) for the first time.
arXiv Detail & Related papers (2021-11-18T06:58:19Z) - Comparison of machine learning and deep learning techniques in promoter
prediction across diverse species [1.8899300124593648]
We studied methods for vector encoding and promoter classification using genome sequences of three higher eukaryotes viz. yeast, A. thaliana and human.
We found CNN to be superior in classifying promoters versus non-promoter sequences (binary classification) as well as in species-specific classification of promoter sequences (multiclass classification).
arXiv Detail & Related papers (2021-05-17T08:15:41Z) - Binary Classification from Multiple Unlabeled Datasets via Surrogate Set
Classification [94.55805516167369]
We propose a new approach for binary classification from $m$ U-sets for $m \ge 2$.
Our key idea is to consider an auxiliary classification task called surrogate set classification (SSC).
arXiv Detail & Related papers (2021-02-01T07:36:38Z) - Theoretical Insights Into Multiclass Classification: A High-dimensional
Asymptotic View [82.80085730891126]
We provide the first modern, precise analysis of linear multiclass classification.
Our analysis reveals that the classification accuracy is highly distribution-dependent.
The insights gained may pave the way for a precise understanding of other classification algorithms.
arXiv Detail & Related papers (2020-11-16T05:17:29Z) - Conditional Classification: A Solution for Computational Energy
Reduction [2.182419181054266]
We propose a novel solution to reduce the computational complexity of convolutional neural network models.
Our proposed technique breaks the classification task into two steps: 1) coarse-grain classification, in which input samples are classified among a set of hyper-classes; 2) fine-grain classification, in which the final labels are predicted within the hyper-class detected in the first step.
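The two-step flow described above can be sketched as follows. All names, hyper-class groupings, and routing rules here are hypothetical stand-ins for illustration, not the paper's actual models:

```python
# Hypothetical hyper-class grouping: each coarse label owns a
# disjoint set of fine-grained labels.
HYPER_CLASSES = {
    "animal": ["cat", "dog", "horse"],
    "vehicle": ["car", "truck", "bicycle"],
}

def coarse_classify(x):
    """Stand-in for the coarse-grain model: route by a single feature."""
    return "animal" if x["legs"] > 0 else "vehicle"

def fine_classify(hyper, x):
    """Stand-in for a fine-grain model restricted to one hyper-class."""
    labels = HYPER_CLASSES[hyper]
    return labels[x["size"] % len(labels)]  # toy rule for illustration

def conditional_classify(x):
    hyper = coarse_classify(x)      # step 1: cheap coarse decision
    return fine_classify(hyper, x)  # step 2: only one fine model runs

print(conditional_classify({"legs": 4, "size": 1}))  # -> dog
```

The computational saving comes from step 2: only the fine-grain model for the selected hyper-class is evaluated per input, rather than one large model over all classes.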
arXiv Detail & Related papers (2020-06-29T03:50:39Z) - Adversarial Multi-Binary Neural Network for Multi-class Classification [19.298875915675502]
We use a multi-task framework to address multi-class classification.
We employ adversarial training to distinguish the class-specific features and the class-agnostic features.
arXiv Detail & Related papers (2020-03-25T02:19:17Z) - Learning Class Regularized Features for Action Recognition [68.90994813947405]
We introduce a novel method named Class Regularization that performs class-based regularization of layer activations.
We show that using Class Regularization blocks in state-of-the-art CNN architectures for action recognition leads to systematic improvement gains of 1.8%, 1.2% and 1.4% on the Kinetics, UCF-101 and HMDB-51 datasets, respectively.
arXiv Detail & Related papers (2020-02-07T07:27:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.