Binary and Multinomial Classification through Evolutionary Symbolic
Regression
- URL: http://arxiv.org/abs/2206.12706v1
- Date: Sat, 25 Jun 2022 18:38:40 GMT
- Title: Binary and Multinomial Classification through Evolutionary Symbolic
Regression
- Authors: Moshe Sipper
- Abstract summary: We present three evolutionary symbolic regression-based classification algorithms for binary and multinomial datasets: GPLearnClf, CartesianClf, and ClaSyCo.
Tested over 162 datasets and compared to three state-of-the-art machine learning algorithms -- XGBoost, LightGBM, and a deep neural network -- we find our algorithms to be competitive.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present three evolutionary symbolic regression-based classification
algorithms for binary and multinomial datasets: GPLearnClf, CartesianClf, and
ClaSyCo. Tested over 162 datasets and compared to three state-of-the-art
machine learning algorithms -- XGBoost, LightGBM, and a deep neural network --
we find our algorithms to be competitive. Further, we demonstrate how to find
the best method for one's dataset automatically, through the use of a
state-of-the-art hyperparameter optimizer.
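The core idea behind symbolic-regression classification -- evolving arithmetic expressions over the features and squashing their output through a sigmoid to obtain a class label -- can be sketched as follows. This is a minimal toy illustration under our own assumptions, not the paper's actual GPLearnClf, CartesianClf, or ClaSyCo implementations; all function names (`random_tree`, `evolve`, etc.) are illustrative:

```python
import random, math

# Operators available to evolved expressions.
OPS = [('+', lambda a, b: a + b),
       ('-', lambda a, b: a - b),
       ('*', lambda a, b: a * b)]

def random_tree(n_features, depth=2):
    """Build a random expression tree over the input features."""
    if depth == 0 or random.random() < 0.3:
        if random.random() < 0.5:
            return ('x', random.randrange(n_features))   # feature leaf
        return ('c', random.uniform(-1, 1))              # constant leaf
    name, fn = random.choice(OPS)
    return (name, fn,
            random_tree(n_features, depth - 1),
            random_tree(n_features, depth - 1))

def evaluate(tree, x):
    if tree[0] == 'x':
        return x[tree[1]]
    if tree[0] == 'c':
        return tree[1]
    return tree[1](evaluate(tree[2], x), evaluate(tree[3], x))

def predict(tree, x):
    # sigmoid-squash the expression's output, threshold at 0.5
    z = max(-50.0, min(50.0, evaluate(tree, x)))
    return 1 if 1.0 / (1.0 + math.exp(-z)) >= 0.5 else 0

def accuracy(tree, X, y):
    return sum(predict(tree, xi) == yi for xi, yi in zip(X, y)) / len(y)

def evolve(X, y, pop=60, gens=30, seed=0):
    """Crude truncation-selection loop: keep the fitter half,
    refill with fresh random trees each generation."""
    random.seed(seed)
    n = len(X[0])
    population = [random_tree(n) for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda t: -accuracy(t, X, y))
        population = population[:pop // 2] + \
                     [random_tree(n) for _ in range(pop - pop // 2)]
    return max(population, key=lambda t: accuracy(t, X, y))

# toy linearly separable data: label is 1 iff x0 > x1
X = [(0.2, 0.8), (0.9, 0.1), (0.4, 0.6), (0.7, 0.3), (0.1, 0.5), (0.8, 0.2)]
y = [0, 1, 0, 1, 0, 1]
best = evolve(X, y)
print(accuracy(best, X, y))
```

A real system would replace the naive refill step with crossover and mutation, and the per-dataset choice among such methods is what the abstract's hyperparameter optimizer automates.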
Related papers
- Ensemble Quadratic Assignment Network for Graph Matching [52.20001802006391]
Graph matching is a commonly used technique in computer vision and pattern recognition.
Recent data-driven approaches have improved the graph matching accuracy remarkably.
We propose a graph neural network (GNN) based approach to combine the advantages of data-driven and traditional methods.
arXiv Detail & Related papers (2024-03-11T06:34:05Z)
- Class Binarization to NeuroEvolution for Multiclass Classification [9.179400849826216]
Multiclass classification is a fundamental and challenging task in machine learning.
Decomposing multiclass classification into a set of binary classifications is called class binarization.
We propose a new method that applies Error-Correcting Output Codes (ECOC) to design the class binarization strategies on the neuroevolution for multiclass classification.
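The decoding side of an ECOC class-binarization scheme can be sketched briefly: each class is assigned a binary codeword, one binary classifier is trained per codeword column, and a sample is assigned to the class whose codeword is nearest in Hamming distance to the predicted bit vector. The codebook and predicted bits below are illustrative, not from the paper:

```python
# Hypothetical 3-class codebook with 5-bit codewords (one binary
# classifier would be trained per bit position).
CODEBOOK = {
    'A': (0, 0, 1, 1, 0),
    'B': (1, 0, 0, 1, 1),
    'C': (0, 1, 1, 0, 1),
}

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def decode(bits):
    # nearest-codeword decoding: a few flipped bits from erring
    # binary classifiers can still yield the correct class
    return min(CODEBOOK, key=lambda c: hamming(CODEBOOK[c], bits))

# the binary classifiers predicted B's codeword with one bit flipped
print(decode((1, 0, 0, 0, 1)))  # prints B
```

The error-correcting property is the point: redundancy in the codewords lets the ensemble tolerate mistakes by individual binary classifiers.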
arXiv Detail & Related papers (2023-08-26T13:26:13Z)
- Continuous Cartesian Genetic Programming based representation for Multi-Objective Neural Architecture Search [12.545742558041583]
We propose a novel approach for designing less complex yet highly effective convolutional neural networks (CNNs).
Our approach combines real-based and block-chained CNN representations based on Cartesian genetic programming (CGP) for neural architecture search (NAS).
Two variants are introduced that differ in the granularity of the search space they consider.
arXiv Detail & Related papers (2023-06-05T07:32:47Z)
- HKNAS: Classification of Hyperspectral Imagery Based on Hyper Kernel Neural Architecture Search [104.45426861115972]
We propose to directly generate structural parameters by utilizing specially designed hyper kernels.
We obtain three kinds of networks to separately conduct pixel-level or image-level classifications with 1-D or 3-D convolutions.
A series of experiments on six public datasets demonstrate that the proposed methods achieve state-of-the-art results.
arXiv Detail & Related papers (2023-04-23T17:27:40Z)
- An algorithmic framework for the optimization of deep neural networks architectures and hyperparameters [0.23301643766310373]
We propose an algorithmic framework to automatically generate efficient deep neural networks.
The framework is based on evolving directed acyclic graphs (DAGs).
It allows mixtures of classical operations -- convolutions, recurrences, and dense layers -- as well as newer operations such as self-attention.
arXiv Detail & Related papers (2023-02-27T08:00:33Z)
- Riemannian classification of EEG signals with missing values [67.90148548467762]
This paper proposes two strategies to handle missing data for the classification of electroencephalograms.
The first approach estimates the covariance from imputed data with the $k$-nearest neighbors algorithm; the second relies on the observed data by leveraging the observed-data likelihood within an expectation-maximization algorithm.
As the results show, the proposed strategies outperform classification based on the observed data alone and maintain high accuracy even as the missing-data ratio increases.
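The first strategy -- imputing missing entries from the $k$ nearest neighbors before estimating the covariance -- can be sketched in a few lines. This is a generic illustration under our own assumptions, not the paper's EEG pipeline; `knn_impute` and the toy data are hypothetical:

```python
import math

def knn_impute(X, k=2):
    """Fill None entries in each row with the mean of that coordinate
    over the row's k nearest neighbors (distance computed on the
    coordinates observed in both rows)."""
    filled = [list(row) for row in X]
    for i, row in enumerate(X):
        miss = [j for j, v in enumerate(row) if v is None]
        if not miss:
            continue
        obs = [j for j, v in enumerate(row) if v is not None]

        def dist(other):
            common = [j for j in obs if other[j] is not None]
            if not common:
                return float('inf')
            return math.sqrt(sum((row[j] - other[j]) ** 2 for j in common))

        neighbors = sorted((r for idx, r in enumerate(X) if idx != i),
                           key=dist)[:k]
        for j in miss:
            vals = [n[j] for n in neighbors if n[j] is not None]
            filled[i][j] = sum(vals) / len(vals) if vals else 0.0
    return filled

# toy data: the second sample is missing its second coordinate
X = [[1.0, 2.0], [1.1, None], [5.0, 6.0], [1.2, 2.2]]
out = knn_impute(X, k=2)
print(out[1][1])  # mean of the two nearest neighbors' second coordinate
```

The covariance matrix can then be estimated from the completed data; the paper's second strategy avoids imputation entirely by maximizing the observed-data likelihood with expectation-maximization.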
arXiv Detail & Related papers (2021-10-19T14:24:50Z)
- Overhead-MNIST: Machine Learning Baselines for Image Classification [0.0]
Twenty-three machine learning algorithms were trained and then scored to establish baseline comparison metrics.
The Overhead-MNIST dataset is a collection of satellite images similar in style to the ubiquitous MNIST hand-written digits.
We present results for the overall best performing algorithm as a baseline for edge deployability and future performance improvement.
arXiv Detail & Related papers (2021-07-01T13:30:39Z)
- An in-depth comparison of methods handling mixed-attribute data for general fuzzy min-max neural network [9.061408029414455]
We compare and assess three main methods of handling datasets with mixed features.
The experimental results show that target and James-Stein encodings are appropriate categorical encoding methods for the learning algorithms of GFMM models.
Combining GFMM neural networks with decision trees is a flexible way to enhance the classification performance of GFMM models on datasets with mixed features.
arXiv Detail & Related papers (2020-09-01T05:12:22Z)
- Unsupervised Deep Cross-modality Spectral Hashing [65.3842441716661]
The framework is a two-step hashing approach which decouples the optimization into binary optimization and hashing function learning.
We propose a novel spectral embedding-based algorithm to simultaneously learn single-modality and binary cross-modality representations.
We leverage a powerful CNN for images and propose a CNN-based deep architecture to learn the text modality.
arXiv Detail & Related papers (2020-08-01T09:20:11Z)
- Optimizing Memory Placement using Evolutionary Graph Reinforcement Learning [56.83172249278467]
We introduce Evolutionary Graph Reinforcement Learning (EGRL), a method designed for large search spaces.
We train and validate our approach directly on the Intel NNP-I chip for inference.
We additionally achieve 28-78% speed-up compared to the native NNP-I compiler on all three workloads.
arXiv Detail & Related papers (2020-07-14T18:50:12Z)
- Binarizing MobileNet via Evolution-based Searching [66.94247681870125]
We propose using evolutionary search to facilitate the construction and training scheme when binarizing MobileNet.
Inspired by one-shot architecture search frameworks, we adapt the idea of group convolution to design efficient 1-Bit Convolutional Neural Networks (CNNs).
Our objective is to come up with a tiny yet efficient binary neural architecture by exploring the best candidates of the group convolution.
arXiv Detail & Related papers (2020-05-13T13:25:51Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.