Soft Genetic Programming Binary Classifiers
- URL: http://arxiv.org/abs/2101.08742v1
- Date: Thu, 21 Jan 2021 17:43:11 GMT
- Title: Soft Genetic Programming Binary Classifiers
- Authors: Ivan Gridin
- Abstract summary: "Soft" genetic programming (SGP) has been developed, which allows the logical operator tree to be more flexible and find dependencies in datasets.
This article discusses a method for constructing binary classifiers using the SGP technique.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The study of the classifier's design and its usage is one of the most
important machine learning areas. With the development of automatic machine
learning methods, various approaches are used to build a robust classifier
model. Due to implementation difficulty and customization complexity,
genetic programming (GP) methods are not often used to construct classifiers.
GP classifiers have several limitations and disadvantages. However, the concept
of "soft" genetic programming (SGP) has been developed, which allows the
logical operator tree to be more flexible and find dependencies in datasets,
which gives promising results in most cases. This article discusses a method
for constructing binary classifiers using the SGP technique. The test results
are presented. Source code - https://github.com/survexman/sgp_classifier.
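The abstract describes SGP as making the logical operator tree "soft", i.e. continuous rather than hard Boolean. As a rough illustration of that idea, the sketch below evaluates a fixed tree of softened logical operators on features normalized to [0, 1]. The operator definitions (product t-norm for AND, probabilistic sum for OR) are standard fuzzy-logic forms chosen for illustration; the paper's exact operators, tree representation, and evolutionary search are in the linked source code and may differ.

```python
# Illustrative "soft" logical operator tree for binary classification.
# Assumption: inputs are normalized to [0, 1], so every intermediate
# value also stays in [0, 1] and the root score can be thresholded.

def soft_and(a, b):
    # Product t-norm: a soft, differentiable stand-in for Boolean AND.
    return a * b

def soft_or(a, b):
    # Probabilistic sum: a soft stand-in for Boolean OR.
    return a + b - a * b

def soft_not(a):
    return 1.0 - a

def classify(x1, x2, x3, threshold=0.5):
    """Evaluate a hand-written soft tree: (x1 AND x2) OR (NOT x3).

    In SGP such a tree would be evolved by genetic programming;
    here it is fixed purely to show how evaluation works.
    """
    score = soft_or(soft_and(x1, x2), soft_not(x3))
    label = 1 if score >= threshold else 0
    return label, score

label, score = classify(0.9, 0.8, 0.7)
```

Because every node outputs a value in [0, 1], the root score behaves like a confidence, which is what lets a soft tree "find dependencies" gradually instead of flipping hard Boolean decisions.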
Related papers
- Genetic Instruct: Scaling up Synthetic Generation of Coding Instructions for Large Language Models [54.51932175059004]
We introduce a scalable method for generating synthetic instructions to enhance the code generation capability of Large Language Models.
The proposed algorithm, Genetic-Instruct, mimics evolutionary processes, utilizing self-instruction to create numerous synthetic samples from a limited number of seeds.
arXiv Detail & Related papers (2024-07-29T20:42:59Z) - Leveraging Generative AI: Improving Software Metadata Classification
with Generated Code-Comment Pairs [0.0]
In software development, code comments play a crucial role in enhancing code comprehension and collaboration.
This research paper addresses the challenge of objectively classifying code comments as "Useful" or "Not Useful".
We propose a novel solution that harnesses contextualized embeddings, particularly BERT, to automate this classification process.
arXiv Detail & Related papers (2023-10-14T12:09:43Z) - Dynamic Perceiver for Efficient Visual Recognition [87.08210214417309]
We propose Dynamic Perceiver (Dyn-Perceiver) to decouple the feature extraction procedure and the early classification task.
A feature branch serves to extract image features, while a classification branch processes a latent code assigned for classification tasks.
Early exits are placed exclusively within the classification branch, thus eliminating the need for linear separability in low-level features.
arXiv Detail & Related papers (2023-06-20T03:00:22Z) - Equivariance with Learned Canonicalization Functions [77.32483958400282]
We show that learning a small neural network to perform canonicalization is better than using predefined ones.
Our experiments show that learning the canonicalization function is competitive with existing techniques for learning equivariant functions across many tasks.
arXiv Detail & Related papers (2022-11-11T21:58:15Z) - Explaining Classifiers Trained on Raw Hierarchical Multiple-Instance
Data [0.0]
A number of data sources have the natural form of structured data interchange formats (e.g., multiple security logs in XML format).
Existing methods, such as Hierarchical Multiple-Instance Learning (HMIL), allow learning from such data in their raw form.
By treating these models as sub-set selection problems, we demonstrate how interpretable explanations, with favourable properties, can be generated using computationally efficient algorithms.
We compare against an explanation technique adapted from graph neural networks, showing an order-of-magnitude speed-up and higher-quality explanations.
arXiv Detail & Related papers (2022-08-04T14:48:37Z) - Deep ensembles in bioimage segmentation [74.01883650587321]
In this work, we propose an ensemble of convolutional neural networks (CNNs).
In ensemble methods, many different models are trained and then used for classification; the ensemble aggregates the outputs of the individual classifiers.
The proposed ensemble is implemented by combining different backbone networks using the DeepLabV3+ and HarDNet environment.
arXiv Detail & Related papers (2021-12-24T05:54:21Z) - mlf-core: a framework for deterministic machine learning [0.08795040582681389]
Major machine learning libraries default to the usage of non-deterministic algorithms based on atomic operations.
To overcome this shortcoming, various machine learning libraries released deterministic counterparts to the non-deterministic algorithms.
We developed a new software solution, mlf-core, which helps machine learning projects meet and maintain these requirements.
arXiv Detail & Related papers (2021-04-15T17:58:03Z) - GP-Tree: A Gaussian Process Classifier for Few-Shot Incremental Learning [23.83961717568121]
GP-Tree is a novel method for multi-class classification with Gaussian processes and deep kernel learning.
We develop a tree-based hierarchical model in which each internal node fits a GP to the data.
Our method scales well with both the number of classes and data size.
arXiv Detail & Related papers (2021-02-15T22:16:27Z) - Solving Mixed Integer Programs Using Neural Networks [57.683491412480635]
This paper applies learning to the two key sub-tasks of a MIP solver, generating a high-quality joint variable assignment, and bounding the gap in objective value between that assignment and an optimal one.
Our approach constructs two corresponding neural network-based components, Neural Diving and Neural Branching, to use in a base MIP solver such as SCIP.
We evaluate our approach on six diverse real-world datasets, including two Google production datasets and MIPLIB, by training separate neural networks on each.
arXiv Detail & Related papers (2020-12-23T09:33:11Z) - Searching towards Class-Aware Generators for Conditional Generative
Adversarial Networks [132.29772160843825]
Conditional Generative Adversarial Networks (cGAN) were designed to generate images based on the provided conditions.
Existing methods have used the same generating architecture for all classes.
This paper presents a novel idea that adopts NAS to find a distinct architecture for each class.
arXiv Detail & Related papers (2020-06-25T07:05:28Z) - Applying Genetic Programming to Improve Interpretability in Machine
Learning Models [0.3908287552267639]
We propose a Genetic Programming (GP) based approach, named Genetic Programming Explainer (GPX)
The method generates a noise set located in the neighborhood of the point of interest, whose prediction should be explained, and fits a local explanation model for the analyzed sample.
Our results indicate that the GPX is able to produce more accurate understanding of complex models than the state of the art.
arXiv Detail & Related papers (2020-05-18T16:09:49Z)
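The GPX entry above describes a LIME-style procedure: sample a noise set around the point of interest, query the complex model, and fit a local explanation model. The sketch below follows those three steps; note that GPX evolves a symbolic GP expression as the surrogate, whereas this sketch substitutes a plain least-squares line, and all function names are illustrative rather than taken from the paper.

```python
# Sketch of the local-explanation loop described for GPX. A linear
# fit stands in for the evolved GP expression (an assumption made
# for brevity, not the paper's actual surrogate).
import random

def explain_locally(black_box, x0, n_samples=200, sigma=0.1, seed=0):
    """Fit a simple local surrogate around the point of interest x0."""
    rng = random.Random(seed)
    # 1. Generate a "noise set" in the neighborhood of x0.
    xs = [x0 + rng.gauss(0.0, sigma) for _ in range(n_samples)]
    # 2. Query the complex model on each perturbed sample.
    ys = [black_box(x) for x in xs]
    # 3. Fit a least-squares line as the local explanation model.
    mx = sum(xs) / n_samples
    my = sum(ys) / n_samples
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    return slope, intercept

# Around x0 = 1, the model x**2 behaves locally like 2*x - 1, so the
# fitted surrogate should recover roughly that slope and intercept.
slope, intercept = explain_locally(lambda x: x ** 2, 1.0)
```

Swapping the linear fit for a GP-evolved symbolic expression in step 3 is what distinguishes GPX from linear surrogate methods: the explanation can then capture local non-linear structure.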
This list is automatically generated from the titles and abstracts of the papers in this site.