Multi-Objective Optimisation of Multi-Output Neural Trees
- URL: http://arxiv.org/abs/2010.04524v2
- Date: Fri, 18 Feb 2022 17:38:57 GMT
- Title: Multi-Objective Optimisation of Multi-Output Neural Trees
- Authors: Varun Ojha and Giuseppe Nicosia
- Abstract summary: We propose a multi-output neural tree (MONT) algorithm, an evolutionary learning algorithm trained by the non-dominated sorting genetic algorithm III (NSGA-III).
We use nine benchmark classification learning problems to evaluate the performance of the MONT.
MONT outperformed a set of well-known classifiers on the problems tackled in this study.
- Score: 1.000779758350696
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose a new method for tackling classification problems: the
multi-output neural tree (MONT) algorithm, an evolutionary learning algorithm
trained by the non-dominated sorting genetic algorithm III (NSGA-III). Since
evolutionary learning is stochastic, the hypothesis found in the form of a MONT
is unique to each run, i.e., each generated hypothesis (tree) has properties
distinct from those of any other hypothesis, in both topology space and
parameter space. This leads to a challenging optimisation problem whose aim is
to minimise tree size and maximise classification accuracy.
Pareto-optimality was therefore assessed using hypervolume indicator analysis. We used
nine benchmark classification learning problems to evaluate the performance of
the MONT. Our experiments produced MONTs able to tackle the classification
problems with high accuracy. MONT outperformed a set of well-known classifiers
on the problems tackled in this study: multilayer perceptron, reduced-error
pruning tree, naive Bayes classifier, decision tree, and support vector
machine. Moreover, a comparison of three versions of MONT training, using
genetic programming, NSGA-II, and NSGA-III, suggests that NSGA-III yields the
best Pareto-optimal solutions.
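The abstract frames each evolved tree as a point in a two-objective space (tree size to minimise, accuracy to maximise) and compares fronts with the hypervolume indicator. Below is a minimal sketch of that indicator for the two-objective case; the point set, the error formulation (1 - accuracy, so both objectives are minimised), and the reference point are illustrative assumptions, not values from the paper.

```python
# Minimal 2-objective hypervolume indicator (both objectives minimised):
# f1 = tree size, f2 = classification error (1 - accuracy).
# The points and reference point are illustrative, not values from the paper.

def pareto_front(points):
    """Keep only non-dominated points (smaller is better in both objectives)."""
    return [p for p in points
            if not any(q != p and q[0] <= p[0] and q[1] <= p[1] for q in points)]

def hypervolume_2d(points, ref):
    """Area dominated by the front and bounded by the reference point."""
    front = sorted(pareto_front(points))        # ascending in f1, descending in f2
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in front:
        hv += (ref[0] - f1) * (prev_f2 - f2)    # add one rectangular strip
        prev_f2 = f2
    return hv

# Hypothetical (tree_size, error) pairs for four evolved MONTs.
population = [(12, 0.10), (25, 0.06), (40, 0.04), (30, 0.08)]
print(hypervolume_2d(population, ref=(50, 0.20)))   # larger is better
```

A larger hypervolume means the front pushes closer to small, accurate trees, which is how runs of the stochastic search can be compared.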
Related papers
- Compact NSGA-II for Multi-objective Feature Selection [0.24578723416255746]
We define feature selection as a multi-objective binary optimization task with the objectives of maximizing classification accuracy and minimizing the number of selected features.
To select optimal features, we propose a binary Compact NSGA-II (CNSGA-II) algorithm.
To the best of our knowledge, this is the first compact multi-objective algorithm proposed for feature selection.
arXiv Detail & Related papers (2024-02-20T01:10:12Z)
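As a rough illustration of the bi-objective fitness described in the entry above (maximise accuracy, minimise selected features), here is a minimal sketch; the synthetic dataset and the k-NN accuracy proxy are assumptions for illustration, not the CNSGA-II setup.

```python
# Bi-objective fitness for binary feature selection: maximise cross-validated
# accuracy, minimise the number of selected features. The synthetic data and
# the k-NN accuracy proxy are illustrative, not the CNSGA-II setup.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=200, n_features=20, n_informative=5,
                           random_state=0)

def fitness(mask):
    """Return (accuracy, n_selected) for one binary feature mask."""
    idx = np.flatnonzero(mask)
    if idx.size == 0:                 # no features selected: worst accuracy
        return 0.0, 0
    acc = cross_val_score(KNeighborsClassifier(), X[:, idx], y, cv=5).mean()
    return acc, int(idx.size)

rng = np.random.default_rng(0)
mask = rng.integers(0, 2, size=X.shape[1])   # one random candidate solution
print(fitness(mask))
```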
- A Mathematical Runtime Analysis of the Non-dominated Sorting Genetic Algorithm III (NSGA-III) [9.853329403413701]
The Non-dominated Sorting Genetic Algorithm II (NSGA-II) is the most prominent multi-objective evolutionary algorithm for real-world applications.
We provide the first mathematical runtime analysis of the NSGA-III, a refinement of the NSGA-II aimed at better handling more than two objectives.
arXiv Detail & Related papers (2022-11-15T15:10:36Z)
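The runtime analyses above centre on the non-dominated sorting step that NSGA-II and NSGA-III share. A simple, unoptimised version of that sorting (not Deb's fast variant) is sketched below with illustrative points.

```python
# Simple (unoptimised) non-dominated sorting: partition objective vectors
# into successive Pareto fronts. All objectives are minimised; the example
# points are illustrative.

def dominates(p, q):
    """p dominates q: no worse in every objective, strictly better in one."""
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def non_dominated_sort(points):
    fronts, remaining = [], list(points)
    while remaining:
        front = [p for p in remaining
                 if not any(dominates(q, p) for q in remaining)]
        fronts.append(front)
        remaining = [p for p in remaining if p not in front]
    return fronts

pop = [(1, 5), (2, 3), (4, 1), (3, 4), (5, 5)]
for rank, front in enumerate(non_dominated_sort(pop)):
    print("front", rank, front)
```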
- Towards Better Out-of-Distribution Generalization of Neural Algorithmic Reasoning Tasks [51.8723187709964]
We study the OOD generalization of neural algorithmic reasoning tasks.
The goal is to learn an algorithm from input-output pairs using deep neural networks.
arXiv Detail & Related papers (2022-11-01T18:33:20Z)
- Margin Optimal Classification Trees [0.0]
We present a novel mixed-integer formulation for the Optimal Classification Tree (OCT) problem.
Our model, denoted as Margin Optimal Classification Tree (MARGOT), exploits the generalization capabilities of Support Vector Machines for binary classification.
To enhance the interpretability of our approach, we analyse two alternative versions of MARGOT, which include feature selection constraints inducing local sparsity of the hyperplanes.
arXiv Detail & Related papers (2022-10-19T14:08:56Z)
- Improved Algorithms for Neural Active Learning [74.89097665112621]
We improve the theoretical and empirical performance of neural-network(NN)-based active learning algorithms for the non-parametric streaming setting.
We introduce two regret metrics, based on minimizing the population loss, that are more suitable for active learning than the one used in state-of-the-art (SOTA) related work.
arXiv Detail & Related papers (2022-10-02T05:03:38Z)
- Lookback for Learning to Branch [77.32867454769936]
Bipartite Graph Neural Networks (GNNs) have been shown to be an important component of deep learning based Mixed-Integer Linear Program (MILP) solvers.
Recent works have demonstrated the effectiveness of such GNNs in replacing the branching (variable selection) in branch-and-bound (B&B) solvers.
arXiv Detail & Related papers (2022-06-30T02:33:32Z)
- Handling Imbalanced Classification Problems With Support Vector Machines via Evolutionary Bilevel Optimization [73.17488635491262]
Support vector machines (SVMs) are popular learning algorithms to deal with binary classification problems.
This article introduces EBCS-SVM: evolutionary bilevel cost-sensitive SVMs.
arXiv Detail & Related papers (2022-04-21T16:08:44Z)
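EBCS-SVM couples an evolutionary search at the upper level with SVM training at the lower level. As a stand-in sketch, the snippet below replaces the evolutionary level with a plain random search over the minority-class cost; the dataset, search range, and scoring are illustrative assumptions, not the paper's method.

```python
# Minimal stand-in for bilevel cost-sensitive SVM tuning: an outer random
# search over class weights (a simplified "evolutionary" level) around an
# inner SVM training step. Illustrative only; not the EBCS-SVM algorithm.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Imbalanced binary problem (roughly a 9:1 class ratio).
X, y = make_classification(n_samples=300, weights=[0.9, 0.1], random_state=0)

rng = np.random.default_rng(0)
best = (-1.0, 1.0)
for _ in range(20):                                   # outer search loop
    w = float(rng.uniform(1.0, 20.0))                 # minority-class cost
    clf = SVC(class_weight={0: 1.0, 1: w})            # inner cost-sensitive SVM
    score = cross_val_score(clf, X, y, cv=5,
                            scoring="balanced_accuracy").mean()
    best = max(best, (score, w))
print("balanced accuracy %.3f with minority weight %.1f" % best)
```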
- Understanding Interlocking Dynamics of Cooperative Rationalization [90.6863969334526]
Selective rationalization explains the predictions of complex neural networks by finding a small subset of the input that is sufficient to predict the model's output.
We reveal a major problem with this cooperative rationalization paradigm: model interlocking.
We propose a new rationalization framework, called A2R, which introduces a third component into the architecture, a predictor driven by soft attention as opposed to selection.
arXiv Detail & Related papers (2021-10-26T17:39:18Z)
- Chaos inspired Particle Swarm Optimization with Levy Flight for Genome Sequence Assembly [0.0]
In this paper, we propose a new variant of PSO to address the permutation-optimization problem.
PSO is integrated with Chaos and Levy Flight (a random-walk algorithm) to effectively balance the exploration and exploitation capabilities of the algorithm.
Empirical experiments are conducted to evaluate the performance of the proposed method in comparison to the other variants of PSO proposed in the literature.
arXiv Detail & Related papers (2021-10-20T15:24:27Z)
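The Levy flight mixed into PSO above is a heavy-tailed random walk; a minimal step generator using Mantegna's algorithm is sketched below. The beta value and the way the step is applied are illustrative assumptions, not the paper's variant.

```python
# Levy-flight step generator (Mantegna's algorithm): heavy-tailed random
# steps whose occasional long jumps aid exploration. The beta value and the
# usage shown are illustrative assumptions.
import math
import numpy as np

def levy_step(rng, size, beta=1.5):
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))
             ) ** (1 / beta)
    u = rng.normal(0.0, sigma, size)       # heavy-tailed numerator
    v = rng.normal(0.0, 1.0, size)
    return u / np.abs(v) ** (1 / beta)

rng = np.random.default_rng(0)
position = np.zeros(5)
position += 0.01 * levy_step(rng, 5)       # one exploratory jump of a particle
print(position)
```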
- MurTree: Optimal Classification Trees via Dynamic Programming and Search [61.817059565926336]
We present a novel algorithm for learning optimal classification trees based on dynamic programming and search.
Our approach uses only a fraction of the time required by the state-of-the-art and can handle datasets with tens of thousands of instances.
arXiv Detail & Related papers (2020-07-24T17:06:55Z)
- On the Performance of Metaheuristics: A Different Perspective [0.0]
We study some basic evolutionary and swarm-intelligence metaheuristics, i.e., Genetic Algorithm (GA), Particle Swarm Optimization (PSO), Artificial Bee Colony (ABC), Teaching-Learning-Based Optimization (TLBO), and Cuckoo Optimization Algorithm (COA).
A large number of experiments were conducted on 20 optimization benchmark functions with different characteristics, and the results reveal some fundamental conclusions in addition to a ranking order among these metaheuristics.
arXiv Detail & Related papers (2020-01-24T09:34:10Z)
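A comparison like the one above boils down to running each metaheuristic repeatedly on benchmark functions and summarising the best fitness found. The harness below is a minimal sketch: two toy searchers and the sphere function stand in for the five algorithms and the 20 benchmarks studied.

```python
# Minimal harness for comparing metaheuristics: repeated runs on a benchmark
# function, summarised by mean best fitness. The two toy searchers and the
# sphere function are illustrative stand-ins for GA/PSO/ABC/TLBO/COA and the
# 20 benchmark functions.
import numpy as np

def sphere(x):
    return float(np.sum(x * x))             # global minimum 0 at the origin

def random_search(rng, dim=10, evals=2000):
    return min(sphere(rng.uniform(-5, 5, dim)) for _ in range(evals))

def one_plus_one_es(rng, dim=10, evals=2000, step=0.5):
    x = rng.uniform(-5, 5, dim)
    fx = sphere(x)
    for _ in range(evals - 1):
        y = x + rng.normal(0.0, step, dim)  # Gaussian mutation
        fy = sphere(y)
        if fy <= fx:                        # greedy (1+1) selection
            x, fx = y, fy
    return fx

for algo in (random_search, one_plus_one_es):
    results = [algo(np.random.default_rng(seed)) for seed in range(10)]
    print(algo.__name__, "mean best:", round(float(np.mean(results)), 4))
```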
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.