A Novel Surrogate-assisted Evolutionary Algorithm Applied to
Partition-based Ensemble Learning
- URL: http://arxiv.org/abs/2104.08048v1
- Date: Fri, 16 Apr 2021 11:51:18 GMT
- Title: A Novel Surrogate-assisted Evolutionary Algorithm Applied to
Partition-based Ensemble Learning
- Authors: Arkadiy Dushatskiy, Tanja Alderliesten, Peter A. N. Bosman
- Abstract summary: We propose a novel surrogate-assisted Evolutionary Algorithm for solving expensive combinatorial optimization problems.
We integrate a surrogate model, which is used for fitness value estimation, into a state-of-the-art P3-like variant of the Gene-pool Optimal Mixing Evolutionary Algorithm (GOMEA).
We test the proposed algorithm on an ensemble learning problem.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a novel surrogate-assisted Evolutionary Algorithm for solving
expensive combinatorial optimization problems. We integrate a surrogate model,
which is used for fitness value estimation, into a state-of-the-art P3-like
variant of the Gene-Pool Optimal Mixing Algorithm (GOMEA) and adapt the
resulting algorithm for solving non-binary combinatorial problems. We test the
proposed algorithm on an ensemble learning problem. Ensembling several models
is a common Machine Learning technique to achieve better performance. We
consider ensembles of several models trained on disjoint subsets of a dataset.
Finding the best dataset partitioning is naturally a combinatorial non-binary
optimization problem. Fitness function evaluations can be extremely expensive
if complex models, such as Deep Neural Networks, are used as learners in an
ensemble. Therefore, the number of fitness function evaluations is typically
limited, necessitating expensive optimization techniques. In our experiments we
use five classification datasets from the OpenML-CC18 benchmark and
Support-vector Machines as learners in an ensemble. The proposed algorithm
demonstrates better performance than alternative approaches, including Bayesian
optimization algorithms. It manages to find better solutions using just several
thousand fitness function evaluations for an ensemble learning problem with up
to 500 variables.
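To make the setup above concrete, here is a minimal Python sketch (not the authors' code; the function names, the soft-voting rule, and the random-forest surrogate are illustrative assumptions). It shows the two ingredients the abstract describes: an expensive fitness function that trains one SVM per disjoint subset of a candidate partitioning, and a surrogate-screening step that spends a real evaluation only on the candidate the surrogate ranks best.

import numpy as np
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestRegressor

def evaluate_partition(assignment, X_train, y_train, X_val, y_val):
    # Expensive true fitness: assignment[i] in {0, ..., k-1} places
    # training sample i into one of k disjoint subsets.
    classes = np.unique(y_train)
    k = int(assignment.max()) + 1
    models = []
    for j in range(k):
        idx = np.flatnonzero(assignment == j)
        # Treat a subset that misses a class as infeasible (fitness 0),
        # so every learner predicts over the same label set.
        if not np.array_equal(np.unique(y_train[idx]), classes):
            return 0.0
        models.append(SVC(probability=True).fit(X_train[idx], y_train[idx]))
    # Soft-voting ensemble: average class probabilities over the k SVMs.
    proba = np.mean([m.predict_proba(X_val) for m in models], axis=0)
    y_pred = classes[np.argmax(proba, axis=1)]
    return float(np.mean(y_pred == y_val))

def surrogate_screened_step(archive_X, archive_y, candidates, true_fitness):
    # A generic surrogate-screening step: the surrogate is fit on
    # previously evaluated solutions, and only the candidate with the
    # best predicted fitness receives a real, expensive evaluation.
    surrogate = RandomForestRegressor(n_estimators=100)
    surrogate.fit(np.asarray(archive_X), np.asarray(archive_y))
    predicted = surrogate.predict(np.asarray(candidates))
    best = int(np.argmax(predicted))
    return candidates[best], true_fitness(candidates[best])

In the paper itself the surrogate is integrated into a P3-like GOMEA variant; the random forest above is only a stand-in, and the sketch illustrates just the division of labor between cheap surrogate predictions and scarce true evaluations.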
Related papers
- Effective anytime algorithm for multiobjective combinatorial optimization problems [3.2061579211871383]
A set of efficient solutions that is well-spread in the objective space is preferred, as it offers the decision maker a wide variety of trade-offs.
We propose a new exact algorithm for multiobjective combinatorial optimization that combines three novel ideas to enhance anytime behavior.
arXiv Detail & Related papers (2024-02-06T11:53:44Z)
- Dynamic Incremental Optimization for Best Subset Selection [15.8362578568708]
Best subset selection is considered the gold standard for many learning problems.
An efficient subset-dual algorithm is developed based on the primal and dual problem structures.
arXiv Detail & Related papers (2024-02-04T02:26:40Z)
- Accelerating Cutting-Plane Algorithms via Reinforcement Learning Surrogates [49.84541884653309]
A current standard approach to solving convex discrete optimization problems is the use of cutting-plane algorithms.
Despite the existence of a number of general-purpose cut-generating algorithms, large-scale discrete optimization problems continue to suffer from intractability.
We propose a method for accelerating cutting-plane algorithms via reinforcement learning.
arXiv Detail & Related papers (2023-07-17T20:11:56Z)
- Linearization Algorithms for Fully Composite Optimization [61.20539085730636]
This paper studies first-order algorithms for solving fully composite optimization problems over convex compact sets.
We leverage the structure of the objective by handling its differentiable and non-differentiable parts separately, linearizing only the smooth components.
arXiv Detail & Related papers (2023-02-24T18:41:48Z)
- Enhancing Machine Learning Model Performance with Hyper Parameter Optimization: A Comparative Study [0.0]
One of the most critical issues in machine learning is the selection of appropriate hyperparameters for training models.
Hyperparameter optimization (HPO) has recently become a popular focus of artificial intelligence research.
In this study, classical methods, such as grid search, random search, and Bayesian optimization, and population-based algorithms, such as genetic algorithms and particle swarm optimization, are discussed.
arXiv Detail & Related papers (2023-02-14T10:12:10Z)
- Faster Adaptive Momentum-Based Federated Methods for Distributed Composition Optimization [14.579475552088692]
We propose a class of faster federated composition optimization algorithms (i.e., MFCGD and AdaMFCGD) to solve nonconvex distributed composition problems.
In particular, our adaptive algorithm (i.e., AdaMFCGD) uses a unified adaptive matrix to flexibly incorporate various adaptive learning rates.
arXiv Detail & Related papers (2022-11-03T15:17:04Z)
- Neural Combinatorial Optimization: a New Player in the Field [69.23334811890919]
This paper presents a critical analysis of the incorporation of neural-network-based algorithms into the classical optimization framework.
A comprehensive study is carried out to analyse the fundamental aspects of such algorithms, including performance, transferability, computational cost, and generalization to larger-sized instances.
arXiv Detail & Related papers (2022-05-03T07:54:56Z)
- BiAdam: Fast Adaptive Bilevel Optimization Methods [104.96004056928474]
Bilevel optimization has attracted increased interest in machine learning due to its many applications.
We provide a useful analysis framework for both constrained and unconstrained bilevel optimization.
arXiv Detail & Related papers (2021-06-21T20:16:40Z)
- Meta Learning Black-Box Population-Based Optimizers [0.0]
We propose the use of meta-learning to infer population-based black-box optimizers.
We show that the meta-loss function encourages a learned algorithm to alter its search behavior so that it can easily fit into a new context.
arXiv Detail & Related papers (2021-03-05T08:13:25Z)
- Towards Optimally Efficient Tree Search with Deep Learning [76.64632985696237]
This paper investigates the classical integer least-squares problem, which estimates integer signals from linear models.
The problem is NP-hard and often arises in diverse applications such as signal processing, bioinformatics, communications and machine learning.
We propose a general hyper-accelerated tree search (HATS) algorithm that employs a deep neural network to estimate the optimal heuristic for the underlying simplified memory-bounded A* algorithm.
arXiv Detail & Related papers (2021-01-07T08:00:02Z)
- Stochastic Flows and Geometric Optimization on the Orthogonal Group [52.50121190744979]
We present a new class of geometrically-driven optimization algorithms on the orthogonal group $O(d)$.
We show that our methods can be applied in various fields of machine learning, including deep, convolutional, and recurrent neural networks, reinforcement learning, normalizing flows, and metric learning.
arXiv Detail & Related papers (2020-03-30T15:37:50Z)