Parameterless Gene-pool Optimal Mixing Evolutionary Algorithms
- URL: http://arxiv.org/abs/2109.05259v1
- Date: Sat, 11 Sep 2021 11:35:14 GMT
- Title: Parameterless Gene-pool Optimal Mixing Evolutionary Algorithms
- Authors: Arkadiy Dushatskiy, Marco Virgolin, Anton Bouter, Dirk Thierens and
Peter A. N. Bosman
- Abstract summary: We present the latest version of, and propose substantial enhancements to, the Gene-pool Optimal Mixing Evolutionary Algorithm (GOMEA).
We show that the improved GOMEA and the new CGOMEA significantly outperform the original GOMEA and DSMGA-II on most problems.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: When it comes to solving optimization problems with evolutionary algorithms
(EAs) in a reliable and scalable manner, detecting and exploiting linkage
information, i.e., dependencies between variables, can be key. In this article,
we present the latest version of, and propose substantial enhancements to, the
Gene-pool Optimal Mixing Evolutionary Algorithm (GOMEA): an EA explicitly
designed to estimate and exploit linkage information. We begin by performing a
large-scale search over several GOMEA design choices, to understand what
matters most and obtain a generally best-performing version of the algorithm.
Next, we introduce a novel version of GOMEA, called CGOMEA, where linkage-based
variation is further improved by filtering solution mating based on conditional
dependencies. We compare our latest version of GOMEA, the newly introduced
CGOMEA, and another contending linkage-aware EA, DSMGA-II, in an extensive
experimental evaluation, involving a benchmark set of 9 black-box problems that
can only be solved efficiently if their inherent dependency structure is
unveiled and exploited. Finally, in an attempt to make EAs more usable and
resilient to parameter choices, we investigate the performance of different
automatic population management schemes for GOMEA and CGOMEA, de facto making
the EAs parameterless. Our results show that the improved GOMEA and CGOMEA significantly
outperform the original GOMEA and DSMGA-II on most problems, setting a new
state of the art for the field.
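For readers unfamiliar with GOMEA's core variation operator, the following is a minimal, illustrative Python sketch of one gene-pool optimal mixing (GOM) pass, assuming a maximization problem, a bit-string encoding, and a precomputed family of subsets (FOS) standing in for the learned linkage model. The linkage-learning step itself, CGOMEA's conditional-dependency filtering, and the parameterless population-management schemes are all omitted; function and variable names here are ours, not from the paper's code.

```python
import random

def gom_pass(solution, fitness, population, fos, evaluate):
    """One gene-pool optimal mixing pass over a single solution.

    For each linkage set in the FOS, copy the corresponding variables
    from a randomly chosen donor; keep the change only if the fitness
    does not decrease (maximization), otherwise revert.
    """
    for linkage_set in fos:
        donor = random.choice(population)          # any member may donate genes
        if all(solution[i] == donor[i] for i in linkage_set):
            continue                               # donor brings nothing new
        backup = {i: solution[i] for i in linkage_set}
        for i in linkage_set:
            solution[i] = donor[i]
        new_fitness = evaluate(solution)
        if new_fitness >= fitness:                 # accept equal-or-better moves
            fitness = new_fitness
        else:
            for i, v in backup.items():            # revert the failed mix
                solution[i] = v
    return solution, fitness

# Illustrative use on OneMax with a univariate FOS (each variable alone);
# a real GOMEA would instead learn a linkage tree from the population.
if __name__ == "__main__":
    n, pop_size = 20, 50
    onemax = lambda s: sum(s)
    population = [[random.randint(0, 1) for _ in range(n)]
                  for _ in range(pop_size)]
    fos = [[i] for i in range(n)]
    best = population[0][:]
    best, best_fitness = gom_pass(best, onemax(best), population, fos, onemax)
    print(best_fitness)
```

In the full algorithm the FOS is typically a linkage tree re-learned every generation, and CGOMEA additionally filters which donors may contribute to a given linkage set based on conditional dependencies, as described in the abstract above.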
Related papers
- Analyzing the Runtime of the Gene-pool Optimal Mixing Evolutionary Algorithm (GOMEA) on the Concatenated Trap Function [2.038038953957366]
GOMEA is an evolutionary algorithm that leverages linkage learning to efficiently exploit problem structure.
We show that GOMEA can solve the problem in $O(m^3 2^k)$ with high probability, where $m$ is the number of subfunctions and $k$ is the subfunction length.
This is a significant speedup compared to the (1+1) Evolutionary Algorithm (EA), which requires $O(\ln(m)(mk)^k)$ expected evaluations; a brief worked comparison of these bounds follows the list below.
arXiv Detail & Related papers (2024-07-11T09:37:21Z) - Unleashing the Potential of Large Language Models as Prompt Optimizers: An Analogical Analysis with Gradient-based Model Optimizers [108.72225067368592]
We propose a novel perspective to investigate the design of large language models (LLMs)-based prompts.
We identify two pivotal factors in model parameter learning: update direction and update method.
In particular, we borrow the theoretical framework and learning methods from gradient-based optimization to design improved strategies.
arXiv Detail & Related papers (2024-02-27T15:05:32Z) - Fitness-based Linkage Learning and Maximum-Clique Conditional Linkage
Modelling for Gray-box Optimization with RV-GOMEA [0.552480439325792]
In this work, we combine fitness-based linkage learning and conditional linkage modelling in RV-GOMEA.
We find that the new RV-GOMEA not only performs best on most problems, but also that the overhead of learning the conditional linkage models during optimization is often negligible.
arXiv Detail & Related papers (2024-02-16T15:28:27Z) - A Joint Python/C++ Library for Efficient yet Accessible Black-Box and
Gray-Box Optimization with GOMEA [0.0]
We introduce the GOMEA library, making existing GOMEA code in C++ accessible through Python.
We show its performance in both Gray-Box Optimization (GBO) and Black-Box Optimization (BBO).
arXiv Detail & Related papers (2023-05-10T15:28:31Z) - An Empirical Evaluation of Zeroth-Order Optimization Methods on
AI-driven Molecule Optimization [78.36413169647408]
We study the effectiveness of various ZO optimization methods for optimizing molecular objectives.
We show the advantages of ZO sign-based gradient descent (ZO-signGD).
We demonstrate the potential effectiveness of ZO optimization methods on widely used benchmark tasks from the Guacamol suite.
arXiv Detail & Related papers (2022-10-27T01:58:10Z) - Applying Autonomous Hybrid Agent-based Computing to Difficult
Optimization Problems [56.821213236215634]
This paper focuses on a proposed hybrid version of the Evolutionary Multi-Agent System (EMAS).
It covers selection and introduction of a number of hybrid operators and defining rules for starting the hybrid steps of the main algorithm.
Those hybrid steps leverage existing, well-known, and demonstrably efficient metaheuristics, and integrate their results into the main algorithm.
arXiv Detail & Related papers (2022-10-24T13:28:35Z) - Accelerating the Evolutionary Algorithms by Gaussian Process Regression
with $\epsilon$-greedy acquisition function [2.7716102039510564]
We propose a novel method to estimate the elite individual, which shows broad promise for accelerating the convergence of optimization.
arXiv Detail & Related papers (2022-10-13T07:56:47Z) - Result Diversification by Multi-objective Evolutionary Algorithms with
Theoretical Guarantees [94.72461292387146]
We propose to reformulate the result diversification problem as a bi-objective search problem, and solve it with a multi-objective evolutionary algorithm (EA).
We theoretically prove that the GSEMO can achieve the optimal polynomial-time approximation ratio of $1/2$.
When the objective function changes dynamically, the GSEMO can maintain this approximation ratio in polynomial running time, addressing an open question posed by Borodin et al.
arXiv Detail & Related papers (2021-10-18T14:00:22Z) - GEO: Enhancing Combinatorial Optimization with Classical and Quantum
Generative Models [62.997667081978825]
We introduce a new framework that leverages machine learning models known as generative models to solve optimization problems.
We focus on a quantum-inspired version of GEO relying on tensor-network Born machines.
We show its superior performance when the goal is to find the best minimum given a fixed budget for the number of function calls.
arXiv Detail & Related papers (2021-01-15T18:18:38Z) - Cauchy-Schwarz Regularized Autoencoder [68.80569889599434]
Variational autoencoders (VAE) are a powerful and widely-used class of generative models.
We introduce a new constrained objective based on the Cauchy-Schwarz divergence, which can be computed analytically for GMMs.
Our objective improves upon variational auto-encoding models in density estimation, unsupervised clustering, semi-supervised learning, and face analysis.
arXiv Detail & Related papers (2021-01-06T17:36:26Z)
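As promised in the GOMEA trap-function entry above, here is a brief worked comparison of the two runtime bounds, using the reconstructed forms $O(m^3 2^k)$ and $O(\ln(m)(mk)^k)$; the concrete values of $m$ and $k$ are purely illustrative, not taken from that paper.

```latex
% Illustrative comparison for m = 100 subfunctions of length k = 5:
\[
m^3\,2^k = 100^3 \cdot 2^5 \approx 3.2 \times 10^{7},
\qquad
\ln(m)\,(mk)^k \approx 4.6 \cdot 500^5 \approx 1.4 \times 10^{14},
\]
% a gap of roughly
\[
\frac{1.4 \times 10^{14}}{3.2 \times 10^{7}} \approx 4.5 \times 10^{6}
\]
% in expected evaluations, reflecting that the (1+1) EA's bound is
% exponential in k with base mk, while GOMEA's is only polynomial in m
% with a 2^k factor.
```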