EOS: a Parallel, Self-Adaptive, Multi-Population Evolutionary Algorithm
for Constrained Global Optimization
- URL: http://arxiv.org/abs/2007.04681v2
- Date: Mon, 13 Jul 2020 10:27:23 GMT
- Title: EOS: a Parallel, Self-Adaptive, Multi-Population Evolutionary Algorithm
for Constrained Global Optimization
- Authors: Lorenzo Federici, Boris Benedikter, Alessandro Zavoli
- Abstract summary: EOS is a global optimization algorithm for constrained and unconstrained problems of real-valued variables.
It implements a number of improvements to the well-known Differential Evolution (DE) algorithm.
Results prove that EOS is capable of achieving increased performance compared to state-of-the-art single-population self-adaptive DE algorithms.
- Score: 68.8204255655161
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents the main characteristics of the evolutionary optimization
code named EOS, Evolutionary Optimization at Sapienza, and its successful
application to challenging, real-world space trajectory optimization problems.
EOS is a global optimization algorithm for constrained and unconstrained
problems of real-valued variables. It implements a number of improvements to
the well-known Differential Evolution (DE) algorithm, namely, a self-adaptation
of the control parameters, an epidemic mechanism, a clustering technique, an
$\varepsilon$-constrained method to deal with nonlinear constraints, and a
synchronous island-model to handle multiple populations in parallel. The
results reported prove that EOS is capable of achieving increased performance
compared to state-of-the-art single-population self-adaptive DE algorithms when
applied to high-dimensional or highly-constrained space trajectory optimization
problems.
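EOS itself is not reproduced here; the following minimal Python sketch (all function names, bounds, and parameter values are illustrative assumptions, not the authors' code) shows two of the listed ingredients working together: jDE-style self-adaptation of the DE control parameters F and CR, and an $\varepsilon$-constrained selection rule that compares solutions by objective value once their constraint violation falls below a shrinking tolerance.

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):
    # toy objective standing in for a trajectory cost function
    return float(np.sum(x ** 2))

def violation(x):
    # toy nonlinear constraint g(x) = 1 - sum(x) <= 0
    return max(0.0, 1.0 - float(np.sum(x)))

def eps_better(f1, v1, f2, v2, eps):
    # epsilon-constrained comparison: below the tolerance eps, solutions are
    # compared by objective value; otherwise by constraint violation
    if (v1 <= eps and v2 <= eps) or v1 == v2:
        return f1 < f2
    return v1 < v2

def de_eps(dim=5, pop_size=30, gens=300, lo=-5.0, hi=5.0):
    pop = rng.uniform(lo, hi, (pop_size, dim))
    F = np.full(pop_size, 0.5)    # per-individual mutation factor
    CR = np.full(pop_size, 0.9)   # per-individual crossover rate
    fit = np.array([sphere(x) for x in pop])
    vio = np.array([violation(x) for x in pop])
    eps0 = np.percentile(vio, 80)  # initial tolerance taken from the population
    for g in range(gens):
        eps = eps0 * max(0.0, 1.0 - g / (0.7 * gens)) ** 4  # tolerance shrinks to 0
        for i in range(pop_size):
            # jDE-style self-adaptation: occasionally resample F and CR
            Fi = rng.uniform(0.1, 1.0) if rng.random() < 0.1 else F[i]
            CRi = rng.random() if rng.random() < 0.1 else CR[i]
            idx = [j for j in range(pop_size) if j != i]
            a, b, c = pop[rng.choice(idx, 3, replace=False)]
            mutant = a + Fi * (b - c)                    # DE/rand/1 mutation
            cross = rng.random(dim) < CRi
            cross[rng.integers(dim)] = True              # keep at least one mutant gene
            trial = np.clip(np.where(cross, mutant, pop[i]), lo, hi)
            ft, vt = sphere(trial), violation(trial)
            if eps_better(ft, vt, fit[i], vio[i], eps):  # eps-constrained selection
                pop[i], fit[i], vio[i] = trial, ft, vt
                F[i], CR[i] = Fi, CRi                    # keep parameters that worked
    best = min(range(pop_size), key=lambda i: (vio[i], fit[i]))
    return pop[best], fit[best], vio[best]

x_best, f_best, v_best = de_eps()
print(f"best objective {f_best:.4g} with violation {v_best:.3g}")
```

The clustering, epidemic mechanism, and synchronous island model are omitted; in EOS they would run several such populations in parallel with periodic migration of individuals.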
Related papers
- Integrating Chaotic Evolutionary and Local Search Techniques in Decision Space for Enhanced Evolutionary Multi-Objective Optimization [1.8130068086063336]
This paper focuses on both Single-Objective Multi-Modal Optimization (SOMMOP) and Multi-Objective Optimization (MOO).
In SOMMOP, we integrate chaotic evolution with niching techniques, as well as Persistence-Based Clustering combined with Gaussian mutation.
For MOO, we extend these methods into a comprehensive framework that incorporates Uncertainty-Based Selection, Adaptive Tuning, and introduces a radius (R) concept in deterministic crowding.
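The summary above only names its ingredients; as a rough illustration, here is a speculative sketch of Gaussian mutation combined with a radius-limited deterministic-crowding replacement, one plausible reading of the radius (R) concept. The fitness function, radius, and mutation scale are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

def fitness(x):
    # multi-modal toy objective (maximization): two Gaussian peaks
    return np.exp(-np.sum((x - 2) ** 2)) + 0.8 * np.exp(-np.sum((x + 2) ** 2))

def crowding_step(pop, sigma=0.3, R=1.5):
    # one generation of Gaussian mutation with radius-limited deterministic
    # crowding: each child competes only with its nearest parent, and only
    # if that parent lies within radius R
    fit = np.array([fitness(x) for x in pop])
    for i in range(len(pop)):
        child = pop[i] + rng.normal(0.0, sigma, pop.shape[1])  # Gaussian mutation
        d = np.linalg.norm(pop - child, axis=1)
        j = int(np.argmin(d))                    # nearest parent
        if d[j] <= R and fitness(child) > fit[j]:
            pop[j] = child                       # replacement stays within the niche
            fit[j] = fitness(child)
    return pop

pop = rng.uniform(-4, 4, (20, 2))
for _ in range(200):
    pop = crowding_step(pop)
# niches around both peaks are typically preserved
print(np.round(pop[np.argsort(pop[:, 0])][:5], 2))
```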
arXiv Detail & Related papers (2024-11-12T15:18:48Z)
- Modified CMA-ES Algorithm for Multi-Modal Optimization: Incorporating Niching Strategies and Dynamic Adaptation Mechanism [0.03495246564946555]
This study modifies the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) algorithm for multi-modal optimization problems.
The enhancements focus on addressing the challenges of multiple global minima, improving the algorithm's ability to maintain diversity and explore complex fitness landscapes.
We incorporate niching strategies and dynamic adaptation mechanisms to refine the algorithm's performance in identifying and optimizing multiple global optima.
arXiv Detail & Related papers (2024-07-01T03:41:39Z)
- Evolutionary Alternating Direction Method of Multipliers for Constrained Multi-Objective Optimization with Unknown Constraints [17.392113376816788]
Constrained multi-objective optimization problems (CMOPs) pervade real-world applications in science, engineering, and design.
We present a first-of-its-kind evolutionary optimization framework, inspired by the principles of the alternating direction method of multipliers, which decouples objective and constraint functions.
Our framework tackles CMOPs with unknown constraints by reformulating the original problem into an additive form of two subproblems, each of which is allotted a dedicated evolutionary population.
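As a loose, single-objective caricature of that decoupling (the actual framework is multi-objective and considerably more elaborate; every function and constant below is invented), one population can minimize the objective and the other the constraint violation, tied together by an ADMM-style dual update:

```python
import numpy as np

rng = np.random.default_rng(2)

def f(x):   # objective to minimize
    return float(np.sum((x - 1.0) ** 2))

def cv(x):  # violation of the constraint g(x) = sum(x) - 1 <= 0
    return max(0.0, float(np.sum(x)) - 1.0)

def evolve(pop, score, gens=20, sigma=0.2):
    # tiny (mu+lambda)-style evolutionary improvement of `score`
    for _ in range(gens):
        children = pop + rng.normal(0.0, sigma, pop.shape)
        both = np.vstack([pop, children])
        both = both[np.argsort([score(x) for x in both])]
        pop = both[: len(pop)]
    return pop

dim, mu, rho = 3, 10, 5.0
X = rng.normal(0, 1, (mu, dim))  # population for the objective subproblem
Z = rng.normal(0, 1, (mu, dim))  # population for the constraint subproblem
u = np.zeros(dim)                # scaled dual variable

for it in range(30):
    zb = Z[0]
    X = evolve(X, lambda x: f(x) + rho / 2 * np.sum((x - zb + u) ** 2))
    xb = X[0]
    Z = evolve(Z, lambda z: cv(z) + rho / 2 * np.sum((xb - z + u) ** 2))
    u += X[0] - Z[0]             # ADMM-style dual update ties the populations together

print("x =", np.round(X[0], 3), " f =", round(f(X[0]), 4), " violation =", round(cv(X[0]), 4))
```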
arXiv Detail & Related papers (2024-01-02T00:38:20Z)
- Analyzing and Enhancing the Backward-Pass Convergence of Unrolled Optimization [50.38518771642365]
The integration of constrained optimization models as components in deep networks has led to promising advances on many specialized learning tasks.
A central challenge in this setting is backpropagation through the solution of an optimization problem, which often lacks a closed form.
This paper provides theoretical insights into the backward pass of unrolled optimization, showing that it is equivalent to the solution of a linear system by a particular iterative method.
A system called Folded Optimization is proposed to construct more efficient backpropagation rules from unrolled solver implementations.
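A scalar toy makes the observation concrete: for a fixed-point iteration $x_{k+1} = f(x_k, \theta)$, backpropagating through $K$ unrolled steps at convergence yields the $K$-term Neumann series for the implicit-function-theorem linear system $(1 - \partial f/\partial x)\,g = \partial f/\partial \theta$. The sketch below (not the paper's folded-optimization system; the map $f$ is chosen arbitrarily) checks both quantities against finite differences:

```python
import numpy as np

theta = 0.9

def f(x, th):
    return np.cos(th * x)  # a contraction for moderate th

# forward pass: unroll the fixed-point iteration x_{k+1} = f(x_k, theta)
x = 0.5
for _ in range(100):
    x = f(x, theta)        # converges to x* with x* = cos(theta * x*)

# partial derivatives of f at the fixed point
df_dx = -theta * np.sin(theta * x)
df_dth = -x * np.sin(theta * x)

# implicit-function-theorem gradient: solve (1 - df_dx) * g = df_dth
g_exact = df_dth / (1.0 - df_dx)

# backprop through K unrolled steps (once the iterates have converged)
# equals the K-term Neumann series, which approaches g_exact as K grows
g_unrolled = sum(df_dx ** k * df_dth for k in range(20))

# finite-difference check of dx*/dtheta
def solve(th):
    y = 0.5
    for _ in range(200):
        y = f(y, th)
    return y

fd = (solve(theta + 1e-6) - solve(theta - 1e-6)) / 2e-6
print(g_exact, g_unrolled, fd)  # all three should agree closely
```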
arXiv Detail & Related papers (2023-12-28T23:15:18Z)
- Federated Conditional Stochastic Optimization [110.513884892319]
Conditional stochastic optimization has found applications in a wide range of machine learning tasks, such as invariant learning, AUPRC maximization, and MAML.
This paper proposes algorithms for conditional stochastic optimization in the federated learning setting.
arXiv Detail & Related papers (2023-10-04T01:47:37Z)
- Advancements in Optimization: Adaptive Differential Evolution with Diversification Strategy [0.0]
The study employs single-objective optimization in a two-dimensional space, running ADEDS on each benchmark function for multiple iterations.
ADEDS consistently outperforms standard DE for a variety of optimization challenges, including functions with numerous local optima, plate-shaped, valley-shaped, stretched-shaped, and noisy functions.
arXiv Detail & Related papers (2023-10-02T10:05:41Z)
- An Empirical Evaluation of Zeroth-Order Optimization Methods on AI-driven Molecule Optimization [78.36413169647408]
We study the effectiveness of various ZO optimization methods for optimizing molecular objectives.
We show the advantages of ZO sign-based gradient descent (ZO-signGD).
We demonstrate the potential effectiveness of ZO optimization methods on widely used benchmark tasks from the Guacamol suite.
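As a hedged illustration of the method family (the molecular objectives from the Guacamol suite are replaced here by a quadratic stand-in, and all step sizes are invented), ZO-signGD estimates a gradient from random-direction finite differences and steps along its sign:

```python
import numpy as np

rng = np.random.default_rng(3)

def loss(x):
    return float(np.sum(x ** 2))  # stand-in for a black-box molecular objective

def zo_sign_gd(x, steps=200, mu=1e-2, lr=0.05, q=10):
    # zeroth-order sign-based gradient descent: average q random-direction
    # finite-difference estimates, then step along the sign of the estimate
    for _ in range(steps):
        g = np.zeros_like(x)
        for _ in range(q):
            u = rng.normal(size=x.shape)
            g += (loss(x + mu * u) - loss(x)) / mu * u  # directional estimate
        x = x - lr * np.sign(g / q)                     # sign step
    return x

x = zo_sign_gd(rng.normal(size=5))
print(round(loss(x), 4))  # should end up near 0
```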
arXiv Detail & Related papers (2022-10-27T01:58:10Z)
- Accelerating the Evolutionary Algorithms by Gaussian Process Regression with $\epsilon$-greedy acquisition function [2.7716102039510564]
We propose a novel method to estimate the elite individual and thereby accelerate the convergence of optimization.
The proposal has broad prospects for application in surrogate-assisted evolutionary algorithms.
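A minimal sketch of the acquisition idea, assuming a plain GP posterior mean as the surrogate and a fixed $\epsilon$ (the paper's actual surrogate, acquisition details, and evolutionary loop differ): with probability $\epsilon$ a random candidate is evaluated, otherwise the candidate the surrogate predicts to be elite.

```python
import numpy as np

rng = np.random.default_rng(4)

def f(x):  # expensive objective (minimization)
    return float(np.sum(x ** 2))

def rbf(A, B, ls=1.0):
    # squared-exponential kernel between two point sets
    d2 = np.sum(A ** 2, 1)[:, None] + np.sum(B ** 2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-0.5 * d2 / ls ** 2)

def gp_mean(Xtr, ytr, Xte):
    # GP posterior mean with a small jitter for numerical stability
    K = rbf(Xtr, Xtr) + 1e-6 * np.eye(len(Xtr))
    return rbf(Xte, Xtr) @ np.linalg.solve(K, ytr)

# surrogate-assisted selection of the next individual to evaluate
X = rng.uniform(-3, 3, (8, 2))
y = np.array([f(x) for x in X])
eps = 0.2
for _ in range(30):
    cand = rng.uniform(-3, 3, (50, 2))               # offspring candidates
    if rng.random() < eps:
        pick = cand[rng.integers(len(cand))]         # explore: random candidate
    else:
        pick = cand[np.argmin(gp_mean(X, y, cand))]  # exploit: GP-predicted elite
    X = np.vstack([X, pick])
    y = np.append(y, f(pick))
print("best so far:", round(y.min(), 4))
```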
arXiv Detail & Related papers (2022-10-13T07:56:47Z)
- Adaptive pruning-based optimization of parameterized quantum circuits [62.997667081978825]
Variational hybrid quantum-classical algorithms are powerful tools to maximize the use of Noisy Intermediate-Scale Quantum (NISQ) devices.
We propose a strategy for the ansatze used in variational quantum algorithms, which we call "Parameter-Efficient Circuit Training" (PECT).
Instead of optimizing all of the ansatz parameters at once, PECT launches a sequence of variational algorithms.
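PECT concerns parameterized quantum circuits; the snippet below only illustrates the "sequence of variational optimizations" idea on a classical stand-in cost (everything here is invented for the example), optimizing one block of parameters at a time while freezing the rest:

```python
import numpy as np

rng = np.random.default_rng(5)

def cost(theta):
    # classical stand-in for a variational-circuit expectation value
    return float(np.sum(np.cos(theta) ** 2))

def optimize_block(theta, idx, steps=50, lr=0.2, h=1e-4):
    # gradient-descend only the parameters in `idx`, freezing the rest
    for _ in range(steps):
        g = np.zeros_like(theta)
        for i in idx:
            e = np.zeros_like(theta)
            e[i] = h
            g[i] = (cost(theta + e) - cost(theta - e)) / (2 * h)
        theta = theta - lr * g
    return theta

theta = rng.uniform(0, np.pi, 12)
blocks = np.array_split(np.arange(len(theta)), 4)  # 4 parameter subsets
for b in blocks:                                   # a sequence of small optimizations
    theta = optimize_block(theta, b)
# this toy cost is separable, so one sweep suffices; a real
# circuit with coupled parameters would need repeated sweeps
print(round(cost(theta), 6))  # near 0 when every cos^2 term vanishes
```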
arXiv Detail & Related papers (2020-10-01T18:14:11Z)
- Adaptivity of Stochastic Gradient Methods for Nonconvex Optimization [71.03797261151605]
Adaptivity is an important yet under-studied property in modern optimization theory.
Our algorithm is proved to achieve the best-available convergence rate for non-PL objectives while simultaneously outperforming existing algorithms for PL objectives.
arXiv Detail & Related papers (2020-02-13T05:42:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the list (including all information) and is not responsible for any consequences of its use.