Self-Adjusting Evolutionary Algorithms for Multimodal Optimization
- URL: http://arxiv.org/abs/2004.03266v2
- Date: Tue, 2 Jun 2020 12:01:29 GMT
- Title: Self-Adjusting Evolutionary Algorithms for Multimodal Optimization
- Authors: Amirhossein Rajabi and Carsten Witt
- Abstract summary: Self-adjusting and self-adaptive mechanisms can provably outperform static settings in evolutionary algorithms for binary search spaces.
We suggest a mechanism called stagnation detection that can be added as a module to existing evolutionary algorithms.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent theoretical research has shown that self-adjusting and self-adaptive
mechanisms can provably outperform static settings in evolutionary algorithms
for binary search spaces. However, the vast majority of these studies focuses
on unimodal functions which do not require the algorithm to flip several bits
simultaneously to make progress. In fact, existing self-adjusting algorithms
are not designed to detect local optima and do not have any obvious benefit to
cross large Hamming gaps.
We suggest a mechanism called stagnation detection that can be added as a
module to existing evolutionary algorithms (both with and without prior
self-adjusting algorithms). Added to a simple (1+1) EA, we prove an expected
runtime on the well-known Jump benchmark that corresponds to an asymptotically
optimal parameter setting and outperforms other mechanisms for multimodal
optimization like heavy-tailed mutation. We also investigate the module in the
context of a self-adjusting (1+$\lambda$) EA and show that it combines the
previous benefits of this algorithm on unimodal problems with more efficient
multimodal optimization.
To explore the limitations of the approach, we additionally present an
example where both self-adjusting mechanisms, including stagnation detection,
do not help to find a beneficial setting of the mutation rate. Finally, we
investigate our module for stagnation detection experimentally.
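To make the mechanism concrete, here is a minimal Python sketch of stagnation detection added to a (1+1) EA, together with the standard Jump benchmark mentioned in the abstract. The threshold schedule $2(en/r)^r \ln(nR)$ and the parameter $R$ follow the paper; the function names, the stopping criterion, and the acceptance rule for equally fit offspring are illustrative simplifications, not the authors' exact pseudocode.

import math
import random

def jump(x, m):
    # Standard Jump_m benchmark: OneMax with a fitness valley of width m
    # directly in front of the all-ones optimum, so the final step requires
    # flipping m bits simultaneously.
    ones, n = sum(x), len(x)
    return m + ones if ones <= n - m or ones == n else n - ones

def sd_one_plus_one_ea(f, n, R, target, max_evals=10**6):
    # (1+1) EA with a stagnation-detection module (sketch). A counter u
    # tracks unsuccessful steps at the current mutation strength r; once u
    # exceeds 2*(e*n/r)^r * ln(n*R), improvements at strength r have become
    # unlikely, so r is increased. R controls the failure probability.
    x = [random.randint(0, 1) for _ in range(n)]
    fx = f(x)
    r, u = 1, 0
    for _ in range(max_evals):
        if fx >= target:
            break
        # standard bit mutation: flip each bit independently with probability r/n
        y = [b ^ (random.random() < r / n) for b in x]
        fy = f(y)
        u += 1
        if fy > fx:                # strict improvement: accept and reset strength
            x, fx, r, u = y, fy, 1, 0
        elif fy == fx and r == 1:  # allow plateau moves only at strength 1
            x = y
        if u > 2 * (math.e * n / r) ** r * math.log(n * R):
            r, u = min(r + 1, n // 2), 0  # stagnation detected: escalate strength
    return x, fx

# Example: crossing a Hamming gap of width m = 3 on Jump_m with n = 50 bits;
# the optimum has fitness n + m.
n, m = 50, 3
best, value = sd_one_plus_one_ea(lambda z: jump(z, m), n, R=n**3, target=n + m)

Note that the strength resets to $r=1$ after every improvement, so the algorithm behaves like the plain (1+1) EA on unimodal stretches and only escalates the mutation rate when progress stalls; this is how the module combines efficiency on unimodal problems with the ability to cross Hamming gaps.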
Related papers
- Performance Evaluation of Evolutionary Algorithms for Analog Integrated Circuit Design Optimisation [0.0]
An automated sizing approach for analog circuits is presented in this paper.
A targeted search of the search space has been implemented using a particle generation function and a repair-bounds function.
The algorithms are tuned and modified so that they converge to better solutions.
arXiv Detail & Related papers (2023-10-19T03:26:36Z)
- Stochastic Optimization for Non-convex Problem with Inexact Hessian Matrix, Gradient, and Function [99.31457740916815]
Trust-region (TR) methods and adaptive regularization using cubics (ARC) have proven to have some very appealing theoretical properties.
We show that TR and ARC methods can simultaneously provide inexact computations of the Hessian, gradient, and function values.
arXiv Detail & Related papers (2023-10-18T10:29:58Z)
- Accelerated First-Order Optimization under Nonlinear Constraints [73.2273449996098]
We exploit analogies between first-order algorithms for constrained optimization and non-smooth dynamical systems to design a new class of accelerated first-order algorithms.
An important property of these algorithms is that constraints are expressed in terms of velocities instead of positions.
arXiv Detail & Related papers (2023-02-01T08:50:48Z)
- Efficient Non-Parametric Optimizer Search for Diverse Tasks [93.64739408827604]
We present the first efficient, scalable and general framework that can directly search on the tasks of interest.
Inspired by the innate tree structure of the underlying math expressions, we re-arrange the spaces into a super-tree.
We adopt an adaptation of the Monte Carlo method to tree search, equipped with rejection sampling and equivalent-form detection.
arXiv Detail & Related papers (2022-09-27T17:51:31Z)
- Tree ensemble kernels for Bayesian optimization with known constraints over mixed-feature spaces [54.58348769621782]
Tree ensembles can be well-suited for black-box optimization tasks such as algorithm tuning and neural architecture search.
Two well-known challenges in using tree ensembles for black-box optimization are (i) effectively quantifying model uncertainty for exploration and (ii) optimizing over the piece-wise constant acquisition function.
Our framework performs as well as state-of-the-art methods for unconstrained black-box optimization over continuous/discrete features and outperforms competing methods for problems combining mixed-variable feature spaces and known input constraints.
arXiv Detail & Related papers (2022-07-02T16:59:37Z)
- A Simple Evolutionary Algorithm for Multi-modal Multi-objective Optimization [0.0]
We introduce a steady-state evolutionary algorithm for solving multi-modal, multi-objective optimization problems (MMOPs).
We report its performance on 21 MMOPs from widely used benchmark test suites, under a low computational budget of 1000 function evaluations.
arXiv Detail & Related papers (2022-01-18T03:31:11Z)
- On Constraints in First-Order Optimization: A View from Non-Smooth Dynamical Systems [99.59934203759754]
We introduce a class of first-order methods for smooth constrained optimization.
A distinctive feature of our approach is that projections or optimizations over the entire feasible set are avoided.
The resulting algorithmic procedure is simple to implement even when constraints are nonlinear.
arXiv Detail & Related papers (2021-07-17T11:45:13Z)
- Optimal Static Mutation Strength Distributions for the $(1+\lambda)$ Evolutionary Algorithm on OneMax [1.0965065178451106]
We show that, for large enough population sizes, the optimal static mutation strength distributions may be surprisingly complicated and counter-intuitive.
arXiv Detail & Related papers (2021-02-09T16:56:25Z)
- Adaptive and Universal Algorithms for Variational Inequalities with Optimal Convergence [29.189409618561964]
We develop new adaptive algorithms for variational inequalities with monotone operators.
Our algorithms automatically adapt to unknown problem parameters.
We show that our algorithms are universal and simultaneously achieve the optimal convergence rates.
arXiv Detail & Related papers (2020-10-15T14:44:26Z)
- Optimizing Monotone Chance-Constrained Submodular Functions Using Evolutionary Multi-Objective Algorithms [11.807734722701786]
Here the constraint involves stochastic components and can only be violated with a small probability of $\alpha$.
We show that the algorithm GSEMO obtains the same worst-case performance guarantees for monotone submodular functions.
Our experimental results show that the use of evolutionary multi-objective algorithms leads to significant performance improvements.
arXiv Detail & Related papers (2020-06-20T00:17:44Z)
- Convergence of adaptive algorithms for weakly convex constrained optimization [59.36386973876765]
We prove the $\tilde{\mathcal{O}}(t^{-1/4})$ rate of convergence for the norm of the gradient of the Moreau envelope (recalled briefly after this list).
Our analysis works with a mini-batch size of $1$, constant first- and second-order moment parameters, and possibly unbounded optimization domains.
arXiv Detail & Related papers (2020-06-11T17:43:19Z)
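For context on the last entry above: the Moreau envelope of $f$ with parameter $\lambda > 0$ is the standard smoothing $f_\lambda(x) = \min_y \big( f(y) + \frac{1}{2\lambda}\|y - x\|^2 \big)$, and for weakly convex $f$ the gradient norm $\|\nabla f_\lambda(x)\|$ is the usual surrogate stationarity measure; the quoted result says this norm decays at rate $\tilde{\mathcal{O}}(t^{-1/4})$ after $t$ iterations. This definition is standard background, not specific to the cited paper.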