Emergence of Structural Bias in Differential Evolution
- URL: http://arxiv.org/abs/2105.04693v1
- Date: Mon, 10 May 2021 22:22:27 GMT
- Title: Emergence of Structural Bias in Differential Evolution
- Authors: Bas van Stein, Fabio Caraffini and Anna V. Kononova
- Abstract summary: Heuristic optimisation algorithms are in high demand due to the overwhelming number of complex optimisation problems.
Many algorithms show structural bias, either by being attracted to a certain region of the search space or by consistently avoiding regions of the search space.
We analyse the emergence of such structural bias for Differential Evolution (DE) configurations and, specifically, the effect of different mutation, crossover and correction strategies.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Heuristic optimisation algorithms are in high demand due to the overwhelming
number of complex optimisation problems that need to be solved. The complexity
of these problems is well beyond the boundaries of applicability of exact
optimisation algorithms; such problems therefore require modern heuristics to find
feasible solutions quickly. These heuristics and their effects are almost
always evaluated and explained by particular problem instances. In previous
works, it has been shown that many such algorithms show structural bias, either
by being attracted to a certain region of the search space or by
consistently avoiding regions of the search space, on a special test function
designed to ensure uniform 'exploration' of the domain. In this paper, we
analyse the emergence of such structural bias for Differential Evolution (DE)
configurations and, specifically, the effect of different mutation, crossover
and correction strategies. We also analyse the emergence of the structural bias
during the run-time of each algorithm. We conclude with recommendations of
which configurations should be avoided in order to run DE free of structural bias.
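The detection setup described in the abstract can be sketched in a few lines: run DE on an objective that returns i.i.d. uniform random values, so that no region of the domain is objectively preferable, and inspect the distribution of final best positions over many independent runs. The sketch below is a minimal illustration, not the authors' experimental code; the DE/rand/1/bin configuration with saturation correction and all parameter values are plain-vanilla choices assumed for this example.

```python
import numpy as np

def de_run(dim=5, pop_size=20, gens=50, F=0.5, CR=0.9, seed=0):
    """One DE/rand/1/bin run on a 'random' objective: every evaluation
    returns an i.i.d. uniform value, so no region of [0, 1]^dim is
    objectively better. Clustering of the returned best positions over
    many runs therefore reflects structural bias of the algorithm."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(size=(pop_size, dim))     # population in [0, 1]^dim
    fit = rng.uniform(size=pop_size)            # random fitness values
    for _ in range(gens):
        for i in range(pop_size):
            # rand/1 mutation: three distinct donors, all different from i
            donors = [j for j in range(pop_size) if j != i]
            a, b, c = rng.choice(donors, size=3, replace=False)
            mutant = pop[a] + F * (pop[b] - pop[c])
            # saturation correction: clip infeasible coordinates to bounds
            mutant = np.clip(mutant, 0.0, 1.0)
            # binomial crossover, one coordinate forced from the mutant
            mask = rng.uniform(size=dim) < CR
            mask[rng.integers(dim)] = True
            trial = np.where(mask, mutant, pop[i])
            trial_fit = rng.uniform()           # the objective is random again
            if trial_fit <= fit[i]:             # standard greedy selection
                pop[i], fit[i] = trial, trial_fit
    return pop[np.argmin(fit)]                  # best position of this run

# Final best positions over independent runs; an unbiased configuration
# should leave every coordinate roughly uniform on [0, 1].
finals = np.array([de_run(seed=s) for s in range(100)])
print(finals.mean(axis=0), finals.min(), finals.max())
```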
Related papers
- On Probabilistic Embeddings in Optimal Dimension Reduction [1.2085509610251701]
Dimension reduction algorithms are a crucial part of many data science pipelines.
Despite their wide utilization, many non-linear dimension reduction algorithms are poorly understood from a theoretical perspective.
arXiv Detail & Related papers (2024-08-05T12:46:21Z)
- Backpropagation of Unrolled Solvers with Folded Optimization [55.04219793298687]
The integration of constrained optimization models as components in deep networks has led to promising advances on many specialized learning tasks.
One typical strategy is algorithm unrolling, which relies on automatic differentiation through the operations of an iterative solver.
This paper provides theoretical insights into the backward pass of unrolled optimization, leading to a system for generating efficiently solvable analytical models of backpropagation.
arXiv Detail & Related papers (2023-01-28T01:50:42Z)
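As a toy illustration of the unrolling strategy mentioned in the summary above (a hand-derived chain rule stands in for an automatic-differentiation framework, and the quadratic inner problem is an assumption of this sketch, not the paper's setup), differentiating the solver's output means backpropagating through each of its iterations:

```python
# Inner problem: min_x f(x; theta) = 0.5 * (x - theta)^2, approximately
# solved by T unrolled gradient-descent steps x <- x - eta * (x - theta).
# Backpropagation through the unrolled solver applies the chain rule once
# per step: d x_{t+1} / d theta = (1 - eta) * d x_t / d theta + eta.
def unrolled_solver(theta, x0=0.0, eta=0.3, T=20):
    x, dx_dtheta = x0, 0.0
    for _ in range(T):
        dx_dtheta = (1.0 - eta) * dx_dtheta + eta   # derivative of one step
        x = x - eta * (x - theta)                   # the step itself
    return x, dx_dtheta

theta, target = 2.0, 5.0
x_T, dx_dtheta = unrolled_solver(theta)
outer_loss = 0.5 * (x_T - target) ** 2
dloss_dtheta = (x_T - target) * dx_dtheta           # outer chain rule
print(outer_loss, dloss_dtheta)                     # gradient w.r.t. theta
```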
- Adaptive Stochastic Optimisation of Nonconvex Composite Objectives [2.1700203922407493]
We propose and analyse a family of generalised composite mirror descent algorithms.
With adaptive step sizes, the proposed algorithms converge without requiring prior knowledge of the problem.
We exploit the low-dimensional structure of the decision sets for high-dimensional problems.
arXiv Detail & Related papers (2022-11-21T18:31:43Z)
- Exploring the Algorithm-Dependent Generalization of AUPRC Optimization with List Stability [107.65337427333064]
Optimization of the Area Under the Precision-Recall Curve (AUPRC) is a crucial problem for machine learning.
In this work, we present the first trial in the algorithm-dependent generalization of AUPRC optimization.
Experiments on three image retrieval datasets speak to the effectiveness and soundness of our framework.
arXiv Detail & Related papers (2022-09-27T09:06:37Z)
- The importance of being constrained: dealing with infeasible solutions in Differential Evolution and beyond [0.0]
We argue that results produced by an optimisation algorithm cannot be considered reproducible unless the algorithm fully specifies what should be done with solutions generated outside the domain.
Here, we demonstrate that, at least in algorithms based on Differential Evolution, this choice induces notably different behaviours.
We urge the field of optimisation to formalise and adopt the idea of a new algorithmic component in optimisers.
arXiv Detail & Related papers (2022-03-07T16:49:18Z)
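The algorithmic component in question is the correction applied to infeasible solutions. Below is a minimal sketch of three corrections commonly studied in the DE literature; the names and the [0, 1] domain are conventional choices assumed for illustration, not necessarily this paper's exact taxonomy.

```python
import numpy as np

def saturate(x, lo=0.0, hi=1.0):
    """Clip each infeasible coordinate to the nearest bound; repeated use
    tends to accumulate points exactly on the boundary."""
    return np.clip(x, lo, hi)

def toroidal(x, lo=0.0, hi=1.0):
    """Wrap around: the domain is treated as a torus."""
    return lo + np.mod(x - lo, hi - lo)

def mirror(x, lo=0.0, hi=1.0):
    """Reflect off the bounds until the point is feasible again."""
    span = hi - lo
    x = np.mod(x - lo, 2.0 * span)
    return lo + np.where(x > span, 2.0 * span - x, x)

x = np.array([-0.3, 0.4, 1.7])           # two coordinates are infeasible
print(saturate(x))                        # [0.  0.4 1. ]
print(toroidal(x))                        # [0.7 0.4 0.7]
print(mirror(x))                          # [0.3 0.4 0.3]
```

Each choice redistributes infeasible points differently, which is exactly why the correction strategy can induce or suppress structural bias.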
- Iterative regularization for low complexity regularizers [18.87017835436693]
Iterative regularization exploits the implicit bias of an optimization algorithm to regularize ill-posed problems.
We propose and study the first iterative regularization procedure able to handle biases described by non-smooth and non-strongly convex functionals.
arXiv Detail & Related papers (2022-02-01T14:09:00Z)
- Amortized Implicit Differentiation for Stochastic Bilevel Optimization [53.12363770169761]
We study a class of algorithms for solving bilevel optimization problems in both deterministic and stochastic settings.
We exploit a warm-start strategy to amortize the estimation of the exact gradient.
By using this framework, our analysis shows these algorithms to match the computational complexity of methods that have access to an unbiased estimate of the gradient.
arXiv Detail & Related papers (2021-11-29T15:10:09Z)
- Fractal Structure and Generalization Properties of Stochastic Optimization Algorithms [71.62575565990502]
We prove that the generalization error of an optimization algorithm can be bounded via the 'complexity' of the fractal structure that underlies its invariant measure.
We further specialize our results to specific problems (e.g., linear/logistic regression, one-hidden-layer neural networks) and algorithms.
arXiv Detail & Related papers (2021-06-09T08:05:36Z)
- The limits of min-max optimization algorithms: convergence to spurious non-critical sets [82.74514886461257]
Min-max optimization algorithms encounter far greater convergence difficulties than their minimization counterparts because of the existence of periodic cycles and similar phenomena.
We show that such algorithms may converge to spurious attractors that contain no critical points of the problem.
We illustrate such challenges in simple almost bilinear games.
arXiv Detail & Related papers (2020-06-16T10:49:27Z)
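A minimal illustration of the cycling phenomenon mentioned in the summary above, using the standard bilinear example (not this paper's construction): simultaneous gradient descent-ascent on f(x, y) = x * y drifts away from the unique critical point at the origin.

```python
import math

# Simultaneous gradient descent-ascent on the bilinear game f(x, y) = x * y,
# where x minimises and y maximises. The unique critical point is (0, 0),
# yet each step multiplies the distance to it by sqrt(1 + eta**2) > 1, so
# the iterates spiral outwards and cycle instead of converging.
x, y, eta = 1.0, 0.0, 0.1
for t in range(101):
    if t % 25 == 0:
        print(f"t={t:3d}  x={x:+.3f}  y={y:+.3f}  radius={math.hypot(x, y):.3f}")
    gx, gy = y, x                        # df/dx = y, df/dy = x
    x, y = x - eta * gx, y + eta * gy    # descent in x, ascent in y
```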
- A General Framework for Consistent Structured Prediction with Implicit Loss Embeddings [113.15416137912399]
We propose and analyze a novel theoretical and algorithmic framework for structured prediction.
We study a large class of loss functions that implicitly defines a suitable geometry on the problem.
When dealing with output spaces with infinite cardinality, a suitable implicit formulation of the estimator is shown to be crucial.
arXiv Detail & Related papers (2020-02-13T10:30:04Z)