The importance of being constrained: dealing with infeasible solutions
in Differential Evolution and beyond
- URL: http://arxiv.org/abs/2203.03512v1
- Date: Mon, 7 Mar 2022 16:49:18 GMT
- Title: The importance of being constrained: dealing with infeasible solutions
in Differential Evolution and beyond
- Authors: Anna V. Kononova, Diederick Vermetten, Fabio Caraffini, Madalina-A.
Mitran and Daniela Zaharie
- Abstract summary: We argue that results produced by an optimisation algorithm cannot be considered reproducible unless the algorithm fully specifies what should be done with solutions generated outside the domain.
Here, we demonstrate that, at least in algorithms based on Differential Evolution, this choice induces notably different behaviours.
We urge the field of optimisation to formalise and adopt the idea of a new algorithmic component in optimisers.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We argue that results produced by a heuristic optimisation algorithm cannot
be considered reproducible unless the algorithm fully specifies what should be
done with solutions generated outside the domain, even in the case of simple
box constraints. Currently, in the field of heuristic optimisation, such
specification is rarely mentioned or investigated due to the assumed triviality
or insignificance of this question. Here, we demonstrate that, at least in
algorithms based on Differential Evolution, this choice induces notably
different behaviours - in terms of performance, disruptiveness and population
diversity. This is shown theoretically (where possible) for standard
Differential Evolution in the absence of selection pressure and experimentally
for the standard and state-of-the-art Differential Evolution variants on
the special test function $f_0$ and the BBOB benchmarking suite, respectively.
Moreover, we demonstrate that the importance of this choice quickly grows with
the problem's dimensionality. Differential Evolution is not at all special in
this regard - there is no reason to presume that other heuristic optimisers are
not equally affected by the aforementioned algorithmic choice. Thus, we urge the
field of heuristic optimisation to formalise and adopt the idea of a new
algorithmic component in heuristic optimisers, which we call here a strategy of
dealing with infeasible solutions. This component needs to be consistently (a)
specified in algorithmic descriptions to guarantee reproducibility of results,
(b) studied to better understand its impact on an algorithm's performance in a
wider sense and (c) included in the (automatic) algorithmic design. All of
these should be done even for problems with box constraints.
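The "strategy of dealing with infeasible solutions" the abstract calls for can be made concrete for box constraints. Below is a minimal, illustrative sketch of four commonly discussed correction strategies (saturation, mirroring, toroidal wrap, and resampling); the function names are ours, not the paper's, and real implementations would be vectorised:

```python
import random

def clip(x, lo, hi):
    # Saturation: project each out-of-bounds coordinate onto the nearest bound.
    return [min(max(v, l), h) for v, l, h in zip(x, lo, hi)]

def mirror(x, lo, hi):
    # Reflection: fold an out-of-bounds coordinate back into the domain,
    # as if the bounds were mirrors.
    out = []
    for v, l, h in zip(x, lo, hi):
        width = h - l
        v = (v - l) % (2 * width)
        out.append(l + (v if v <= width else 2 * width - v))
    return out

def toroidal(x, lo, hi):
    # Wrap-around: treat the domain as periodic in every coordinate.
    return [l + (v - l) % (h - l) for v, l, h in zip(x, lo, hi)]

def resample(x, lo, hi, rng=random):
    # Replace each infeasible coordinate with a uniform random feasible value.
    return [v if l <= v <= h else rng.uniform(l, h)
            for v, l, h in zip(x, lo, hi)]
```

As the abstract argues, which of these (or other) corrections an optimiser applies changes its behaviour, so an algorithm description that omits this choice is not fully reproducible.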
Related papers
- Sample-efficient Bayesian Optimisation Using Known Invariances [56.34916328814857]
We show that vanilla and constrained BO algorithms are inefficient when optimising invariant objectives.
We derive a bound on the maximum information gain of these invariant kernels.
We use our method to design a current drive system for a nuclear fusion reactor, finding a high-performance solution.
arXiv Detail & Related papers (2024-10-22T12:51:46Z)
- On Probabilistic Embeddings in Optimal Dimension Reduction [1.2085509610251701]
Dimension reduction algorithms are a crucial part of many data science pipelines.
Despite their wide utilization, many non-linear dimension reduction algorithms are poorly understood from a theoretical perspective.
arXiv Detail & Related papers (2024-08-05T12:46:21Z)
- Backpropagation of Unrolled Solvers with Folded Optimization [55.04219793298687]
The integration of constrained optimization models as components in deep networks has led to promising advances on many specialized learning tasks.
One typical strategy is algorithm unrolling, which relies on automatic differentiation through the operations of an iterative solver.
This paper provides theoretical insights into the backward pass of unrolled optimization, leading to a system for generating efficiently solvable analytical models of backpropagation.
arXiv Detail & Related papers (2023-01-28T01:50:42Z)
- Do We Really Need to Use Constraint Violation in Constrained Evolutionary Multi-Objective Optimization? [13.833668582211876]
Constraint violation has been a building block to design evolutionary multi-objective optimization algorithms.
This paper develops the corresponding variants that replace the constraint violation by a crisp value.
From our experiments on both synthetic and real-world benchmark test problems, we find that the performance of the selected algorithms has not been significantly influenced.
arXiv Detail & Related papers (2022-05-28T06:29:07Z)
- Amortized Implicit Differentiation for Stochastic Bilevel Optimization [53.12363770169761]
We study a class of algorithms for solving bilevel optimization problems in both deterministic and stochastic settings.
We exploit a warm-start strategy to amortize the estimation of the exact gradient.
By using this framework, our analysis shows these algorithms to match the computational complexity of methods that have access to an unbiased estimate of the gradient.
arXiv Detail & Related papers (2021-11-29T15:10:09Z)
- Result Diversification by Multi-objective Evolutionary Algorithms with Theoretical Guarantees [94.72461292387146]
We propose to reformulate the result diversification problem as a bi-objective search problem, and solve it by a multi-objective evolutionary algorithm (EA).
We theoretically prove that the GSEMO can achieve the optimal-time approximation ratio, $1/2$.
When the objective function changes dynamically, the GSEMO can maintain this approximation ratio in running time, addressing the open question proposed by Borodin et al.
arXiv Detail & Related papers (2021-10-18T14:00:22Z)
- Fractal Structure and Generalization Properties of Stochastic Optimization Algorithms [71.62575565990502]
We prove that the generalization error of an optimization algorithm can be bounded by the complexity of the fractal structure that underlies its generalization measure.
We further specialize our results to specific problems (e.g., linear/logistic regression, one-hidden-layer neural networks) and algorithms.
arXiv Detail & Related papers (2021-06-09T08:05:36Z)
- Emergence of Structural Bias in Differential Evolution [0.0]
Heuristic optimisation algorithms are in high demand due to the overwhelming amount of complex optimisation problems.
Many algorithms show structural bias, by either being attracted to a certain region of the search space or by consistently avoiding regions of the search space.
We analyse the emergence of such structural bias for Differential Evolution (DE) configurations and, specifically, the effect of different mutation, crossover and correction strategies.
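For context on where infeasible candidates arise in the operators this summary mentions, here is a minimal sketch of candidate generation in the standard DE/rand/1/bin scheme (textbook form, not the specific configurations studied in the paper). The donor vector produced by mutation can leave the box, which is exactly the point where a correction strategy must be applied:

```python
import random

def de_rand_1_bin(pop, F=0.5, CR=0.9, rng=random):
    """One generation of trial-vector creation for DE/rand/1/bin.

    pop: list of candidate vectors (lists of floats), len(pop) >= 4.
    F:   differential weight; CR: crossover rate.
    Note: mutation can push donor coordinates outside any box
    constraints, so a correction strategy would be applied to the
    donors (or trials) before selection.
    """
    n, d = len(pop), len(pop[0])
    trials = []
    for i in range(n):
        # Pick three distinct individuals different from the target i.
        r1, r2, r3 = rng.sample([j for j in range(n) if j != i], 3)
        # DE/rand/1 mutation: base vector plus scaled difference vector.
        donor = [pop[r1][k] + F * (pop[r2][k] - pop[r3][k]) for k in range(d)]
        # Binomial crossover; jrand guarantees at least one donor coordinate.
        jrand = rng.randrange(d)
        trial = [donor[k] if (rng.random() < CR or k == jrand) else pop[i][k]
                 for k in range(d)]
        trials.append(trial)
    return trials
```

Because the difference term $F \cdot (x_{r2} - x_{r3})$ is unbounded relative to the box, the choice of correction strategy (and its interaction with mutation and crossover) shapes the structural bias analysed in the paper.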
arXiv Detail & Related papers (2021-05-10T22:22:27Z)
- Devolutionary genetic algorithms with application to the minimum labeling Steiner tree problem [0.0]
This paper characterizes and discusses devolutionary genetic algorithms and evaluates their performances in solving the minimum labeling Steiner tree (MLST) problem.
We define devolutionary algorithms as the process of reaching a feasible solution by devolving a population of super-optimal unfeasible solutions over time.
We show how classical evolutionary concepts, such as crossing, mutation and fitness can be adapted to aim at reaching an optimal or close-to-optimal solution.
arXiv Detail & Related papers (2020-04-18T13:27:28Z)
- Adaptivity of Stochastic Gradient Methods for Nonconvex Optimization [71.03797261151605]
Adaptivity is an important yet under-studied property in modern optimization theory.
Our algorithm is proved to achieve the best-available convergence for non-PL objectives simultaneously while outperforming existing algorithms for PL objectives.
arXiv Detail & Related papers (2020-02-13T05:42:27Z)
- On the Performance of Metaheuristics: A Different Perspective [0.0]
We study some basic evolutionary and swarm-intelligence metaheuristics, i.e. Genetic Algorithm (GA), Particle Swarm Optimization (PSO), Artificial Bee Colony (ABC), Teaching-Learning-Based Optimization (TLBO) and Cuckoo Optimization Algorithm (COA).
A large number of experiments have been conducted on 20 different optimization benchmark functions with different characteristics, and the results reveal some fundamental conclusions as well as a ranking order among these metaheuristics.
arXiv Detail & Related papers (2020-01-24T09:34:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.