Representation-agnostic distance-driven perturbation for optimizing
ill-conditioned problems
- URL: http://arxiv.org/abs/2306.02985v1
- Date: Mon, 5 Jun 2023 15:54:21 GMT
- Title: Representation-agnostic distance-driven perturbation for optimizing
ill-conditioned problems
- Authors: Kirill Antonov, Anna V. Kononova, Thomas Bäck, Niki van Stein
- Abstract summary: Locality is crucial for efficiently optimising black-box problems with randomized search heuristics.
We propose two mutation operators to solve such optimization problems more efficiently using the metric.
We experimentally show that both mutation operators speed up the search on most of the functions when applied in considered evolutionary algorithms and random local search.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Locality is a crucial property for efficiently optimising black-box problems
with randomized search heuristics. However, in practical applications it is
often not possible to find a genotype encoding of candidate solutions for which
this property holds with respect to the Hamming distance. At the same time,
it may be possible to use domain-specific knowledge to define a metric with
locality property. We propose two mutation operators to solve such optimization
problems more efficiently using the metric. The first operator assumes prior
knowledge about the distance, the second operator uses the distance as a black
box. Both operators apply an estimation-of-distribution algorithm to find the
best mutant according to a function, defined in the paper, that employs the
given distance. For pseudo-boolean and integer optimization problems, we
experimentally show that both mutation operators speed up the search on most of
the functions when applied in considered evolutionary algorithms and random
local search. Moreover, those operators can be applied in any randomized search
heuristic that uses perturbations. However, our mutation operators increase
wall-clock time, so they are helpful in practice when the distance is (much)
cheaper to compute than the real objective function.
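The abstract's idea of choosing a mutant by a distance-based criterion can be illustrated with a minimal sketch. This is not the authors' EDA-based operator: the scoring rule (prefer candidates whose distance to the parent is closest to a target radius), the `target_radius` parameter, and the Manhattan distance are placeholder assumptions.

```python
import random

def distance_driven_mutation(parent, distance, target_radius, n_candidates=32, rng=None):
    """Among random perturbations of `parent`, keep the candidate whose
    black-box distance to the parent is closest to `target_radius`.
    (Illustrative stand-in for the paper's EDA-based operator.)"""
    rng = rng or random.Random(0)
    best, best_score = None, float("inf")
    for _ in range(n_candidates):
        # Perturb each integer gene by -1, 0, or +1 (toy perturbation).
        cand = [g + rng.choice([-1, 0, 1]) for g in parent]
        score = abs(distance(parent, cand) - target_radius)
        if score < best_score:
            best, best_score = cand, score
    return best

# Toy black-box distance: Manhattan distance on integer genotypes.
manhattan = lambda a, b: sum(abs(x - y) for x, y in zip(a, b))

parent = [5, 5, 5, 5]
mutant = distance_driven_mutation(parent, manhattan, target_radius=2)
```

Note that the operator only queries `distance`, never the objective function, which matches the setting where the distance is much cheaper than the objective.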
Related papers
- Digging Deeper: Operator Analysis for Optimizing Nonlinearity of Boolean Functions [8.382710169577447]
We investigate the effects of genetic operators for bit-string encoding in optimizing nonlinearity.
By observing the range of possible changes an operator can provide, one can use this information to design a more effective combination of genetic operators.
arXiv Detail & Related papers (2023-02-12T10:34:01Z)
- Fast Computation of Optimal Transport via Entropy-Regularized Extragradient Methods [75.34939761152587]
Efficient computation of the optimal transport distance between two distributions serves as an algorithm that empowers various applications.
This paper develops a scalable first-order optimization-based method that computes optimal transport to within $\varepsilon$ additive accuracy.
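For context, entropy-regularized optimal transport is commonly solved by Sinkhorn scaling; a minimal sketch follows. The cited paper develops an extragradient method instead of Sinkhorn, and the cost matrix, marginals, and regularization `eps` below are illustrative assumptions.

```python
import math

def sinkhorn(cost, a, b, eps=0.1, iters=200):
    """Entropy-regularized OT via Sinkhorn scaling (a standard baseline;
    the cited paper uses an extragradient method instead)."""
    # Gibbs kernel K_ij = exp(-C_ij / eps).
    K = [[math.exp(-c / eps) for c in row] for row in cost]
    n, m = len(a), len(b)
    u, v = [1.0] * n, [1.0] * m
    for _ in range(iters):
        # Alternate scaling to match the row and column marginals.
        for i in range(n):
            u[i] = a[i] / sum(K[i][j] * v[j] for j in range(m))
        for j in range(m):
            v[j] = b[j] / sum(K[i][j] * u[i] for i in range(n))
    # Transport plan P_ij = u_i * K_ij * v_j.
    return [[u[i] * K[i][j] * v[j] for j in range(m)] for i in range(n)]

cost = [[0.0, 1.0], [1.0, 0.0]]
P = sinkhorn(cost, [0.5, 0.5], [0.5, 0.5])
```

On this symmetric toy cost, the plan concentrates on the zero-cost diagonal while matching both marginals.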
arXiv Detail & Related papers (2023-01-30T15:46:39Z)
- Optimal Algorithms for Mean Estimation under Local Differential Privacy [55.32262879188817]
We show that PrivUnit achieves the optimal variance among a large family of locally private randomizers.
We also develop a new variant of PrivUnit based on the Gaussian distribution which is more amenable to mathematical analysis and enjoys the same optimality guarantees.
arXiv Detail & Related papers (2022-05-05T06:43:46Z)
- Bayesian Algorithm Execution: Estimating Computable Properties of Black-box Functions Using Mutual Information [78.78486761923855]
In many real-world problems, we want to infer some property of an expensive black-box function f, given a budget of T function evaluations.
We present a procedure, InfoBAX, that sequentially chooses queries that maximize mutual information with respect to the algorithm's output.
On these problems, InfoBAX uses up to 500 times fewer queries to f than required by the original algorithm.
arXiv Detail & Related papers (2021-04-19T17:22:11Z)
- Adaptive and Universal Algorithms for Variational Inequalities with Optimal Convergence [29.189409618561964]
We develop new adaptive algorithms for variational inequalities with monotone operators.
Our algorithms automatically adapt to unknown problem parameters.
We show that our algorithms are universal and simultaneously achieve the optimal convergence rates.
arXiv Detail & Related papers (2020-10-15T14:44:26Z)
- Evolutionary Algorithms with Self-adjusting Asymmetric Mutation [0.0]
We analyze an asymmetric mutation operator that can treat zero- and one-bits differently.
We show improved runtime results on the class of functions OneMax$_a$ describing the number of matching bits.
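An asymmetric bitwise mutation can be sketched as flipping zero-bits and one-bits with different probabilities. The flip probabilities `p0` and `p1` below are placeholders; the paper's operator self-adjusts these rates during the run.

```python
import random

def asymmetric_mutate(bits, p0, p1, rng=None):
    """Flip each 0-bit with probability p0 and each 1-bit with probability p1.
    (Illustrative sketch; the paper self-adjusts the asymmetry.)"""
    rng = rng or random.Random(1)
    return [1 - b if rng.random() < (p0 if b == 0 else p1) else b
            for b in bits]
```

With `p0 = p1` this reduces to standard bitwise mutation; the asymmetry lets the operator bias the search toward strings with more ones (or zeros).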
arXiv Detail & Related papers (2020-06-16T13:16:50Z)
- Learning to Accelerate Heuristic Searching for Large-Scale Maximum Weighted b-Matching Problems in Online Advertising [51.97494906131859]
Bipartite b-matching is fundamental in algorithm design and has been widely applied in economic markets, labor markets, etc.
Existing exact and approximate algorithms usually fail in such settings because they require intolerable running time or too much computational resource.
We propose NeuSearcher, which leverages knowledge learned from previous instances to solve new problem instances.
arXiv Detail & Related papers (2020-05-09T02:48:23Z)
- Fast Mutation in Crossover-based Algorithms [8.34061303235504]
A heavy-tailed mutation operator proposed in Doerr, Le, Makhmara and Nguyen (GECCO) is analyzed in crossover-based algorithms.
An empirical study shows the effectiveness of the fast mutation also on random satisfiable Max-3SAT instances.
arXiv Detail & Related papers (2020-04-14T14:16:42Z)
- Incorporating Expert Prior in Bayesian Optimisation via Space Warping [54.412024556499254]
In large search spaces, the algorithm passes through several low-function-value regions before reaching the optimum of the function.
One approach to shorten this cold-start phase is to use prior knowledge that can accelerate the optimisation.
In this paper, we represent the prior knowledge about the function optimum through a prior distribution.
The prior distribution is then used to warp the search space so that it expands around the high-probability region of the function optimum and shrinks around the low-probability regions.
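The warping idea can be sketched by pushing uniform samples through the inverse CDF of a truncated Gaussian prior over the optimum's location, so samples concentrate near the believed optimum. The choice of a Gaussian prior and the parameters `mu`, `sigma`, and the truncation bounds are illustrative assumptions, not the paper's exact construction.

```python
from statistics import NormalDist

def warp(u, mu, sigma, lo, hi):
    """Map a uniform sample u in [0, 1] through the inverse CDF of a Gaussian
    prior truncated to [lo, hi], concentrating samples near the believed
    optimum mu. (Sketch of prior-driven space warping.)"""
    nd = NormalDist(mu, sigma)
    a, b = nd.cdf(lo), nd.cdf(hi)  # probability mass inside the box
    return nd.inv_cdf(a + u * (b - a))
```

A uniform grid over `[0, 1]` warped this way becomes dense near `mu` and sparse near the box edges, which is the intended expand/shrink effect.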
arXiv Detail & Related papers (2020-03-27T06:18:49Z)
- Differentiable Top-k Operator with Optimal Transport [135.36099648554054]
The SOFT top-k operator approximates the output of the top-k operation as the solution of an Entropic Optimal Transport (EOT) problem.
We apply the proposed operator to the k-nearest neighbors and beam search algorithms, and demonstrate improved performance.
arXiv Detail & Related papers (2020-02-16T04:57:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.