Time Efficiency in Optimization with a Bayesian-Evolutionary Algorithm
- URL: http://arxiv.org/abs/2005.04166v1
- Date: Mon, 4 May 2020 15:29:22 GMT
- Title: Time Efficiency in Optimization with a Bayesian-Evolutionary Algorithm
- Authors: Gongjin Lan, Jakub M. Tomczak, Diederik M. Roijers, A.E. Eiben
- Abstract summary: We show that not all generate-and-test search algorithms are created equal.
We propose a new algorithm, a combination of Bayesian optimization and an Evolutionary Algorithm, BEA for short, that starts with BO, then transfers knowledge to an EA, and subsequently runs the EA.
The results show that BEA outperforms both BO and the EA in terms of time efficiency, and ultimately leads to better performance on well-known benchmark objective functions with many local optima.
- Score: 13.66850118870667
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Not all generate-and-test search algorithms are created equal. Bayesian
Optimization (BO) invests a lot of computation time to generate the candidate
solution that best balances the predicted value and the uncertainty given all
previous data, taking increasingly more time as the number of evaluations
performed grows. Evolutionary Algorithms (EA), on the other hand, rely on search
heuristics that typically do not depend on all previous data and can generate
candidates in constant time. Both the BO and EA communities typically assess
performance as a function of the number of evaluations. However, this comparison
is unfair once we start to compare the efficiency of these classes of algorithms,
as the overhead times to generate candidate solutions are significantly
different. We suggest measuring the efficiency of generate-and-test search
algorithms as the expected gain in the objective value per unit of computation
time spent. We observe that which algorithm is preferable can change after a
number of function evaluations. We therefore propose a new algorithm, a
combination of Bayesian optimization and an Evolutionary Algorithm, BEA for
short, that starts with BO, then transfers knowledge to an EA, and subsequently
runs the EA. We compare the BEA with BO and the EA. The results show that BEA
outperforms both BO and the EA in terms of time efficiency, and ultimately
leads to better performance on well-known benchmark objective functions with
many local optima. Moreover, we test the three algorithms on nine test cases of
robot learning problems and here again we find that BEA outperforms the other
algorithms.
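The hand-over idea described in the abstract — run BO first, transfer the evaluated points to seed an EA population, then continue with the EA — can be sketched in plain Python. This is a minimal, dependency-free illustration, not the paper's implementation: the BO phase is replaced with uniform random sampling as a stand-in for Gaussian-process-based BO, and all function names and parameters here are hypothetical.

```python
import random

def sphere(x):
    # Toy objective (minimization); the paper uses multimodal benchmarks,
    # but any black-box f works here.
    return sum(v * v for v in x)

def time_efficiency(gain, seconds):
    # The paper's proposed measure: expected gain in objective value
    # per unit of computation time spent.
    return gain / seconds

def bea_sketch(f, dim=3, bo_budget=20, ea_budget=80, pop_size=10, seed=0):
    rng = random.Random(seed)

    # Phase 1: stand-in for BO. A real implementation would fit a Gaussian
    # process and maximize an acquisition function; uniform random sampling
    # is used here only to keep the sketch self-contained.
    evaluated = []
    for _ in range(bo_budget):
        x = [rng.uniform(-5.0, 5.0) for _ in range(dim)]
        evaluated.append((f(x), x))

    # Knowledge transfer: seed the EA population with the best points
    # found during the first phase.
    evaluated.sort(key=lambda t: t[0])
    population = evaluated[:pop_size]

    # Phase 2: simple elitist (mu + 1) EA with Gaussian mutation.
    for _ in range(ea_budget):
        _, parent = population[rng.randrange(len(population))]
        child = [v + rng.gauss(0.0, 0.5) for v in parent]
        population.append((f(child), child))
        population.sort(key=lambda t: t[0])
        population = population[:pop_size]

    best_value, _best_x = population[0]
    return best_value
```

Because the EA phase is elitist, the best value after the hand-over can never be worse than the best value found in the first phase alone, which mirrors the paper's claim that BEA inherits BO's early progress.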
Related papers
- Improved discrete particle swarm optimization using Bee Algorithm and multi-parent crossover method (Case study: Allocation problem and benchmark functions) [0.3683202928838613]
This paper proposes the onlooker multi-parent crossover discrete particle swarm optimization (OMPCDPSO) algorithm.
We performed an independent and intensive neighborhood search using the onlooker bees of the bee algorithm.
The algorithm developed in this research (OMPCDPSO) outperforms the other algorithms on 36 of 47 test functions (76.60%).
arXiv Detail & Related papers (2024-03-15T21:08:37Z) - Effective anytime algorithm for multiobjective combinatorial optimization problems [3.2061579211871383]
A set of efficient solutions that are well-spread in the objective space is preferred to provide the decision maker with a great variety of solutions.
We propose a new exact algorithm for multiobjective optimization combining three novel ideas to enhance the anytime behavior.
arXiv Detail & Related papers (2024-02-06T11:53:44Z) - Quality-Diversity Algorithms Can Provably Be Helpful for Optimization [24.694984679399315]
Quality-Diversity (QD) algorithms aim to find a set of high-performing, yet diverse solutions.
This paper tries to shed some light on the optimization ability of QD algorithms via rigorous running time analysis.
arXiv Detail & Related papers (2024-01-19T07:40:24Z) - Comparison of High-Dimensional Bayesian Optimization Algorithms on BBOB [0.40498500266986387]
We compare five state-of-the-art high-dimensional BO algorithms, together with vanilla BO and CMA-ES, at increasing dimensionality, ranging from 10 to 60 variables.
Our results confirm the superiority of BO over CMA-ES for limited evaluation budgets.
arXiv Detail & Related papers (2023-03-02T01:14:15Z) - A Fully Single Loop Algorithm for Bilevel Optimization without Hessian Inverse [121.54116938140754]
We propose a new Hessian inverse free Fully Single Loop Algorithm for bilevel optimization problems.
We show that our algorithm converges at a rate of $O(\epsilon^{-2})$.
arXiv Detail & Related papers (2021-12-09T02:27:52Z) - Machine Learning for Online Algorithm Selection under Censored Feedback [71.6879432974126]
In online algorithm selection (OAS), instances of an algorithmic problem class are presented to an agent one after another, and the agent has to quickly select a presumably best algorithm from a fixed set of candidate algorithms.
For decision problems such as satisfiability (SAT), quality typically refers to the algorithm's runtime.
In this work, we revisit multi-armed bandit algorithms for OAS and discuss their capability of dealing with the problem.
We adapt them towards runtime-oriented losses, allowing for partially censored data while keeping a space- and time-complexity independent of the time horizon.
arXiv Detail & Related papers (2021-09-13T18:10:52Z) - Provably Faster Algorithms for Bilevel Optimization [54.83583213812667]
Bilevel optimization has been widely applied in many important machine learning applications.
We propose two new algorithms for bilevel optimization.
We show that both algorithms achieve a complexity of $\mathcal{O}(\epsilon^{-1.5})$, which outperforms all existing algorithms by an order of magnitude.
arXiv Detail & Related papers (2021-06-08T21:05:30Z) - Bayesian Algorithm Execution: Estimating Computable Properties of Black-box Functions Using Mutual Information [78.78486761923855]
In many real world problems, we want to infer some property of an expensive black-box function f, given a budget of T function evaluations.
We present a procedure, InfoBAX, that sequentially chooses queries that maximize mutual information with respect to the algorithm's output.
On these problems, InfoBAX uses up to 500 times fewer queries to f than required by the original algorithm.
arXiv Detail & Related papers (2021-04-19T17:22:11Z) - Revisiting Bayesian Optimization in the light of the COCO benchmark [1.4467794332678539]
This article reports a large investigation about the effects on the performance of (Gaussian process based) BO of common and less common design choices.
The code developed for this study makes the new version (v2.1.1) of the R package DiceOptim available on CRAN.
arXiv Detail & Related papers (2021-03-30T19:45:18Z) - Double Coverage with Machine-Learned Advice [100.23487145400833]
We study the fundamental online $k$-server problem in a learning-augmented setting.
We show that our algorithm achieves for any k an almost optimal consistency-robustness tradeoff.
arXiv Detail & Related papers (2021-03-02T11:04:33Z) - New Oracle-Efficient Algorithms for Private Synthetic Data Release [52.33506193761153]
We present three new algorithms for constructing differentially private synthetic data.
The algorithms satisfy differential privacy even in the worst case.
Compared to the state-of-the-art High-Dimensional Matrix Mechanism [McKennaMHM18], our algorithms provide better accuracy on large workloads.
arXiv Detail & Related papers (2020-07-10T15:46:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.