Testing Surrogate-Based Optimization with the Fortified Branin-Hoo
Extended to Four Dimensions
- URL: http://arxiv.org/abs/2107.08035v1
- Date: Fri, 16 Jul 2021 17:56:32 GMT
- Title: Testing Surrogate-Based Optimization with the Fortified Branin-Hoo
Extended to Four Dimensions
- Authors: Charles F. Jekel, Raphael T. Haftka
- Abstract summary: This paper examines the effect of fortifying the Branin-Hoo function on surrogate-based optimization.
It is found that the difference between the ordinary function and the fortified one was much more pronounced for the four-dimensional function.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Some popular functions used to test global optimization algorithms have
multiple local optima, all with the same value, making them all global optima.
It is easy to make them more challenging by fortifying them via adding a
localized bump at the location of one of the optima. In previous work the
authors illustrated this for the Branin-Hoo function and the popular
differential evolution algorithm, showing that the fortified Branin-Hoo
required an order of magnitude more function evaluations. This paper examines
the effect of fortifying the Branin-Hoo function on surrogate-based
optimization, which usually proceeds by adaptive sampling. Two algorithms are
considered: the EGO algorithm, which is based on a Gaussian process (GP), and an
algorithm based on radial basis functions (RBF). EGO is found to be more frugal
in terms of the number of function evaluations required to identify the correct
basin, but it is expensive to run on a desktop, limiting the number
of times the runs could be repeated to establish sound statistics on the number
of required function evaluations. The RBF algorithm was cheaper to run,
providing more sound statistics on performance. A four-dimensional version of
the Branin-Hoo function was introduced in order to assess the effect of
dimensionality. It was found that the difference between the ordinary function
and the fortified one was much more pronounced for the four-dimensional
function compared to the two-dimensional one.
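A minimal sketch of the fortification idea, assuming one plausible construction: the standard two-dimensional Branin-Hoo (whose three global minima all share the value ~0.397887) and a fortified variant in which one of those minima is deepened by a narrow Gaussian bump so that only its basin contains the true global optimum. The choice of which optimum to fortify and the bump depth/width are illustrative assumptions; the paper's exact fortification and its four-dimensional extension are not reproduced here.
```python
# Minimal sketch (not the authors' code): standard Branin-Hoo plus a fortified
# variant that deepens one of its three equal-valued global minima with a narrow
# Gaussian bump. Bump location, depth, and width are illustrative assumptions.
import numpy as np

def branin_hoo(x1, x2):
    """Standard Branin-Hoo: three global minima, all with value ~0.397887."""
    a, b, c = 1.0, 5.1 / (4.0 * np.pi ** 2), 5.0 / np.pi
    r, s, t = 6.0, 10.0, 1.0 / (8.0 * np.pi)
    return a * (x2 - b * x1 ** 2 + c * x1 - r) ** 2 + s * (1.0 - t) * np.cos(x1) + s

# One of the three global optima of the ordinary function (the others lie near
# (-pi, 12.275) and (9.42478, 2.475)).
X_STAR = np.array([np.pi, 2.275])

def fortified_branin_hoo(x1, x2, depth=1.0, width=0.5):
    """Deepen the optimum at X_STAR so it becomes the unique global minimum."""
    d2 = (x1 - X_STAR[0]) ** 2 + (x2 - X_STAR[1]) ** 2
    return branin_hoo(x1, x2) - depth * np.exp(-d2 / (2.0 * width ** 2))

if __name__ == "__main__":
    # The fortified value is lower only near the chosen optimum; the other two
    # optima are essentially unchanged and become merely local.
    print(branin_hoo(*X_STAR), fortified_branin_hoo(*X_STAR))
    print(branin_hoo(-np.pi, 12.275), fortified_branin_hoo(-np.pi, 12.275))
```
Until the neighbourhood of the fortified optimum is sampled, all three basins look equally promising to a surrogate, which is why the fortified version demands more adaptive-sampling evaluations to identify the correct basin.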
Related papers
- Benchmarking of GPU-optimized Quantum-Inspired Evolutionary Optimization Algorithm using Functional Analysis [0.0]
This article presents a comparative analysis of a GPU-parallelized implementation of quantum-inspired evolutionary optimization (QIEO).
The results demonstrate that QIEO performs better than a genetic algorithm (GA) on the benchmark functions considered.
arXiv Detail & Related papers (2024-12-12T06:47:23Z) - Gradient-free optimization of highly smooth functions: improved analysis
and a new algorithm [87.22224691317766]
This work studies problems with zero-order noisy oracle information under the assumption that the objective function is highly smooth.
We consider two kinds of zero-order projected gradient descent algorithms.
arXiv Detail & Related papers (2023-06-03T17:05:13Z) - Bayesian Optimization over Permutation Spaces [30.650753803587794]
We propose and evaluate two algorithms for BO over Permutation Spaces (BOPS)
We theoretically analyze the performance of BOPS-T to show that its regret grows sub-linearly.
Our experiments on multiple synthetic and real-world benchmarks show that both BOPS-T and BOPS-H perform better than the state-of-the-art BO algorithm for such spaces.
arXiv Detail & Related papers (2021-12-02T08:20:50Z) - Bayesian Optimistic Optimisation with Exponentially Decaying Regret [58.02542541410322]
The current practical BO algorithms have regret bounds ranging from $\mathcal{O}(\frac{\log N}{\sqrt{N}})$ to $\mathcal{O}(e^{-\sqrt{N}})$, where $N$ is the number of evaluations.
This paper explores the possibility of improving the regret bound in the noiseless setting by intertwining concepts from BO and tree-based optimistic optimisation.
We propose the BOO algorithm, a first practical approach which can achieve an exponential regret bound with order $\mathcal{O}(N^{-\sqrt{N}})$.
arXiv Detail & Related papers (2021-05-10T13:07:44Z) - Bayesian Algorithm Execution: Estimating Computable Properties of
Black-box Functions Using Mutual Information [78.78486761923855]
In many real world problems, we want to infer some property of an expensive black-box function f, given a budget of T function evaluations.
We present a procedure, InfoBAX, that sequentially chooses queries that maximize mutual information with respect to the algorithm's output.
On these problems, InfoBAX uses up to 500 times fewer queries to f than required by the original algorithm.
arXiv Detail & Related papers (2021-04-19T17:22:11Z) - Revisiting Bayesian Optimization in the light of the COCO benchmark [1.4467794332678539]
This article reports a large investigation of the effects of common and less common design choices on the performance of (Gaussian process based) BO.
The code developed for this study makes the new version (v2.1.1) of the R package DiceOptim available on CRAN.
arXiv Detail & Related papers (2021-03-30T19:45:18Z) - Finding Global Minima via Kernel Approximations [90.42048080064849]
We consider the global minimization of smooth functions based solely on function evaluations.
In this paper, we consider an approach that jointly models the function to approximate and finds a global minimum.
arXiv Detail & Related papers (2020-12-22T12:59:30Z) - A Domain-Shrinking based Bayesian Optimization Algorithm with
Order-Optimal Regret Performance [16.0251555430107]
This is the first GP-based algorithm with an order-optimal regret guarantee.
Compared with the prevailing GP-UCB family of algorithms, the proposed algorithm reduces computational complexity by a factor of $O(T^{2d-1})$.
arXiv Detail & Related papers (2020-10-27T02:15:15Z) - Sequential Subspace Search for Functional Bayesian Optimization
Incorporating Experimenter Intuition [63.011641517977644]
Our algorithm generates a sequence of finite-dimensional random subspaces of functional space spanned by a set of draws from the experimenter's Gaussian Process.
Standard Bayesian optimisation is applied on each subspace, and the best solution found is used as a starting point (origin) for the next subspace.
We test our algorithm in simulated and real-world experiments, namely blind function matching, finding the optimal precipitation-strengthening function for an aluminium alloy, and learning rate schedule optimisation for deep networks.
arXiv Detail & Related papers (2020-09-08T06:54:11Z) - Incorporating Expert Prior in Bayesian Optimisation via Space Warping [54.412024556499254]
In big search spaces the algorithm goes through several low function value regions before reaching the optimum of the function.
One approach to shorten this cold-start phase is to use prior knowledge that can accelerate the optimisation.
In this paper, we represent the prior knowledge about the function optimum through a prior distribution.
The prior distribution is then used to warp the search space in such a way that the space expands around the high-probability region of the function optimum and shrinks around the low-probability regions.
arXiv Detail & Related papers (2020-03-27T06:18:49Z)
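As a rough one-dimensional illustration of the space-warping idea in the last entry (a sketch under assumed prior parameters, not that paper's actual transformation), the prior over the optimum's location can serve as a change of coordinates: optimising in the warped coordinate $u = \mathrm{CDF}(x)$ makes points spread uniformly in $u$ concentrate near the prior's high-probability region of the original space.
```python
# Illustrative 1-D sketch of prior-based space warping (not the paper's method):
# warp the search interval through the CDF of a truncated-Gaussian prior over the
# optimum's location, so points uniform in the warped coordinate u in [0, 1]
# cluster near the prior mean in the original coordinate x.
# The prior mean/std and the bounds below are assumptions chosen for the example.
import numpy as np
from scipy.stats import truncnorm

def make_warp(lo, hi, prior_mean, prior_std):
    """Return maps between the warped coordinate u in [0, 1] and x in [lo, hi]."""
    a, b = (lo - prior_mean) / prior_std, (hi - prior_mean) / prior_std
    prior = truncnorm(a, b, loc=prior_mean, scale=prior_std)
    return prior.ppf, prior.cdf   # u -> x (inverse CDF), x -> u (CDF)

to_x, to_u = make_warp(lo=-5.0, hi=10.0, prior_mean=3.0, prior_std=1.5)
u_grid = np.linspace(0.01, 0.99, 9)   # uniform in the warped space ...
print(np.round(to_x(u_grid), 2))      # ... clustered around x = 3 in the original space
```
A BO loop would then run its acquisition optimisation over u, so the cold-start phase spends less of its evaluation budget in low-prior regions of the original space.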