Non-smooth Bayesian Optimization in Tuning Problems
- URL: http://arxiv.org/abs/2109.07563v1
- Date: Wed, 15 Sep 2021 20:22:09 GMT
- Title: Non-smooth Bayesian Optimization in Tuning Problems
- Authors: Hengrui Luo, James W. Demmel, Younghyun Cho, Xiaoye S. Li, Yang Liu
- Abstract summary: Building surrogate models is one common approach when we attempt to learn unknown black-box functions.
We propose a novel additive Gaussian process model called clustered Gaussian process (cGP), where the additive components are induced by clustering.
In the examples we studied, the performance can be improved by as much as 90% among repetitive experiments.
- Score: 5.768843113172494
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Building surrogate models is one common approach when we attempt to learn
unknown black-box functions. Bayesian optimization provides a framework which
allows us to build surrogate models based on sequential samples drawn from the
function and find the optimum. Tuning algorithmic parameters to optimize the
performance of large, complicated "black-box" application codes is one
important such application, which amounts to finding the optima of black-box
functions.
Within the Bayesian optimization framework, the Gaussian process model produces
smooth or continuous sample paths. However, the black-box function in the
tuning problem is often non-smooth. This difficult tuning problem is worsened
by the fact that we usually have limited sequential samples from the black-box
function. Motivated by these issues encountered in tuning, we propose a novel
additive Gaussian process model called clustered Gaussian process (cGP), where
the additive components are induced by clustering. This surrogate model is
designed to capture the non-smoothness of the black-box function. In the
examples we studied, performance improved by as much as 90% across repeated
experiments. In addition to an algorithm for constructing this model, we also
apply the model to several artificial and real applications to evaluate it.
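The clustering-then-local-GP idea can be illustrated with a short sketch. This is a hedged, minimal illustration only: k-means in the joint input-output space, an RBF kernel, and nearest-center routing are stand-in choices, not the paper's cGP algorithm.

```python
# Minimal sketch of a clustered GP surrogate: partition the observed
# samples, fit one GP per cluster, and predict with the GP of the
# nearest cluster.  Assumptions: k-means and an RBF kernel stand in
# for the paper's clustering and kernel choices.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

class ClusteredGP:
    def __init__(self, n_clusters=2):
        self.n_clusters = n_clusters

    def fit(self, X, y):
        # Cluster in the joint (input, output) space so that jumps in
        # the response can separate the additive components.
        Z = np.column_stack([X, y])
        labels = KMeans(n_clusters=self.n_clusters, n_init=10).fit(Z).labels_
        self.centers_X = np.array(
            [X[labels == k].mean(axis=0) for k in range(self.n_clusters)])
        self.gps = []
        for k in range(self.n_clusters):
            gp = GaussianProcessRegressor(kernel=RBF(), alpha=1e-6)
            gp.fit(X[labels == k], y[labels == k])
            self.gps.append(gp)
        return self

    def predict(self, Xnew):
        # Route each query point to the GP whose cluster center
        # (in input space) is closest.
        d = np.linalg.norm(
            Xnew[:, None, :] - self.centers_X[None, :, :], axis=-1)
        nearest = d.argmin(axis=1)
        mu, sd = np.empty(len(Xnew)), np.empty(len(Xnew))
        for k in range(self.n_clusters):
            mask = nearest == k
            if mask.any():
                mu[mask], sd[mask] = self.gps[k].predict(
                    Xnew[mask], return_std=True)
        return mu, sd

# Example: a step discontinuity that a single smooth GP would blur.
X = np.linspace(0, 1, 40)[:, None]
y = np.where(X[:, 0] < 0.5, np.sin(6 * X[:, 0]), 2.0 + np.sin(6 * X[:, 0]))
mu, sd = ClusteredGP(n_clusters=2).fit(X, y).predict(X)
```

Because each cluster gets its own GP, the posterior is free to jump across cluster boundaries instead of smoothing the step away.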
Related papers
- Covariance-Adaptive Sequential Black-box Optimization for Diffusion Targeted Generation [60.41803046775034]
We show how to perform user-preferred targeted generation via diffusion models with only black-box target scores of users.
Experiments on both numerical test problems and target-guided 3D-molecule generation tasks show the superior performance of our method in achieving better target scores.
arXiv Detail & Related papers (2024-06-02T17:26:27Z)
- Tree ensemble kernels for Bayesian optimization with known constraints over mixed-feature spaces [54.58348769621782]
Tree ensembles can be well-suited for black-box optimization tasks such as algorithm tuning and neural architecture search.
Two well-known challenges in using tree ensembles for black-box optimization are (i) effectively quantifying model uncertainty for exploration and (ii) optimizing over the piece-wise constant acquisition function.
Our framework performs as well as state-of-the-art methods for unconstrained black-box optimization over continuous/discrete features and outperforms competing methods for problems combining mixed-variable feature spaces and known input constraints.
arXiv Detail & Related papers (2022-07-02T16:59:37Z)
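Both challenges can be made concrete with a generic sketch (not this paper's tree-ensemble kernels): a random forest surrogate whose across-tree spread serves as the uncertainty estimate, with the piecewise-constant acquisition handled by enumerating a random candidate pool.

```python
# Sketch: random forest surrogate for black-box minimization.  The
# spread across trees stands in for model uncertainty, and the
# piecewise-constant acquisition (lower confidence bound) is optimized
# by brute-force enumeration over random candidates.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
f = lambda x: np.sin(5 * x[:, 0]) + np.abs(x[:, 1] - 0.5)  # toy objective

X = rng.uniform(0, 1, size=(20, 2))            # initial design
y = f(X)
for _ in range(10):                            # BO iterations
    forest = RandomForestRegressor(n_estimators=100).fit(X, y)
    cand = rng.uniform(0, 1, size=(2000, 2))   # candidate pool
    per_tree = np.stack([t.predict(cand) for t in forest.estimators_])
    mu, sigma = per_tree.mean(axis=0), per_tree.std(axis=0)
    x_next = cand[np.argmin(mu - 2.0 * sigma)]  # LCB acquisition
    X = np.vstack([X, x_next])
    y = np.append(y, f(x_next[None, :]))
print("best value found:", y.min())
```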
- Surrogate modeling for Bayesian optimization beyond a single Gaussian process [62.294228304646516]
We propose a novel Bayesian surrogate model to balance exploration with exploitation of the search space.
To endow function sampling with scalability, random feature-based kernel approximation is leveraged per GP model.
To further establish convergence of the proposed EGP-TS to the global optimum, analysis is conducted based on the notion of Bayesian regret.
arXiv Detail & Related papers (2022-05-27T16:43:10Z)
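The random feature-based kernel approximation can be sketched directly. The block below shows the standard random Fourier feature approximation of an RBF kernel; the dimensions and lengthscale are illustrative, and this is not the EGP-TS code.

```python
# Sketch: random Fourier features approximating an RBF kernel
# k(x, x') = exp(-||x - x'||^2 / (2 * ls^2)).  With features phi(x),
# phi(x).dot(phi(x')) approximates k(x, x'), so GP function sampling
# reduces to Bayesian linear regression in feature space.
import numpy as np

rng = np.random.default_rng(0)
d, m, ls = 3, 500, 0.7        # input dim, number of features, lengthscale

W = rng.normal(0, 1.0 / ls, size=(m, d))   # spectral frequencies
b = rng.uniform(0, 2 * np.pi, size=m)      # random phases

def phi(X):
    # Feature map: phi(x) = sqrt(2/m) * cos(W x + b)
    return np.sqrt(2.0 / m) * np.cos(X @ W.T + b)

X = rng.normal(size=(5, d))
K_exact = np.exp(-((X[:, None] - X[None, :]) ** 2).sum(-1) / (2 * ls**2))
K_approx = phi(X) @ phi(X).T
print(np.abs(K_exact - K_approx).max())    # small for large m
```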
- Bayesian Optimisation for Constrained Problems [0.0]
We propose a novel variant of the well-known Knowledge Gradient acquisition function that allows it to handle constraints.
We empirically compare the new algorithm with four other state-of-the-art constrained Bayesian optimisation algorithms and demonstrate its superior performance.
arXiv Detail & Related papers (2021-05-27T15:43:09Z)
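A hedged sketch of the general constraint-handling idea: the classic feasibility-weighted expected improvement, shown only as a simpler stand-in for the paper's constrained Knowledge Gradient.

```python
# Sketch: feasibility-weighted expected improvement.  A GP models the
# objective and a second GP models the constraint g(x) <= 0; EI is
# multiplied by the modeled probability that the constraint holds.
# This is the generic constrained-EI heuristic, not the paper's cKG.
import numpy as np
from scipy.stats import norm

def constrained_ei(mu_f, sd_f, mu_g, sd_g, best_feasible):
    # Expected improvement over the best feasible observation so far.
    z = (best_feasible - mu_f) / sd_f
    ei = sd_f * (z * norm.cdf(z) + norm.pdf(z))
    p_feasible = norm.cdf((0.0 - mu_g) / sd_g)  # P(g(x) <= 0)
    return ei * p_feasible

# Hypothetical posterior values at three candidate points.
print(constrained_ei(np.array([1.0, 0.5, 0.2]),
                     np.array([0.3, 0.3, 0.3]),
                     np.array([-1.0, 0.0, 1.0]),
                     np.array([0.5, 0.5, 0.5]),
                     best_feasible=0.8))
```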
- Neural Process for Black-Box Model Optimization Under Bayesian Framework [7.455546102930911]
Black-box models are so named because they can only be observed through their inputs and outputs, without knowledge of their internal workings.
One powerful algorithm for solving such problems is Bayesian optimization, which can effectively estimate the model parameters that lead to the best performance.
However, it has been challenging for GPs to optimize black-box models that require many observation queries and/or have many parameters.
We propose a general Bayesian optimization algorithm that employs a Neural Process as the surrogate model to perform black-box model optimization.
arXiv Detail & Related papers (2021-04-03T23:35:26Z)
- Hyper-optimization with Gaussian Process and Differential Evolution Algorithm [0.0]
This paper presents specific modifications of Gaussian Process optimization components from available scientific libraries.
The presented modifications were submitted to the BlackBox 2020 challenge, where they outperformed some conventionally available optimization libraries.
arXiv Detail & Related papers (2021-01-26T08:33:00Z)
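A minimal sketch of one natural GP-plus-differential-evolution pairing (a generic combination, not the paper's specific library modifications): differential evolution optimizes a GP lower-confidence-bound acquisition at each step.

```python
# Sketch: use differential evolution to optimize a GP lower-confidence-
# bound acquisition inside a simple Bayesian optimization loop.
import numpy as np
from scipy.optimize import differential_evolution
from sklearn.gaussian_process import GaussianProcessRegressor

f = lambda x: (x - 0.3) ** 2 + 0.1 * np.sin(20 * x)  # toy objective
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(5, 1))
y = f(X).ravel()

for _ in range(10):
    gp = GaussianProcessRegressor(alpha=1e-6).fit(X, y)

    def lcb(x):                       # acquisition to minimize
        mu, sd = gp.predict(np.atleast_2d(x), return_std=True)
        return float(mu[0] - 2.0 * sd[0])

    res = differential_evolution(lcb, bounds=[(0.0, 1.0)], seed=0)
    X = np.vstack([X, [res.x]])
    y = np.append(y, f(res.x))

print("best value found:", y.min())
```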
- Zeroth-Order Hybrid Gradient Descent: Towards A Principled Black-Box Optimization Framework [100.36569795440889]
This work studies zeroth-order (ZO) optimization, which does not require first-order gradient information.
We show that with a careful design of coordinate importance sampling, the proposed ZO optimization method is efficient both in iteration complexity and in function query cost.
arXiv Detail & Related papers (2020-12-21T17:29:58Z)
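The basic device behind ZO optimization is a gradient estimate built from function queries alone. Below is a minimal sketch of the standard two-point random-direction estimator, a generic baseline rather than the paper's hybrid scheme with coordinate importance sampling.

```python
# Sketch: two-point zeroth-order gradient estimate and plain descent.
# grad f(x) ~ (f(x + mu*u) - f(x - mu*u)) / (2*mu) * u, averaged over
# random directions u; only function queries are needed.
import numpy as np

def zo_gradient(f, x, mu=1e-4, n_dirs=20, rng=np.random.default_rng(0)):
    g = np.zeros_like(x)
    for _ in range(n_dirs):
        u = rng.normal(size=x.shape)
        g += (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u
    return g / n_dirs

f = lambda x: np.sum((x - 1.0) ** 2)   # toy objective, queries only
x = np.zeros(5)
for _ in range(200):
    x -= 0.05 * zo_gradient(f, x)
print(x)  # approaches the minimizer (1, ..., 1)
```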
- Self-Tuning Stochastic Optimization with Curvature-Aware Gradient Filtering [53.523517926927894]
We explore the use of exact per-sample Hessian-vector products and gradients to construct self-tuning quadratics.
We prove that our model-based procedure converges in the noisy gradient setting.
This is an interesting step toward constructing self-tuning quadratics.
arXiv Detail & Related papers (2020-11-09T22:07:30Z)
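Hessian-vector products are typically computed without forming the Hessian. A minimal sketch using the finite-difference-of-gradients approximation follows; the paper itself uses exact per-sample products, e.g. from automatic differentiation.

```python
# Sketch: Hessian-vector product H(x) v approximated by a finite
# difference of gradients, avoiding the full Hessian.  Autodiff
# frameworks compute this product exactly instead.
import numpy as np

def hvp(grad_f, x, v, eps=1e-6):
    # H(x) v ~ (grad f(x + eps*v) - grad f(x - eps*v)) / (2*eps)
    return (grad_f(x + eps * v) - grad_f(x - eps * v)) / (2 * eps)

# Toy quadratic f(x) = 0.5 x^T A x with known Hessian A.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
grad_f = lambda x: A @ x
x, v = np.ones(2), np.array([1.0, -1.0])
print(hvp(grad_f, x, v))   # equals A @ v up to O(eps^2)
print(A @ v)
```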
- Stepwise Model Selection for Sequence Prediction via Deep Kernel Learning [100.83444258562263]
We propose a novel Bayesian optimization (BO) algorithm to tackle the challenge of model selection in this setting.
In order to solve the resulting multiple black-box function optimization problem jointly and efficiently, we exploit potential correlations among black-box functions.
We are the first to formulate the problem of stepwise model selection (SMS) for sequence prediction, and to design and demonstrate an efficient joint-learning algorithm for this purpose.
arXiv Detail & Related papers (2020-01-12T09:42:19Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.