Multi-Fidelity Bayesian Optimization with Unreliable Information Sources
- URL: http://arxiv.org/abs/2210.13937v1
- Date: Tue, 25 Oct 2022 11:47:33 GMT
- Title: Multi-Fidelity Bayesian Optimization with Unreliable Information Sources
- Authors: Petrus Mikkola, Julien Martinelli, Louis Filstroff, Samuel Kaski
- Abstract summary: We propose rMFBO (robust MFBO) to make GP-based MFBO schemes robust to the addition of unreliable information sources.
We demonstrate the effectiveness of the proposed methodology on a number of numerical benchmarks.
We expect rMFBO to be particularly useful to reliably include human experts with varying knowledge within BO processes.
- Score: 12.509709549771385
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Bayesian optimization (BO) is a powerful framework for optimizing black-box,
expensive-to-evaluate functions. Over the past decade, many algorithms have
been proposed to integrate cheaper, lower-fidelity approximations of the
objective function into the optimization process, with the goal of converging
towards the global optimum at a reduced cost. This task is generally referred
to as multi-fidelity Bayesian optimization (MFBO). However, MFBO algorithms can
lead to higher optimization costs than their vanilla BO counterparts,
especially when the low-fidelity sources are poor approximations of the
objective function, therefore defeating their purpose. To address this issue,
we propose rMFBO (robust MFBO), a methodology to make any GP-based MFBO scheme
robust to the addition of unreliable information sources. rMFBO comes with a
theoretical guarantee that its performance can be bounded by that of its
vanilla BO analog, with controllably high probability. We demonstrate the effectiveness of
the proposed methodology on a number of numerical benchmarks, outperforming
earlier MFBO methods on unreliable sources. We expect rMFBO to be particularly
useful to reliably include human experts with varying knowledge within BO
processes.
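To make the MFBO setting concrete, below is a minimal sketch in Python (not the paper's rMFBO algorithm): a single GP is fit over (x, fidelity) pairs, and each iteration queries whichever source maximizes expected improvement per unit cost. The toy objective, kernel, and cost values are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def f(x, fidelity):
    """Toy objective: fidelity 1 is exact; fidelity 0 is a cheap, biased proxy."""
    return -np.sin(3 * x) - x**2 + 0.7 * x - (1 - fidelity) * 0.3 * np.cos(5 * x)

costs = {0: 0.1, 1: 1.0}                      # low-fidelity queries are 10x cheaper
rng = np.random.default_rng(0)
X, y = [], []
for x0 in rng.uniform(-1, 2, size=3):         # seed with high-fidelity points
    X.append([x0, 1.0]); y.append(f(x0, 1))

gp = GaussianProcessRegressor(kernel=RBF(length_scale=[0.5, 0.5]), normalize_y=True)
for _ in range(20):
    gp.fit(np.array(X), np.array(y))
    best = max(y)                             # incumbent (best value so far)
    cand_x = np.linspace(-1, 2, 200)
    best_score, best_query = -np.inf, None
    for fid in (0, 1):
        Z = np.column_stack([cand_x, np.full_like(cand_x, float(fid))])
        mu, sd = gp.predict(Z, return_std=True)
        z = (mu - best) / (sd + 1e-9)
        ei = (mu - best) * norm.cdf(z) + sd * norm.pdf(z)   # expected improvement
        i = int(np.argmax(ei / costs[fid]))   # EI per unit cost
        if ei[i] / costs[fid] > best_score:
            best_score, best_query = ei[i] / costs[fid], (cand_x[i], fid)
    x_next, fid_next = best_query
    X.append([x_next, float(fid_next)]); y.append(f(x_next, fid_next))

print("best high-fidelity value:", max(v for (xi, fi), v in zip(X, y) if fi == 1.0))
```

In this sketch, an unreliable low-fidelity source is one whose bias term misleads the GP into wasting budget on cheap queries, which is precisely the failure mode rMFBO is designed to guard against.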
Related papers
- Cost-Sensitive Multi-Fidelity Bayesian Optimization with Transfer of Learning Curve Extrapolation [55.75188191403343]
We introduce a utility function, predefined by each user, that describes the trade-off between the cost and performance of BO.
We validate our algorithm on various learning-curve (LC) datasets and find that it outperforms all previous multi-fidelity BO and transfer-BO baselines we consider.
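A minimal sketch of what such a user-defined utility could look like; the linear form and the weight `lam` are illustrative assumptions, not the paper's definition.

```python
def utility(performance: float, cost: float, lam: float = 0.1) -> float:
    """Higher is better: reward predicted performance, penalize accumulated cost."""
    return performance - lam * cost

# A cheap medium-accuracy query can score higher than a costly marginal gain:
print(utility(0.85, cost=1.0))   # 0.75
print(utility(0.87, cost=5.0))   # 0.37
```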
arXiv Detail & Related papers (2024-05-28T07:38:39Z)
- Large Language Models to Enhance Bayesian Optimization [57.474613739645605]
We present LLAMBO, a novel approach that integrates the capabilities of Large Language Models (LLMs) within Bayesian optimization.
At a high level, we frame the BO problem in natural language, enabling LLMs to iteratively propose and evaluate promising solutions conditioned on historical evaluations.
Our findings illustrate that LLAMBO is effective at zero-shot warmstarting, and enhances surrogate modeling and candidate sampling, especially in the early stages of search when observations are sparse.
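A conceptual sketch of that natural-language framing: past evaluations are serialized into a prompt, and an LLM is asked for the next candidate. Here `query_llm` is a hypothetical stand-in for any chat-completion client, and the prompt wording is an assumption, not LLAMBO's actual protocol.

```python
def build_prompt(history):
    """Serialize past (x, score) evaluations into a natural-language prompt."""
    lines = [f"x={x:.3f}, score={s:.3f}" for x, s in history]
    return (
        "You are assisting a Bayesian optimization loop.\n"
        "Past evaluations:\n" + "\n".join(lines) + "\n"
        "Propose one new x in [0, 1] likely to improve on the best score. "
        "Reply with the number only."
    )

def query_llm(prompt: str) -> str:
    raise NotImplementedError("hypothetical hook: plug in an LLM client here")

def propose_candidate(history) -> float:
    return float(query_llm(build_prompt(history)))

print(build_prompt([(0.2, 0.61), (0.8, 0.35), (0.5, 0.72)]))
```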
arXiv Detail & Related papers (2024-02-06T11:44:06Z)
- Poisson Process for Bayesian Optimization [126.51200593377739]
We propose a ranking-based surrogate model based on the Poisson process and introduce an efficient BO framework, namely Poisson Process Bayesian Optimization (PoPBO).
Compared to the classic GP-BO method, PoPBO has lower costs and better robustness to noise, as verified by extensive experiments.
arXiv Detail & Related papers (2024-02-05T02:54:50Z)
- A General Framework for User-Guided Bayesian Optimization [51.96352579696041]
We propose ColaBO, the first Bayesian-principled framework for prior beliefs beyond the typical kernel structure.
We empirically demonstrate ColaBO's ability to substantially accelerate optimization when the prior information is accurate, and to approximately retain the performance of default BO when it is misleading.
arXiv Detail & Related papers (2023-11-24T18:27:26Z)
- Multi-fidelity Bayesian Optimization in Engineering Design [3.9160947065896803]
A survey of multi-fidelity optimization (MFO) combined with Bayesian optimization (BO).
MF BO has found a niche in solving expensive engineering design optimization problems.
The survey covers recent developments of two essential ingredients of MF BO: GP-based MF surrogates and acquisition functions.
arXiv Detail & Related papers (2023-11-21T23:22:11Z)
- LABCAT: Locally adaptive Bayesian optimization using principal-component-aligned trust regions [0.0]
We propose the LABCAT algorithm, which extends trust-region-based BO.
We show that the algorithm outperforms several state-of-the-art BO and other black-box optimization algorithms.
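A minimal sketch of the principal-component alignment named in the title: rotate the local search box so its axes match the principal components of the best points observed so far. This illustrates only the alignment idea and omits LABCAT's actual trust-region management.

```python
import numpy as np

def pca_aligned_sampler(points, center, radius=0.5):
    """Sample a box trust region rotated to align with the data's principal axes."""
    Xc = points - points.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)   # rows are principal axes
    def sample(n, rng=np.random.default_rng(0)):
        local = rng.uniform(-radius, radius, size=(n, points.shape[1]))
        return center + local @ Vt                      # box in the rotated frame
    return sample

best_points = np.array([[0.10, 0.20], [0.15, 0.35], [0.20, 0.50]])  # near-collinear
sampler = pca_aligned_sampler(best_points, center=best_points[-1])
print(sampler(3))   # candidates drawn in the rotated frame
```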
arXiv Detail & Related papers (2023-11-19T13:56:24Z)
- Bayesian Optimization for Function Compositions with Applications to Dynamic Pricing [0.0]
We propose a practical BO method for function compositions where the form of the composition is known but the constituent functions are expensive to evaluate.
We demonstrate a novel application to dynamic pricing in revenue management when the underlying demand function is expensive to evaluate.
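A hedged sketch of that composite setting: the outer function g is known and cheap (think revenue as a function of demand), the inner function h is expensive and modeled with a GP, and candidates are scored by Monte Carlo estimates of g(h(x)). The toy functions and names are assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

g = lambda h: -(h - 0.3) ** 2            # known outer function (cheap to evaluate)
h_true = lambda x: np.sin(2 * x)         # expensive inner function (e.g. demand)

X = np.array([[0.0], [0.8], [1.6]])      # a few expensive evaluations of h
gp = GaussianProcessRegressor(normalize_y=True).fit(X, h_true(X).ravel())

cand = np.linspace(0, 2, 100).reshape(-1, 1)
samples = gp.sample_y(cand, n_samples=64, random_state=0)   # posterior draws of h
acq = g(samples).mean(axis=1)            # Monte Carlo estimate of E[g(h(x))]
print("next query:", cand[np.argmax(acq), 0])
```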
arXiv Detail & Related papers (2023-03-21T15:45:06Z)
- Model-based Causal Bayesian Optimization [78.120734120667]
We propose model-based causal Bayesian optimization (MCBO).
MCBO learns a full system model instead of only modeling intervention-reward pairs.
Unlike in standard Bayesian optimization, our acquisition function cannot be evaluated in closed form.
arXiv Detail & Related papers (2022-11-18T14:28:21Z)
- A General Recipe for Likelihood-free Bayesian Optimization [115.82591413062546]
We propose likelihood-free BO (LFBO) to extend BO to a broader class of models and utilities.
LFBO directly models the acquisition function without having to separately perform inference with a probabilistic surrogate model.
We show that computing the acquisition function in LFBO can be reduced to optimizing a weighted classification problem.
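A hedged sketch of that reduction for an expected-improvement-like utility: train a probabilistic classifier to separate observations above a threshold tau, weight the positives by their improvement y - tau, and use the predicted probability as the acquisition. Details are simplified relative to the paper; the quadratic features are an assumption to keep the classifier expressive enough in one dimension.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(40, 1))
y = -(X.ravel() ** 2) + 0.1 * rng.normal(size=40)   # toy objective, best near x=0

tau = np.quantile(y, 0.8)                           # improvement threshold
labels = (y > tau).astype(int)
weights = np.where(labels == 1, y - tau, 1.0)       # positives weighted by improvement

feats = lambda A: np.column_stack([A, A ** 2])      # quadratic features
clf = LogisticRegression().fit(feats(X), labels, sample_weight=weights)

cand = np.linspace(-2, 2, 200).reshape(-1, 1)
acq = clf.predict_proba(feats(cand))[:, 1]          # acquisition = P(above tau | x)
print("next query:", cand[np.argmax(acq), 0])
```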
arXiv Detail & Related papers (2022-06-27T03:55:27Z)
- $\pi$BO: Augmenting Acquisition Functions with User Beliefs for Bayesian Optimization [40.30019289383378]
We propose $\pi$BO, an acquisition function generalization which incorporates prior beliefs about the location of the optimum.
In contrast to previous approaches, $\pi$BO is conceptually simple and can easily be integrated with existing libraries and many acquisition functions.
We also demonstrate that $\pi$BO improves on state-of-the-art performance for a popular deep learning task, with a 12.5$\times$ time-to-accuracy speedup over prominent BO approaches.
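A hedged sketch of the core idea: multiply a standard acquisition by a user prior over the optimum's location, with the prior's influence decaying as observations accumulate. The beta/n decay follows the paper's high-level description; all constants here are illustrative.

```python
import numpy as np
from scipy.stats import norm

def pi_acquisition(acq_values, prior_pdf, n_obs, beta=4.0):
    """Weight the acquisition by prior^(beta / n): strong early, fading later."""
    return acq_values * prior_pdf ** (beta / max(n_obs, 1))

cand = np.linspace(0, 1, 200)
acq = np.exp(-(cand - 0.7) ** 2 / 0.02)        # stand-in acquisition, peaked at 0.7
prior = norm.pdf(cand, loc=0.3, scale=0.1)     # user believes the optimum is near 0.3

early = pi_acquisition(acq, prior, n_obs=1)    # prior pulls the proposal toward 0.3
late = pi_acquisition(acq, prior, n_obs=500)   # prior has essentially worn off
print(cand[np.argmax(early)], cand[np.argmax(late)])   # ~0.38 vs ~0.70
```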
arXiv Detail & Related papers (2022-04-23T11:07:13Z)
- Multi-Fidelity Bayesian Optimization via Deep Neural Networks [19.699020509495437]
In many applications, the objective function can be evaluated at multiple fidelities to enable a trade-off between cost and accuracy.
We propose Deep Neural Network Multi-Fidelity Bayesian Optimization (DNN-MFBO) that can flexibly capture all kinds of complicated relationships between the fidelities.
We show the advantages of our method in both synthetic benchmark datasets and real-world applications in engineering design.
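A much-simplified sketch of chaining surrogates across fidelities in the spirit of DNN-MFBO: the high-fidelity model takes both x and the low-fidelity prediction as input, so nonlinear relationships between fidelities can be captured. MLPRegressor stands in for the paper's deep networks; the toy functions are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(60, 1))
y_lo = np.sin(3 * X).ravel()                          # cheap low-fidelity source
y_hi = np.sin(3 * X).ravel() ** 2 + 0.2 * X.ravel()   # nonlinearly related target

net_lo = MLPRegressor(hidden_layer_sizes=(32,), max_iter=3000, random_state=0)
net_lo.fit(X, y_lo)
X_aug = np.column_stack([X, net_lo.predict(X)])       # augment with low-fid output
net_hi = MLPRegressor(hidden_layer_sizes=(32,), max_iter=3000, random_state=0)
net_hi.fit(X_aug, y_hi)

x_test = np.array([[0.5]])
pred = net_hi.predict(np.column_stack([x_test, net_lo.predict(x_test)]))
print("high-fidelity prediction at x=0.5:", pred[0])
```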
arXiv Detail & Related papers (2020-07-06T23:28:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.