Robust Bayesian optimization with reinforcement learned acquisition functions
- URL: http://arxiv.org/abs/2210.00476v1
- Date: Sun, 2 Oct 2022 09:59:06 GMT
- Title: Robust Bayesian optimization with reinforcement learned acquisition functions
- Authors: Zijing Liu, Xiyao Qu, Xuejun Liu, and Hongqiang Lyu
- Abstract summary: In Bayesian optimization, the acquisition function (AF) guides sequential sampling and plays a pivotal role in efficient convergence to better optima.
To address this crux, the idea of data-driven AF selection is proposed.
The sequential AF selection task is formalized as a Markov decision process (MDP) and solved with powerful reinforcement learning (RL) techniques.
- Score: 4.05984965639419
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In Bayesian optimization (BO) for expensive black-box optimization tasks,
the acquisition function (AF) guides sequential sampling and plays a pivotal role
in efficient convergence to better optima. Prevailing AFs usually encode
hand-crafted preferences for exploration or exploitation, which risks
computational waste or entrapment in local optima and the re-optimization that
follows. To address this crux, the idea of data-driven AF selection is proposed:
the sequential AF selection task is formalized as a Markov decision process
(MDP) and solved with powerful reinforcement learning (RL) techniques. An
appropriate AF selection policy is learned from superior BO trajectories to
balance exploration and exploitation in real time; the resulting method is
called reinforcement-learning-assisted Bayesian optimization (RLABO).
Competitive and robust BO results on five benchmark problems demonstrate RL's
recognition of the implicit AF selection pattern and suggest the proposal's
practicality for intelligent AF selection as well as efficient optimization of
expensive black-box problems.
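The loop sketched in the abstract, a surrogate-guided sampler whose acquisition function is chosen anew at each step by a learned policy, can be caricatured in code. This is a minimal illustrative sketch, not the authors' implementation: the paper trains the selection policy with RL on BO trajectories, whereas here a simple epsilon-greedy bandit over three standard AFs (EI, PI, UCB) stands in for the learned policy, and all function names are hypothetical.

```python
# Sketch: Bayesian optimization with per-step acquisition-function (AF)
# selection. An epsilon-greedy bandit stands in for the paper's RL policy.
import numpy as np
from math import erf

def rbf_kernel(a, b, ls=0.3):
    """Squared-exponential kernel on 1-D inputs."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_posterior(X, y, Xs, noise=1e-6):
    """GP posterior mean and std at candidate points Xs (zero prior mean)."""
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    Ks = rbf_kernel(X, Xs)
    Kinv = np.linalg.inv(K)              # fine for the tiny K used here
    mu = Ks.T @ Kinv @ y
    var = np.diag(rbf_kernel(Xs, Xs) - Ks.T @ Kinv @ Ks)
    return mu, np.sqrt(np.clip(var, 1e-12, None))

phi = lambda z: np.exp(-0.5 * z * z) / np.sqrt(2 * np.pi)          # N(0,1) pdf
Phi = lambda z: 0.5 * (1.0 + np.array([erf(v / np.sqrt(2.0))
                                       for v in np.atleast_1d(z)]))  # N(0,1) cdf

# Three standard AFs (maximization convention).
AFS = {
    "EI":  lambda mu, s, best: (mu - best) * Phi((mu - best) / s)
                               + s * phi((mu - best) / s),
    "PI":  lambda mu, s, best: Phi((mu - best) / s),
    "UCB": lambda mu, s, best: mu + 2.0 * s,
}

def rlabo_sketch(f, iters=12, seed=0):
    """BO loop; the 'action' each step is which AF to maximize next."""
    rng = np.random.default_rng(seed)
    Xs = np.linspace(0.0, 1.0, 201)              # candidate pool on [0, 1]
    X = np.array([0.0, 1.0])
    y = f(X)
    names = list(AFS)
    value = {n: 0.0 for n in names}              # running reward per AF
    count = {n: 0 for n in names}
    for _ in range(iters):
        # Epsilon-greedy stand-in for the learned selection policy.
        if rng.random() < 0.2:
            name = names[rng.integers(len(names))]
        else:
            name = max(names, key=lambda n: value[n])
        mu, s = gp_posterior(X, y, Xs)
        x_next = Xs[int(np.argmax(AFS[name](mu, s, y.max())))]
        y_next = f(np.array([x_next]))[0]
        # 'Reward' for the chosen AF: immediate improvement over incumbent.
        r = max(y_next - y.max(), 0.0)
        count[name] += 1
        value[name] += (r - value[name]) / count[name]
        X = np.append(X, x_next)
        y = np.append(y, y_next)
    return X, y
```

On a smooth 1-D toy objective such as `f(x) = -(x - 0.6)**2`, the loop concentrates samples near the optimum; the bandit's per-AF reward averages are the crude analogue of the value function an RL agent would learn from whole BO trajectories.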
Related papers
- Discovering Preference Optimization Algorithms with and for Large Language Models [50.843710797024805]
Offline preference optimization is a key method for enhancing and controlling the quality of Large Language Model (LLM) outputs.
We perform objective discovery to automatically discover new state-of-the-art preference optimization algorithms without (expert) human intervention.
Experiments demonstrate the state-of-the-art performance of DiscoPOP, a novel algorithm that adaptively blends logistic and exponential losses.
arXiv Detail & Related papers (2024-06-12T16:58:41Z) - FunBO: Discovering Acquisition Functions for Bayesian Optimization with FunSearch [21.41322548859776]
We show how FunBO can be used to learn new acquisition functions written in computer code.
We show how FunBO identifies AFs that generalize well in and out of the training distribution of functions.
arXiv Detail & Related papers (2024-06-07T10:49:59Z) - Cost-Sensitive Multi-Fidelity Bayesian Optimization with Transfer of Learning Curve Extrapolation [55.75188191403343]
We introduce utility, which is a function predefined by each user and describes the trade-off between cost and performance of BO.
We validate our algorithm on various LC datasets and find that it outperforms all previous multi-fidelity BO and transfer-BO baselines we consider.
arXiv Detail & Related papers (2024-05-28T07:38:39Z) - Localized Zeroth-Order Prompt Optimization [54.964765668688806]
We propose a novel algorithm, namely localized zeroth-order prompt optimization (ZOPO)
ZOPO incorporates a Neural Tangent Kernel-based derived Gaussian process into standard zeroth-order optimization for an efficient search of well-performing local optima in prompt optimization.
Remarkably, ZOPO outperforms existing baselines in terms of both the optimization performance and the query efficiency.
arXiv Detail & Related papers (2024-03-05T14:18:15Z) - Enhanced Bayesian Optimization via Preferential Modeling of Abstract Properties [49.351577714596544]
We propose a human-AI collaborative Bayesian framework to incorporate expert preferences about unmeasured abstract properties into surrogate modeling.
We provide an efficient strategy that can also handle any incorrect/misleading expert bias in preferential judgments.
arXiv Detail & Related papers (2024-02-27T09:23:13Z) - Poisson Process for Bayesian Optimization [126.51200593377739]
We propose a ranking-based surrogate model based on the Poisson process and introduce an efficient BO framework, namely Poisson Process Bayesian Optimization (PoPBO)
Compared to the classic GP-BO method, our PoPBO has lower costs and better robustness to noise, which is verified by abundant experiments.
arXiv Detail & Related papers (2024-02-05T02:54:50Z) - Unleashing the Potential of Acquisition Functions in High-Dimensional Bayesian Optimization [5.349207553730357]
Bayesian optimization is widely used to optimize expensive-to-evaluate black-box functions.
In high-dimensional problems, finding the global maximum of the acquisition function can be difficult.
We propose a better approach by employing multiple data points to leverage the historical capability of black-box optimization.
arXiv Detail & Related papers (2023-02-16T13:56:32Z) - Towards Automated Design of Bayesian Optimization via Exploratory Landscape Analysis [11.143778114800272]
We show that a dynamic selection of the AF can benefit the BO design.
We pave the way towards AutoML-assisted, on-the-fly BO designs that adjust their behavior on a run-by-run basis.
arXiv Detail & Related papers (2022-11-17T17:15:04Z) - Bayesian Optimization for Selecting Efficient Machine Learning Models [53.202224677485525]
We present a unified Bayesian Optimization framework for jointly optimizing models for both prediction effectiveness and training efficiency.
Experiments on model selection for recommendation tasks indicate that models selected this way significantly improve model training efficiency.
arXiv Detail & Related papers (2020-08-02T02:56:30Z) - Resource Aware Multifidelity Active Learning for Efficient Optimization [0.8717253904965373]
This paper introduces the Resource Aware Active Learning (RAAL) strategy to accelerate the optimization of black box functions.
The RAAL strategy optimally seeds multiple points at each iteration, allowing a major speed-up of the optimization task.
arXiv Detail & Related papers (2020-07-09T10:01:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.