Learning Regions of Interest for Bayesian Optimization with Adaptive
Level-Set Estimation
- URL: http://arxiv.org/abs/2307.13371v1
- Date: Tue, 25 Jul 2023 09:45:47 GMT
- Title: Learning Regions of Interest for Bayesian Optimization with Adaptive
Level-Set Estimation
- Authors: Fengxue Zhang, Jialin Song, James Bowden, Alexander Ladd, Yisong Yue,
Thomas A. Desautels, Yuxin Chen
- Abstract summary: We propose a framework, called BALLET, which adaptively filters for a high-confidence region of interest.
We show theoretically that BALLET can efficiently shrink the search space, and can exhibit a tighter regret bound than standard BO.
- Score: 84.0621253654014
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study Bayesian optimization (BO) in high-dimensional and non-stationary
scenarios. Existing algorithms for such scenarios typically require extensive
hyperparameter tuning, which limits their practical effectiveness. We propose a
framework, called BALLET, which adaptively filters for a high-confidence region
of interest (ROI) as a superlevel-set of a nonparametric probabilistic model
such as a Gaussian process (GP). Our approach is easy to tune, and is able to
focus on a local region of the optimization space that can be tackled by existing
BO methods. The key idea is to use two probabilistic models: a coarse GP to
identify the ROI, and a localized GP for optimization within the ROI. We show
theoretically that BALLET can efficiently shrink the search space, and can
exhibit a tighter regret bound than standard BO without ROI filtering. We
demonstrate empirically the effectiveness of BALLET on both synthetic and
real-world optimization tasks.
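The two-model idea in the abstract (a coarse GP that filters a high-confidence ROI, and a localized GP that optimizes within it) can be illustrated with a minimal sketch. This is not the authors' implementation: the filtering rule used here (keep candidates whose upper confidence bound under the coarse GP exceeds the best lower confidence bound) is one plausible reading of the superlevel-set filter, and all function names, kernels, and hyperparameters below are illustrative assumptions.

```python
# Illustrative ROI filtering via a coarse GP (assumed scheme, not BALLET's code).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF


def roi_filter(X_obs, y_obs, X_cand, beta=2.0):
    """Keep candidates whose coarse-GP UCB exceeds the best LCB,
    i.e. a high-confidence superlevel set likely to contain the optimum."""
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), alpha=1e-6)
    gp.fit(X_obs, y_obs)
    mu, sigma = gp.predict(X_cand, return_std=True)
    ucb = mu + beta * sigma
    lcb = mu - beta * sigma
    threshold = lcb.max()       # best lower confidence bound over candidates
    mask = ucb >= threshold     # plausible-maximizer region
    return X_cand[mask], mask


# Toy 1-D maximization problem to exercise the filter.
rng = np.random.default_rng(0)
f = lambda x: -np.sin(3 * x) - x**2 + 0.7 * x
X_obs = rng.uniform(-2, 2, size=(12, 1))
y_obs = f(X_obs).ravel()
X_cand = np.linspace(-2, 2, 200).reshape(-1, 1)
roi, mask = roi_filter(X_obs, y_obs, X_cand)
```

A localized GP (e.g. with a shorter length scale) would then be fit on observations inside `roi` and drive the acquisition step, restricted to the filtered region.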
Related papers
- Localized Zeroth-Order Prompt Optimization [54.964765668688806]
We propose a novel algorithm, namely localized zeroth-order prompt optimization (ZOPO).
ZOPO incorporates a Gaussian process derived from the Neural Tangent Kernel into standard zeroth-order optimization for an efficient search of well-performing local optima in prompt optimization.
Remarkably, ZOPO outperforms existing baselines in terms of both the optimization performance and the query efficiency.
arXiv Detail & Related papers (2024-03-05T14:18:15Z) - High-dimensional Bayesian Optimization via Covariance Matrix Adaptation
Strategy [16.521207412129833]
We propose a novel technique for defining the local regions using the Covariance Matrix Adaptation (CMA) strategy.
Based on this search distribution, we then define the local regions consisting of data points with high probabilities of being the global optimum.
Our approach serves as a meta-algorithm, as it can incorporate existing black-box BO algorithms, such as standard BO, TuRBO, and BAxUS, to find the global optimum.
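The local-region construction described above (keep candidates with high probability under a CMA-style Gaussian search distribution) can be sketched as follows. This is an illustrative assumption about the mechanism, not the paper's code; the mean, covariance, and confidence level are made-up values, and the region is taken to be a chi-square confidence ellipsoid of the search distribution.

```python
# Illustrative sketch: a local BO region as the high-density set of a
# CMA-style Gaussian search distribution N(mean, cov).
import numpy as np
from scipy.stats import chi2


def cma_local_region(X_cand, mean, cov, confidence=0.9):
    """Keep candidates inside the `confidence` ellipsoid of N(mean, cov)."""
    diff = X_cand - mean
    # Squared Mahalanobis distance of each candidate from the mean.
    d2 = np.einsum("ij,jk,ik->i", diff, np.linalg.inv(cov), diff)
    return X_cand[d2 <= chi2.ppf(confidence, df=X_cand.shape[1])]


mean = np.array([0.5, -0.2])                  # illustrative CMA mean
cov = np.array([[0.10, 0.02], [0.02, 0.05]])  # illustrative CMA covariance
X_cand = np.random.default_rng(1).uniform(-1, 1, size=(500, 2))
region = cma_local_region(X_cand, mean, cov)
```

A wrapped BO method (e.g. TuRBO) would then restrict its acquisition optimization to `region`, while CMA updates `mean` and `cov` between iterations.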
arXiv Detail & Related papers (2024-02-05T15:32:10Z) - Poisson Process for Bayesian Optimization [126.51200593377739]
We propose a ranking-based surrogate model based on the Poisson process and introduce an efficient BO framework, namely Poisson Process Bayesian Optimization (PoPBO).
Compared to the classic GP-BO method, our PoPBO has lower costs and better robustness to noise, which is verified by abundant experiments.
arXiv Detail & Related papers (2024-02-05T02:54:50Z) - LABCAT: Locally adaptive Bayesian optimization using principal-component-aligned trust regions [0.0]
We propose the LABCAT algorithm, which extends trust-region-based BO.
We show that the algorithm outperforms several state-of-the-art BO and other black-box optimization algorithms.
arXiv Detail & Related papers (2023-11-19T13:56:24Z) - CARE: Confidence-rich Autonomous Robot Exploration using Bayesian Kernel
Inference and Optimization [12.32946442160165]
We consider improving the efficiency of information-based autonomous robot exploration in unknown and complex environments.
We propose a novel lightweight information gain inference method based on Bayesian kernel inference and optimization (BKIO).
We show the desired efficiency of our proposed methods without losing exploration performance in different unstructured, cluttered environments.
arXiv Detail & Related papers (2023-09-11T02:30:06Z) - Pre-training helps Bayesian optimization too [49.28382118032923]
We seek an alternative practice for setting functional priors.
In particular, we consider the scenario where we have data from similar functions that allow us to pre-train a tighter distribution a priori.
Our results show that our method is able to locate good hyperparameters at least 3 times more efficiently than the best competing methods.
arXiv Detail & Related papers (2022-07-07T04:42:54Z) - Sparse Bayesian Optimization [16.867375370457438]
We present several regularization-based approaches that allow us to discover sparse and more interpretable configurations.
We propose a novel differentiable relaxation based on homotopy continuation that makes it possible to target sparsity.
We show that we are able to efficiently optimize for sparsity.
arXiv Detail & Related papers (2022-03-03T18:25:33Z) - Approximate Bayesian Optimisation for Neural Networks [6.921210544516486]
A body of work has been done to automate machine learning algorithms and to highlight the importance of model choice.
Addressing analytical tractability and computational feasibility together is necessary to ensure both the efficiency and the applicability of the approach.
arXiv Detail & Related papers (2021-08-27T19:03:32Z) - Learning Space Partitions for Path Planning [54.475949279050596]
PlaLaM outperforms existing path planning methods in 2D navigation tasks, especially in the presence of difficult-to-escape local optima.
These gains transfer to highly multimodal real-world tasks, where we outperform strong baselines in compiler phase ordering by up to 245% and in molecular design by up to 0.4 on properties on a 0-1 scale.
arXiv Detail & Related papers (2021-06-19T18:06:11Z) - Stochastic Optimization of Areas Under Precision-Recall Curves with
Provable Convergence [66.83161885378192]
Areas under the ROC curve (AUROC) and the precision-recall curve (AUPRC) are common metrics for evaluating classification performance on imbalanced problems.
We propose a technical method to optimize AUPRC for deep learning.
arXiv Detail & Related papers (2021-04-18T06:22:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.