Two-step Lookahead Bayesian Optimization with Inequality Constraints
- URL: http://arxiv.org/abs/2112.02833v1
- Date: Mon, 6 Dec 2021 07:40:54 GMT
- Title: Two-step Lookahead Bayesian Optimization with Inequality Constraints
- Authors: Yunxiang Zhang, Xiangyu Zhang, Peter I. Frazier
- Abstract summary: We propose a two-step lookahead constrained Bayesian optimization acquisition function (2-OPT-C) supporting both sequential and batch settings.
In numerical experiments, 2-OPT-C typically improves query efficiency by 2x or more over previous methods, and in some cases by 10x or more.
- Score: 21.703234193908038
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent advances in computationally efficient non-myopic Bayesian optimization
(BO) improve query efficiency over traditional myopic methods like expected
improvement while only modestly increasing computational cost. These advances
have been largely limited, however, to unconstrained optimization. For
constrained optimization, the few existing non-myopic BO methods require heavy
computation. For instance, one existing non-myopic constrained BO method [Lam
and Willcox, 2017] relies on computationally expensive, unreliable brute-force
derivative-free optimization of a Monte Carlo rollout acquisition function.
Methods that use the reparameterization trick for more efficient
derivative-based optimization of non-myopic acquisition functions in the
unconstrained setting, like sample average approximation and infinitesimal
perturbation analysis, do not extend: constraints introduce discontinuities in
the sampled acquisition function surface that hinder its optimization.
Moreover, we argue here that being non-myopic is even more important in
constrained problems because fear of violating constraints pushes myopic
methods away from sampling the boundary between feasible and infeasible
regions, slowing the discovery of optimal solutions with tight constraints. In
this paper, we propose a computationally efficient two-step lookahead
constrained Bayesian optimization acquisition function (2-OPT-C) supporting
both sequential and batch settings. To enable fast acquisition function
optimization, we develop a novel likelihood-ratio-based unbiased estimator of
the gradient of the two-step optimal acquisition function that does not use the
reparameterization trick. In numerical experiments, 2-OPT-C typically improves
query efficiency by 2x or more over previous methods, and in some cases by 10x
or more.
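
To make the gradient-estimation idea concrete, below is a minimal one-step sketch (not the paper's 2-OPT-C implementation; the function names and the toy 1-d posterior model are assumptions) of a likelihood-ratio / score-function estimator for an acquisition of the form A(x) = E_{y ~ N(mu(x), sigma(x)^2)}[h(y)]. Because this estimator differentiates the Gaussian sampling density rather than the samples themselves, the payoff h may be discontinuous, e.g., when it encodes a feasibility indicator, which is exactly the situation where reparameterization-based gradients break down.

```python
import numpy as np

def lr_acquisition_gradient(x, mu, dmu, sigma, dsigma, h, n_samples=100_000, seed=0):
    """Unbiased score-function estimate of d/dx E_{y ~ N(mu(x), sigma(x)^2)}[h(y)].

    Uses the identity  dA/dx = E[ h(y) * d/dx log N(y; mu(x), sigma(x)^2) ],
    which never differentiates h, so h may be discontinuous (e.g., an
    improvement term multiplied by a feasibility indicator).
    """
    rng = np.random.default_rng(seed)
    m, s = mu(x), sigma(x)
    y = rng.normal(m, s, size=n_samples)
    z = (y - m) / s
    # d/dx log N(y; m(x), s(x)^2) = (z / s) * m'(x) + ((z^2 - 1) / s) * s'(x)
    score = (z / s) * dmu(x) + ((z**2 - 1.0) / s) * dsigma(x)
    return float(np.mean(h(y) * score))

# Toy usage with a made-up 1-d posterior: the payoff rewards improvement over
# a hypothetical incumbent value of 0.5, but only on feasible samples (y < 0).
grad = lr_acquisition_gradient(
    x=0.3,
    mu=lambda x: np.sin(3.0 * x),      dmu=lambda x: 3.0 * np.cos(3.0 * x),
    sigma=lambda x: 0.5 + 0.1 * x**2,  dsigma=lambda x: 0.2 * x,
    h=lambda y: np.maximum(0.5 - y, 0.0) * (y < 0.0),
)
print(grad)
```

The paper's actual estimator targets the two-step optimal acquisition rather than this one-step toy, but the mechanism is the same: the gradient lands on the log-density, so discontinuities introduced by constraints never need to be differentiated.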
Related papers
- Bayesian Optimization for Non-Convex Two-Stage Stochastic Optimization Problems [2.9016548477524156]
We formulate a knowledge-gradient-based acquisition function to jointly optimize the first- and second-stage variables.
We show that differences in the dimension and length scales between the variable types can lead to inefficiencies of the two-step algorithm.
arXiv Detail & Related papers (2024-08-30T16:26:31Z) - BO4IO: A Bayesian optimization approach to inverse optimization with uncertainty quantification [5.031974232392534]
This work addresses data-driven inverse optimization (IO).
The goal is to estimate unknown parameters in an optimization model from observed decisions that can be assumed to be optimal or near-optimal.
arXiv Detail & Related papers (2024-05-28T06:52:17Z) - Accelerated First-Order Optimization under Nonlinear Constraints [73.2273449996098]
We exploit analogies between first-order algorithms for constrained optimization and non-smooth dynamical systems to design a new class of accelerated first-order algorithms.
An important property of these algorithms is that constraints are expressed in terms of velocities instead of positions.
arXiv Detail & Related papers (2023-02-01T08:50:48Z) - Generalizing Bayesian Optimization with Decision-theoretic Entropies [102.82152945324381]
We consider a generalization of Shannon entropy from work in statistical decision theory.
We first show that special cases of this entropy lead to popular acquisition functions used in BO procedures.
We then show how alternative choices for the loss yield a flexible family of acquisition functions.
arXiv Detail & Related papers (2022-10-04T04:43:58Z) - Faster Algorithm and Sharper Analysis for Constrained Markov Decision
Process [56.55075925645864]
The constrained Markov decision process (CMDP) problem is investigated, where an agent aims to maximize the expected accumulated discounted reward subject to multiple constraints.
A new primal-dual approach is proposed with a novel integration of three ingredients: an entropy-regularized policy optimizer, a dual variable regularizer, and Nesterov's accelerated gradient descent for the dual.
This is the first demonstration that nonconcave CMDP problems can attain the complexity lower bound of $\mathcal{O}(1/\epsilon)$ for convex optimization subject to convex constraints.
arXiv Detail & Related papers (2021-10-20T02:57:21Z) - Reducing the Variance of Gaussian Process Hyperparameter Optimization
with Preconditioning [54.01682318834995]
Preconditioning is a highly effective step for any iterative method involving matrix-vector multiplication.
We prove that preconditioning has an additional benefit that has been previously unexplored.
It can simultaneously reduce variance at essentially negligible cost.
arXiv Detail & Related papers (2021-07-01T06:43:11Z) - An Efficient Batch Constrained Bayesian Optimization Approach for Analog
Circuit Synthesis via Multi-objective Acquisition Ensemble [11.64233949999656]
We propose an efficient parallelizable Bayesian optimization algorithm via Multi-objective ACquisition function Ensemble (MACE).
Our proposed algorithm can reduce the overall simulation time by up to 74 times compared to differential evolution (DE) for the unconstrained optimization problem when the batch size is 15.
For the constrained optimization problem, our proposed algorithm can speed up the optimization process by up to 15 times compared to the weighted expected improvement based Bayesian optimization (WEIBO) approach, when the batch size is 15.
arXiv Detail & Related papers (2021-06-28T13:21:28Z) - Ada-BKB: Scalable Gaussian Process Optimization on Continuous Domain by
Adaptive Discretization [21.859940486704264]
Algorithms such as GPUCB have prohibitive computational complexity.
A no-regret Gaussian process optimization algorithm for functions on continuous domains addresses this via adaptive discretization.
arXiv Detail & Related papers (2021-06-16T07:55:45Z) - Divide and Learn: A Divide and Conquer Approach for Predict+Optimize [50.03608569227359]
The predict+optimize problem combines machine learning of problem coefficients with an optimization problem that uses the predicted coefficients.
We show how to directly express the loss of the optimization problem in terms of the predicted coefficients as a piecewise linear function.
We propose a novel divide-and-conquer algorithm to tackle optimization problems without this restriction and predict its coefficients using the optimization loss.
arXiv Detail & Related papers (2020-12-04T00:26:56Z) - Using models to improve optimizers for variational quantum algorithms [1.7475326826331605]
Variational quantum algorithms are a leading candidate for early applications on noisy intermediate-scale quantum computers.
These algorithms depend on a classical optimization outer-loop that minimizes some function of a parameterized quantum circuit.
We introduce two optimization methods and numerically compare their performance with common methods in use today.
arXiv Detail & Related papers (2020-05-22T05:23:23Z) - Global Optimization of Gaussian processes [52.77024349608834]
We propose a reduced-space formulation with Gaussian processes trained on few data points.
The approach also leads to significantly smaller and computationally cheaper subproblems for lower bounding.
In total, the proposed method reduces convergence time by orders of magnitude.
arXiv Detail & Related papers (2020-05-21T20:59:11Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.