Triangulation candidates for Bayesian optimization
- URL: http://arxiv.org/abs/2112.07457v1
- Date: Tue, 14 Dec 2021 15:13:31 GMT
- Title: Triangulation candidates for Bayesian optimization
- Authors: Robert B. Gramacy, Annie Sauer, Nathan Wycoff
- Abstract summary: Bayesian optimization is a form of sequential design that idealizes input-output relationships with a suitably flexible regression model.
Here we propose using candidates based on a Delaunay triangulation of the existing input design, built with a simple wrapper around a conventional convex hull library.
- Score: 0.3222802562733786
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Bayesian optimization is a form of sequential design: idealize input-output
relationships with a suitably flexible nonlinear regression model; fit to data
from an initial experimental campaign; devise and optimize a criterion for
selecting the next experimental condition(s) under the fitted model (e.g., via
predictive equations) to target outcomes of interest (say minima); repeat after
acquiring output under those conditions and updating the fit. In many
situations this "inner optimization" over the new-data acquisition criterion is
cumbersome because it is non-convex/highly multi-modal, may be
non-differentiable, or may otherwise thwart numerical optimizers, especially
when inference requires Monte Carlo. In such cases it is not uncommon to
replace continuous search with a discrete one over random candidates. Here we
propose using candidates based on a Delaunay triangulation of the existing
input design. In addition to detailing construction of these "tricands", based
on a simple wrapper around a conventional convex hull library, we promote
several advantages based on properties of the geometric criterion involved. We
then demonstrate empirically how tricands can lead to better Bayesian
optimization performance compared to both numerically optimized acquisitions
and random candidate-based alternatives on benchmark problems.
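The "tricands" idea above can be sketched with standard tools: scipy's `Delaunay` is itself a wrapper around the Qhull convex hull library mentioned in the abstract. The sketch below is illustrative only (the function name `tricands` and the choice of simplex barycenters as candidates are assumptions, not the authors' exact construction, which also places candidates near the hull boundary).

```python
# Minimal sketch of Delaunay-based candidates for Bayesian optimization.
# Assumes numpy and scipy are available.
import numpy as np
from scipy.spatial import Delaunay

def tricands(X):
    """Candidates from a Delaunay triangulation of an n x d design X.

    Returns one interior candidate per simplex: the barycenter (the mean
    of the simplex's d+1 vertices). A discrete acquisition search can
    then be run over these points instead of random candidates.
    """
    tri = Delaunay(X)  # Qhull-backed triangulation of the existing design
    # X[tri.simplices] has shape (n_simplices, d+1, d); average the vertices
    return X[tri.simplices].mean(axis=1)

# usage: a 2-d design of 20 existing inputs in the unit square
rng = np.random.default_rng(0)
X = rng.random((20, 2))
cands = tricands(X)  # shape (n_simplices, 2), all inside the convex hull
```

Because every candidate lies strictly inside the hull of the design, the candidate set adapts to the geometry of the points already evaluated, which is the property the abstract promotes over uniformly random candidates.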
Related papers
- Predicting from Strings: Language Model Embeddings for Bayesian Optimization [21.370382766970877]
We propose Embed-then-Regress, a paradigm for applying in-context regression over string inputs.
By expressing all inputs as strings, we are able to perform general-purpose regression for optimization over various domains.
arXiv Detail & Related papers (2024-10-14T06:22:11Z)
- An incremental preference elicitation-based approach to learning potentially non-monotonic preferences in multi-criteria sorting [53.36437745983783]
We first construct a max-margin optimization-based model to represent potentially non-monotonic preferences.
We devise information amount measurement methods and question selection strategies to pinpoint the most informative alternative in each iteration.
Two incremental preference elicitation-based algorithms are developed to learn potentially non-monotonic preferences.
arXiv Detail & Related papers (2024-09-04T14:36:20Z)
- Simulation Based Bayesian Optimization [0.6526824510982799]
This paper introduces Simulation Based Bayesian Optimization (SBBO) as a novel approach to optimizing acquisition functions.
SBBO allows the use of surrogate models tailored for spaces with discrete variables.
We demonstrate empirically the effectiveness of SBBO using various choices of surrogate models.
arXiv Detail & Related papers (2024-01-19T16:56:11Z)
- Optimize-via-Predict: Realizing out-of-sample optimality in data-driven optimization [0.0]
We examine a formulation for data-driven optimization wherein the decision-maker is not privy to the true distribution.
We define a prescriptive solution as a decision rule mapping such a data set to decisions.
We present an optimization problem that solves for such an out-of-sample optimal solution, and does so efficiently via a combination of sampling and bisection-search algorithms.
arXiv Detail & Related papers (2023-09-20T08:48:50Z)
- Optimal Algorithms for Mean Estimation under Local Differential Privacy [55.32262879188817]
We show that PrivUnit achieves the optimal variance among a large family of locally private randomizers.
We also develop a new variant of PrivUnit based on the Gaussian distribution which is more amenable to mathematical analysis and enjoys the same optimality guarantees.
arXiv Detail & Related papers (2022-05-05T06:43:46Z)
- Fast Feature Selection with Fairness Constraints [49.142308856826396]
We study the fundamental problem of selecting optimal features for model construction.
This problem is computationally challenging on large datasets, even with the use of greedy algorithm variants.
We extend the adaptive query model, recently proposed for the greedy forward selection for submodular functions, to the faster paradigm of Orthogonal Matching Pursuit for non-submodular functions.
The proposed algorithm achieves exponentially fast parallel run time in the adaptive query model, scaling much better than prior work.
arXiv Detail & Related papers (2022-02-28T12:26:47Z)
- Bayesian Joint Chance Constrained Optimization: Approximations and Statistical Consistency [10.20554144865699]
We focus on the question of statistical consistency of the optimal value, computed using an approximate posterior distribution.
We also prove the feasibility of the approximate optimization problem.
We also demonstrate the utility of our approach on an optimal staffing problem for an M/M/c queueing model.
arXiv Detail & Related papers (2021-06-23T07:11:39Z)
- Local policy search with Bayesian optimization [73.0364959221845]
Reinforcement learning aims to find an optimal policy by interaction with an environment.
Policy gradients for local search are often obtained from random perturbations.
We develop an algorithm utilizing a probabilistic model of the objective function and its gradient.
arXiv Detail & Related papers (2021-06-22T16:07:02Z)
- Preference learning along multiple criteria: A game-theoretic perspective [97.94912276610002]
We generalize the notion of a von Neumann winner to the multi-criteria setting by taking inspiration from Blackwell's approachability.
Our framework allows for non-linear aggregation of preferences across criteria, and generalizes the linearization-based approach from multi-objective optimization.
We show that the Blackwell winner of a multi-criteria problem instance can be computed as the solution to a convex optimization problem.
arXiv Detail & Related papers (2021-05-05T03:23:11Z)
- Non-Adaptive Adaptive Sampling on Turnstile Streams [57.619901304728366]
We give the first relative-error algorithms for column subset selection, subspace approximation, projective clustering, and volume on turnstile streams that use space sublinear in $n$.
Our adaptive sampling procedure has a number of applications to various data summarization problems that either improve state-of-the-art or have only been previously studied in the more relaxed row-arrival model.
arXiv Detail & Related papers (2020-04-23T05:00:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed papers (including all information) and is not responsible for any consequences of their use.