New Paradigms for Exploiting Parallel Experiments in Bayesian
Optimization
- URL: http://arxiv.org/abs/2210.01071v2
- Date: Tue, 4 Oct 2022 16:54:26 GMT
- Title: New Paradigms for Exploiting Parallel Experiments in Bayesian
Optimization
- Authors: Leonardo D. González and Victor M. Zavala
- Abstract summary: We present new parallel BO paradigms that exploit the structure of the system to partition the design space.
Specifically, we propose an approach that partitions the design space by following the level sets of the performance function.
Our results show that our approaches significantly reduce the required search time and increase the probability of finding a global (rather than local) solution.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Bayesian optimization (BO) is one of the most effective methods for
closed-loop experimental design and black-box optimization. However, a key
limitation of BO is that it is an inherently sequential algorithm (one
experiment is proposed per round) and thus cannot directly exploit
high-throughput (parallel) experiments. Diverse modifications to the BO
framework have been proposed in the literature to enable exploitation of
parallel experiments but such approaches are limited in the degree of
parallelization that they can achieve and can lead to redundant experiments
(thus wasting resources and potentially compromising performance). In this
work, we present new parallel BO paradigms that exploit the structure of the
system to partition the design space. Specifically, we propose an approach that
partitions the design space by following the level sets of the performance
function and an approach that exploits partially separable structures found in the
performance function. We conduct extensive numerical experiments using a
reactor case study to benchmark the effectiveness of these approaches against a
variety of state-of-the-art parallel algorithms reported in the literature. Our
computational results show that our approaches significantly reduce the
required search time and increase the probability of finding a global (rather
than local) solution.
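To make the headline idea concrete, below is a minimal Python sketch of level-set-based design-space partitioning: a rough surrogate predicts performance over a candidate pool, and the candidates are split into quantile bands so that each parallel BO agent searches one band. This is not the authors' code; the function name, toy objective, and partition count are illustrative assumptions. A partially separable variant would instead assign disjoint groups of input variables to agents.

```python
# Minimal sketch of level-set-based design-space partitioning for parallel BO.
# Not the paper's implementation; `partition_by_level_sets`, the toy objective,
# and the partition count are illustrative assumptions.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def partition_by_level_sets(candidates, surrogate, n_partitions):
    """Split candidates into quantile bands of predicted performance,
    so each parallel BO agent explores within one level-set band."""
    mu = surrogate.predict(candidates)                       # predicted mean
    edges = np.quantile(mu, np.linspace(0.0, 1.0, n_partitions + 1))
    labels = np.clip(np.digitize(mu, edges[1:-1]), 0, n_partitions - 1)
    return [candidates[labels == k] for k in range(n_partitions)]

rng = np.random.default_rng(0)
X0 = rng.uniform(0.0, 1.0, size=(20, 2))                     # initial designs
y0 = -np.sum((X0 - 0.5) ** 2, axis=1)                        # toy performance
surrogate = GaussianProcessRegressor().fit(X0, y0)

pool = rng.uniform(0.0, 1.0, size=(1000, 2))                 # candidate pool
bands = partition_by_level_sets(pool, surrogate, n_partitions=4)
# Each band can now be handed to an independent BO loop running in parallel;
# keeping agents in separate bands avoids the redundant experiments that the
# abstract flags for naive parallel BO.
```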
Related papers
- Faster WIND: Accelerating Iterative Best-of-$N$ Distillation for LLM Alignment [81.84950252537618]
This paper reveals a unified game-theoretic connection between iterative BOND and self-play alignment.
We establish a novel framework, WIN rate Dominance (WIND), with a series of efficient algorithms for regularized win rate dominance optimization.
arXiv Detail & Related papers (2024-10-28T04:47:39Z) - Bayesian Experimental Design via Contrastive Diffusions [2.2186678387006435]
Bayesian Optimal Experimental Design (BOED) is a powerful tool for reducing the cost of running a sequence of experiments.
We introduce an expected posterior distribution with cost-effective properties and provide tractable access to the expected information gain (EIG) contrast.
By incorporating generative models into the BOED framework, we expand its scope and its use in scenarios that were previously impractical.
arXiv Detail & Related papers (2024-10-15T17:53:07Z) - An Adaptive Dimension Reduction Estimation Method for High-dimensional
Bayesian Optimization [6.79843988450982]
We propose a two-step optimization framework to extend BO to high-dimensional settings.
Our algorithm offers the flexibility to operate these steps either concurrently or in sequence.
Numerical experiments validate the efficacy of our method in challenging scenarios.
arXiv Detail & Related papers (2024-03-08T16:21:08Z) - Poisson Process for Bayesian Optimization [126.51200593377739]
We propose a ranking-based surrogate model based on the Poisson process and introduce an efficient BO framework, namely Poisson Process Bayesian Optimization (PoPBO).
Compared to the classic GP-BO method, our PoPBO has lower costs and better robustness to noise, which is verified by abundant experiments.
arXiv Detail & Related papers (2024-02-05T02:54:50Z) - Differentiable Multi-Target Causal Bayesian Experimental Design [43.76697029708785]
We introduce a gradient-based approach for the problem of Bayesian optimal experimental design to learn causal models in a batch setting.
Existing methods rely on greedy approximations to construct a batch of experiments.
We propose a conceptually simple end-to-end gradient-based optimization procedure to acquire a set of optimal intervention target-state pairs.
arXiv Detail & Related papers (2023-02-21T11:32:59Z) - Optimizing Sequential Experimental Design with Deep Reinforcement
Learning [7.589363597086081]
We show that the problem of optimizing policies can be reduced to solving a Markov decision process (MDP).
Our approach is also computationally efficient at deployment time and exhibits state-of-the-art performance on both continuous and discrete design spaces.
arXiv Detail & Related papers (2022-02-02T00:23:05Z) - Amortized Implicit Differentiation for Stochastic Bilevel Optimization [53.12363770169761]
We study a class of algorithms for solving bilevel optimization problems in both deterministic and stochastic settings.
We exploit a warm-start strategy to amortize the estimation of the exact gradient.
By using this framework, our analysis shows these algorithms to match the computational complexity of methods that have access to an unbiased estimate of the gradient.
arXiv Detail & Related papers (2021-11-29T15:10:09Z) - Optimal Bayesian experimental design for subsurface flow problems [77.34726150561087]
We propose a novel approach for developing a polynomial chaos expansion (PCE) surrogate model for the design utility function.
This novel technique enables the derivation of a reasonable quality response surface for the targeted objective function with a computational budget comparable to several single-point evaluations.
arXiv Detail & Related papers (2020-08-10T09:42:59Z) - Simple and Scalable Parallelized Bayesian Optimization [2.512827436728378]
We propose a simple and scalable BO method for asynchronous parallel settings.
Experiments are carried out with a benchmark function and hyperparameter optimization of multi-layer perceptrons.
arXiv Detail & Related papers (2020-06-24T10:25:27Z) - Parallelization Techniques for Verifying Neural Networks [52.917845265248744]
We introduce an algorithm that partitions the verification problem in an iterative manner and explore two partitioning strategies.
We also introduce a highly parallelizable pre-processing algorithm that uses the neuron activation phases to simplify the neural network verification problems.
arXiv Detail & Related papers (2020-04-17T20:21:47Z) - Incorporating Expert Prior Knowledge into Experimental Design via
Posterior Sampling [58.56638141701966]
Experimenters often have prior knowledge about the location of the global optimum.
However, it has been unclear how to incorporate such expert knowledge into Bayesian optimization.
An efficient Bayesian optimization approach is proposed via sampling from the posterior distribution of the global optimum (see the sketch after this list).
arXiv Detail & Related papers (2020-02-26T01:57:36Z)
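As a rough illustration of the posterior-sampling idea in the last entry, the sketch below samples functions from a Gaussian process posterior on a grid and records each sample's argmax as a draw from the posterior distribution of the global optimum's location. This is a generic Thompson-style sketch under stated assumptions, not the cited paper's algorithm; the grid resolution and sample counts are assumptions.

```python
# Generic sketch of posterior sampling over the global optimum's location.
# Not the cited paper's algorithm; grid resolution and sample counts are
# illustrative assumptions.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def sample_optimum_locations(gp, grid, n_samples=100, seed=0):
    """Draw posterior functions on a grid; each argmax is one sample from
    the posterior distribution of the global optimum's location."""
    draws = gp.sample_y(grid, n_samples=n_samples, random_state=seed)
    return grid[np.argmax(draws, axis=0)]

rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, size=(10, 1))
y = np.sin(6.0 * X).ravel()                                  # toy objective
gp = GaussianProcessRegressor().fit(X, y)
grid = np.linspace(0.0, 1.0, 200).reshape(-1, 1)
optima = sample_optimum_locations(gp, grid, n_samples=50)
# Expert prior knowledge about the optimum could be folded in by reweighting
# these samples before choosing the next experiment.
```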
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.