Combining Multi-Fidelity Modelling and Asynchronous Batch Bayesian
Optimization
- URL: http://arxiv.org/abs/2211.06149v1
- Date: Fri, 11 Nov 2022 12:02:40 GMT
- Title: Combining Multi-Fidelity Modelling and Asynchronous Batch Bayesian
Optimization
- Authors: Jose Pablo Folch, Robert M Lee, Behrang Shafei, David Walz, Calvin
Tsay, Mark van der Wilk, Ruth Misener
- Abstract summary: This paper proposes an algorithm combining multi-fidelity and asynchronous batch methods.
We empirically study the algorithm behavior, and show it can outperform single-fidelity batch methods and multi-fidelity sequential methods.
As an application, we consider designing electrode materials for optimal performance in pouch cells using experiments with coin cells to approximate battery performance.
- Score: 10.29946890434873
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Bayesian Optimization is a useful tool for experiment design. Unfortunately,
the classical, sequential setting of Bayesian Optimization does not translate
well into laboratory experiments, for instance battery design, where
measurements may come from different sources and their evaluations may require
significant waiting times. Multi-fidelity Bayesian Optimization addresses the
setting with measurements from different sources. Asynchronous batch Bayesian
Optimization provides a framework to select new experiments before the results
of the prior experiments are revealed. This paper proposes an algorithm
combining multi-fidelity and asynchronous batch methods. We empirically study
the algorithm behavior, and show it can outperform single-fidelity batch
methods and multi-fidelity sequential methods. As an application, we consider
designing electrode materials for optimal performance in pouch cells using
experiments with coin cells to approximate battery performance.
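The asynchronous-batch idea in the abstract can be pictured with a toy sketch. The snippet below is not the paper's algorithm; it is a minimal, hypothetical illustration of one common asynchronous-batch device, the "constant liar": experiments still in flight are imputed with a placeholder value so the surrogate's uncertainty shrinks near pending queries, and the next point is chosen by an upper-confidence-bound score. The Gaussian-process surrogate, kernel settings, and function names are all assumptions made for the sketch.

```python
import numpy as np

# Toy 1-D Gaussian-process surrogate with an RBF kernel.
def rbf(a, b, ls=0.2):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

def gp_posterior(X, y, Xs, noise=1e-6):
    """Posterior mean and variance of a zero-mean GP at test points Xs."""
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)
    sol = np.linalg.solve(K, Ks)
    mu = sol.T @ y
    var = np.clip(1.0 - np.sum(Ks * sol, axis=0), 1e-12, None)
    return mu, var

def async_batch(X, y, candidates, pending, beta=2.0):
    """Pick the next query while `pending` experiments are still running.

    Constant liar: pretend each pending point already returned the current
    best observed value, refit the surrogate, and maximize a UCB score.
    """
    lie = y.max()
    X_aug = np.concatenate([X, pending])
    y_aug = np.concatenate([y, np.full(len(pending), lie)])
    mu, var = gp_posterior(X_aug, y_aug, candidates)
    ucb = mu + beta * np.sqrt(var)
    return candidates[int(np.argmax(ucb))]
```

A multi-fidelity extension would additionally weight such acquisition scores by the cost of querying each information source (e.g. coin cell vs. pouch cell), so cheap low-fidelity experiments are favored when they are informative enough.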
Related papers
- Batched Bayesian optimization with correlated candidate uncertainties [44.38372821900645]
We propose an acquisition strategy for discrete optimization motivated by pure exploitation, qPO (multipoint probability of optimality).
We apply our method to the model-guided exploration of large chemical libraries and provide empirical evidence that it performs better than or on par with state-of-the-art methods in batched Bayesian optimization.
arXiv Detail & Related papers (2024-10-08T20:13:12Z)
- Pessimistic asynchronous sampling in high-cost Bayesian optimization [0.0]
Asynchronous Bayesian optimization is a technique that allows for parallel operation of experimental systems and disjointed systems.
A pessimistic prediction asynchronous policy reached optimum experimental conditions in significantly fewer experiments than equivalent serial policies.
Even without accounting for the faster sampling rate, the pessimistic algorithm presented in this work can enable more efficient algorithm-driven optimization of high-cost experimental spaces.
arXiv Detail & Related papers (2024-06-21T16:35:27Z)
- Optimal Initialization of Batch Bayesian Optimization [0.0]
We propose a batch-design acquisition function that designs a batch by optimization rather than random sampling.
The proposed MTV (Minimal Terminal Variance) acquisition minimizes the variance of the post-evaluation estimates of quality, integrated over the entire space of settings.
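A rough illustration of variance-minimizing batch design (a hypothetical sketch, not the paper's implementation; the kernel, grid, and function names are assumptions): since GP posterior variance does not depend on the observed values, a batch can be grown greedily so that the posterior variance averaged over a grid of settings is minimized.

```python
import numpy as np

def rbf(a, b, ls=0.25):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

def integrated_variance(X, grid, noise=1e-6):
    """Average GP posterior variance over `grid` after observing at X.

    Note the variance depends only on the input locations, not on the
    (yet unknown) measured values, so batches can be designed up front.
    """
    if len(X) == 0:
        return 1.0
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, grid)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks), axis=0)
    return float(np.mean(np.clip(var, 0.0, None)))

def greedy_batch(candidates, grid, batch_size):
    """Greedily add the candidate that most reduces integrated variance."""
    batch = []
    for _ in range(batch_size):
        scores = [integrated_variance(np.array(batch + [c]), grid)
                  for c in candidates]
        batch.append(candidates[int(np.argmin(scores))])
    return np.array(batch)
```

Greedy selection is a simple stand-in for the joint batch optimization described in the abstract; it naturally spreads points out, since re-sampling an already chosen location barely reduces the integrated variance.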
arXiv Detail & Related papers (2024-04-27T20:16:58Z)
- Enhanced Bayesian Optimization via Preferential Modeling of Abstract
Properties [49.351577714596544]
We propose a human-AI collaborative Bayesian framework to incorporate expert preferences about unmeasured abstract properties into surrogate modeling.
We provide an efficient strategy that can also handle any incorrect/misleading expert bias in preferential judgments.
arXiv Detail & Related papers (2024-02-27T09:23:13Z)
- Surrogate modeling for Bayesian optimization beyond a single Gaussian
process [62.294228304646516]
We propose a novel Bayesian surrogate model to balance exploration with exploitation of the search space.
To endow function sampling with scalability, random feature-based kernel approximation is leveraged per GP model.
To further establish convergence of the proposed EGP-TS (ensemble-GP Thompson sampling) approach to the global optimum, analysis is conducted based on the notion of Bayesian regret.
arXiv Detail & Related papers (2022-05-27T16:43:10Z)
- Towards Learning Universal Hyperparameter Optimizers with Transformers [57.35920571605559]
We introduce the OptFormer, the first text-based Transformer HPO framework that provides a universal end-to-end interface for jointly learning policy and function prediction.
Our experiments demonstrate that the OptFormer can imitate at least 7 different HPO algorithms, which can be further improved via its function uncertainty estimates.
arXiv Detail & Related papers (2022-05-26T12:51:32Z)
- Optimizer Amalgamation [124.33523126363728]
We are motivated to study a new problem named Optimizer Amalgamation: how can we best combine a pool of "teacher" optimizers into a single "student" optimizer with stronger problem-specific performance?
First, we define three differentiable mechanisms to amalgamate a pool of analytical optimizers by gradient descent.
In order to reduce variance of the process, we also explore methods to stabilize the process by perturbing the target.
arXiv Detail & Related papers (2022-03-12T16:07:57Z)
- SnAKe: Bayesian Optimization with Pathwise Exploration [9.807656882149319]
We consider a novel setting where the expense of evaluating the function can increase significantly when making large input changes between iterations.
This paper investigates the problem and introduces 'Sequential Bayesian Optimization via Adaptive Connecting Samples' (SnAKe)
It provides a solution by considering future queries and preemptively building optimization paths that minimize input costs.
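A crude way to picture the input-cost idea (a hypothetical sketch, not the authors' method, which plans paths over Thompson samples with a travelling-salesman-style heuristic) is to reorder a planned set of queries with a greedy nearest-neighbour pass so that consecutive inputs stay close:

```python
import numpy as np

def order_queries(start, queries):
    """Greedy nearest-neighbour ordering of 1-D query points.

    Repeatedly visit the remaining query closest to the current input,
    approximating a minimum-movement path between experiments.
    """
    remaining = list(queries)
    path, current = [], start
    while remaining:
        dists = [abs(q - current) for q in remaining]
        nxt = remaining.pop(int(np.argmin(dists)))
        path.append(nxt)
        current = nxt
    return path

def path_cost(start, path):
    """Total input movement incurred by visiting `path` in order."""
    pts = [start] + list(path)
    return sum(abs(b - a) for a, b in zip(pts, pts[1:]))
```

For queries [0.9, 0.1, 0.5] starting at 0.0, the greedy ordering [0.1, 0.5, 0.9] costs 0.9 units of input movement, versus 2.1 for the original order.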
arXiv Detail & Related papers (2022-01-31T19:42:56Z)
- Constrained multi-objective optimization of process design parameters in
settings with scarce data: an application to adhesive bonding [48.7576911714538]
Finding the optimal process parameters for an adhesive bonding process is challenging.
Traditional evolutionary approaches (such as genetic algorithms) are ill-suited to solving the problem.
In this research, we successfully applied specific machine learning techniques to emulate the objective and constraint functions.
arXiv Detail & Related papers (2021-12-16T10:14:39Z)
- Incorporating Expert Prior Knowledge into Experimental Design via
Posterior Sampling [58.56638141701966]
Experimenters often have prior knowledge about the location of the global optimum.
However, it has been unclear how to incorporate this expert prior knowledge into Bayesian optimization.
An efficient Bayesian optimization approach is proposed via posterior sampling on the posterior distribution of the global optimum.
arXiv Detail & Related papers (2020-02-26T01:57:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.