Benchmarking the Performance of Bayesian Optimization across Multiple
Experimental Materials Science Domains
- URL: http://arxiv.org/abs/2106.01309v1
- Date: Sun, 23 May 2021 22:04:07 GMT
- Title: Benchmarking the Performance of Bayesian Optimization across Multiple
Experimental Materials Science Domains
- Authors: Qiaohao Liang, Aldair E. Gongora, Zekun Ren, Armi Tiihonen, Zhe Liu,
Shijing Sun, James R. Deneault, Daniil Bash, Flore Mekki-Berrada, Saif A.
Khan, Kedar Hippalgaonkar, Benji Maruyama, Keith A. Brown, John Fisher III,
and Tonio Buonassisi
- Abstract summary: We evaluate the efficiency of BO as a general optimization algorithm across a broad range of experimental materials science domains.
We find that for surrogate model selection, Gaussian Process (GP) with anisotropic kernels (automatic relevance determination, ARD) and Random Forests (RF) have comparable performance and both outperform the commonly used GP without ARD.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the field of machine learning (ML) for materials optimization, active
learning algorithms, such as Bayesian Optimization (BO), have been leveraged
for guiding autonomous and high-throughput experimentation systems. However,
very few studies have evaluated the efficiency of BO as a general optimization
algorithm across a broad range of experimental materials science domains. In
this work, we evaluate the performance of BO algorithms with a collection of
surrogate model and acquisition function pairs across five diverse experimental
materials systems, namely carbon nanotube polymer blends, silver nanoparticles,
lead-halide perovskites, as well as additively manufactured polymer structures
and shapes. By defining acceleration and enhancement metrics for general
materials optimization objectives, we find that for surrogate model selection,
Gaussian Process (GP) with anisotropic kernels (automatic relevance determination,
ARD) and Random Forests (RF) have comparable performance and both outperform
the commonly used GP without ARD. We discuss the implicit distributional
assumptions of RF and GP, and the benefits of using GP with anisotropic kernels
in detail. We provide practical insights for experimentalists on surrogate
model selection of BO during materials optimization campaigns.
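The abstract's central finding concerns the choice between an isotropic GP kernel and an anisotropic (ARD) one, which learns a separate length scale per input dimension. The sketch below illustrates that distinction with scikit-learn; it is a minimal illustration under assumed tooling, not the paper's implementation, and the toy objective and all names are invented for the example.

```python
# Illustrative sketch (not from the paper): isotropic vs anisotropic (ARD)
# RBF kernels as GP surrogates, using scikit-learn. The toy objective,
# in which only the first input dimension matters strongly, is an assumption.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

def objective(X):
    # Dimension 0 drives the response; dimension 1 is nearly irrelevant.
    return np.sin(3 * X[:, 0]) + 0.1 * X[:, 1]

X_train = rng.uniform(0, 1, size=(20, 2))
y_train = objective(X_train)

# Isotropic kernel: one shared length scale for every input dimension.
gp_iso = GaussianProcessRegressor(kernel=RBF(length_scale=1.0),
                                  normalize_y=True)
# ARD kernel: an independent length scale per dimension, so the GP can
# discover that the second dimension barely influences the objective.
gp_ard = GaussianProcessRegressor(kernel=RBF(length_scale=[1.0, 1.0]),
                                  normalize_y=True)

gp_iso.fit(X_train, y_train)
gp_ard.fit(X_train, y_train)

# After fitting, the ARD length scales expose per-dimension relevance:
# a larger fitted length scale means a less relevant dimension.
print(gp_ard.kernel_.length_scale)
```

In a BO campaign this matters because materials parameters rarely affect the objective equally; the ARD length scales let the surrogate stretch along unimportant axes instead of forcing a single smoothness everywhere, which is the benefit the abstract attributes to GP with anisotropic kernels.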
Related papers
- Sample-efficient Bayesian Optimisation Using Known Invariances [56.34916328814857]
We show that vanilla and constrained BO algorithms are inefficient when optimising invariant objectives.
We derive a bound on the maximum information gain of these invariant kernels.
We use our method to design a current drive system for a nuclear fusion reactor, finding a high-performance solution.
arXiv Detail & Related papers (2024-10-22T12:51:46Z)
- Ranking over Regression for Bayesian Optimization and Molecule Selection [0.0680892187976602]
We introduce Rank-based Bayesian Optimization (RBO), which utilizes a ranking model as the surrogate.
We present a comprehensive investigation of RBO's optimization performance compared to conventional BO on various chemical datasets.
We conclude RBO is an effective alternative to regression-based BO, especially for optimizing novel chemical compounds.
arXiv Detail & Related papers (2024-10-11T22:38:14Z)
- PMBO: Enhancing Black-Box Optimization through Multivariate Polynomial Surrogates [0.0]
We introduce a surrogate-based black-box optimization method, termed Polynomial-model-based optimization (PMBO).
We compare the performance of PMBO with several optimization methods for a set of analytic test functions.
Remarkably, PMBO performs comparably with state-of-the-art evolutionary algorithms.
arXiv Detail & Related papers (2024-03-12T10:21:21Z)
- Enhanced Bayesian Optimization via Preferential Modeling of Abstract Properties [49.351577714596544]
We propose a human-AI collaborative Bayesian framework to incorporate expert preferences about unmeasured abstract properties into surrogate modeling.
We provide an efficient strategy that can also handle any incorrect/misleading expert bias in preferential judgments.
arXiv Detail & Related papers (2024-02-27T09:23:13Z)
- Poisson Process for Bayesian Optimization [126.51200593377739]
We propose a ranking-based surrogate model based on the Poisson process and introduce an efficient BO framework, namely Poisson Process Bayesian Optimization (PoPBO).
Compared to the classic GP-BO method, our PoPBO has lower costs and better robustness to noise, which is verified by abundant experiments.
arXiv Detail & Related papers (2024-02-05T02:54:50Z)
- A dynamic Bayesian optimized active recommender system for curiosity-driven Human-in-the-loop automated experiments [8.780395483188242]
We present the development of a new type of human-in-the-loop experimental workflow, via a Bayesian optimized active recommender system (BOARS).
This work shows the utility of human-augmented machine learning approaches for curiosity-driven exploration of systems across experimental domains.
arXiv Detail & Related papers (2023-04-05T14:54:34Z)
- An Empirical Evaluation of Zeroth-Order Optimization Methods on AI-driven Molecule Optimization [78.36413169647408]
We study the effectiveness of various ZO optimization methods for optimizing molecular objectives.
We show the advantages of ZO sign-based gradient descent (ZO-signGD).
We demonstrate the potential effectiveness of ZO optimization methods on widely used benchmark tasks from the Guacamol suite.
arXiv Detail & Related papers (2022-10-27T01:58:10Z)
- Surrogate modeling for Bayesian optimization beyond a single Gaussian process [62.294228304646516]
We propose a novel Bayesian surrogate model to balance exploration with exploitation of the search space.
To endow function sampling with scalability, random feature-based kernel approximation is leveraged per GP model.
To further establish convergence of the proposed EGP-TS to the global optimum, analysis is conducted based on the notion of Bayesian regret.
arXiv Detail & Related papers (2022-05-27T16:43:10Z)
- Incorporating Expert Prior Knowledge into Experimental Design via Posterior Sampling [58.56638141701966]
Experimenters can often acquire knowledge about the location of the global optimum.
It is unknown how to incorporate the expert prior knowledge about the global optimum into Bayesian optimization.
An efficient Bayesian optimization approach has been proposed via posterior sampling on the posterior distribution of the global optimum.
arXiv Detail & Related papers (2020-02-26T01:57:36Z)
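A common thread through the main paper and the related work above is that the surrogate model is a swappable component inside an otherwise fixed BO loop. The sketch below makes that structure concrete: a minimal BO loop with expected improvement, where either a GP or an RF surrogate can be plugged in. It is an illustrative sketch under assumed tooling (scikit-learn, SciPy), not any paper's implementation; the tree-spread uncertainty heuristic for RF, the toy objective, and all function names are assumptions for the example.

```python
# Illustrative sketch: a minimal BO loop (minimization, expected improvement)
# with a pluggable surrogate, mirroring the surrogate-selection theme above.
import numpy as np
from scipy.stats import norm
from sklearn.ensemble import RandomForestRegressor
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expected_improvement(mu, sigma, best):
    # EI for minimization; sigma is floored to avoid division by zero.
    sigma = np.maximum(sigma, 1e-9)
    z = (best - mu) / sigma
    return (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

def rf_mean_std(model, X):
    # A common heuristic (an assumption here): take predictive uncertainty
    # from the spread of the individual trees' predictions.
    preds = np.stack([tree.predict(X) for tree in model.estimators_])
    return preds.mean(axis=0), preds.std(axis=0)

def bo_minimize(objective, surrogate, bounds, n_init=5, n_iter=15, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    X = rng.uniform(lo, hi, size=(n_init, len(bounds)))
    y = np.array([objective(x) for x in X])
    for _ in range(n_iter):
        surrogate.fit(X, y)
        cand = rng.uniform(lo, hi, size=(256, len(bounds)))
        if isinstance(surrogate, GaussianProcessRegressor):
            mu, sigma = surrogate.predict(cand, return_std=True)
        else:
            mu, sigma = rf_mean_std(surrogate, cand)
        x_next = cand[np.argmax(expected_improvement(mu, sigma, y.min()))]
        X = np.vstack([X, x_next])
        y = np.append(y, objective(x_next))
    return X[np.argmin(y)], y.min()

# Toy objective (an assumption): minimum at (0.3, 0.3).
f = lambda x: float(np.sum((x - 0.3) ** 2))
gp = GaussianProcessRegressor(kernel=RBF(length_scale=[1.0, 1.0]),
                              normalize_y=True)
x_best, y_best = bo_minimize(f, gp, bounds=[(0, 1), (0, 1)])
```

Swapping the surrogate is a one-line change, e.g. `bo_minimize(f, RandomForestRegressor(n_estimators=20, random_state=0), bounds=[(0, 1), (0, 1)])`, which is exactly the kind of comparison across surrogate/acquisition pairs the benchmark paper reports.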
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.