Domain Knowledge Injection in Bayesian Search for New Materials
- URL: http://arxiv.org/abs/2311.15162v1
- Date: Sun, 26 Nov 2023 01:55:55 GMT
- Title: Domain Knowledge Injection in Bayesian Search for New Materials
- Authors: Zikai Xie, Xenophon Evangelopoulos, Joseph Thacker, Andrew Cooper
- Abstract summary: We propose DKIBO, a Bayesian optimization (BO) algorithm that accommodates domain knowledge to tune exploration in the search space.
We empirically demonstrate the practical utility of the proposed method by successfully injecting domain knowledge in a materials design task.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: In this paper we propose DKIBO, a Bayesian optimization (BO) algorithm that
accommodates domain knowledge to tune exploration in the search space. Bayesian
optimization has recently emerged as a sample-efficient optimizer for many
intractable scientific problems. While various existing BO frameworks allow the
input of prior beliefs to accelerate the search by narrowing down the space,
incorporating such knowledge is not always straightforward and can often
introduce bias and lead to poor performance. Here we propose a simple approach
to incorporate structural knowledge in the acquisition function by utilizing an
additional deterministic surrogate model to enrich the approximation power of
the Gaussian process. This is suitably chosen according to structural
information of the problem at hand and acts as a corrective term towards
better-informed sampling. We empirically demonstrate the practical utility of
the proposed method by successfully injecting domain knowledge in a materials
design task. We further validate the method's performance across different
experimental settings and through ablation analyses.
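
A minimal sketch of how such a knowledge-injected acquisition might look (not the authors' implementation): the GP's upper confidence bound is augmented with a deterministic, domain-informed surrogate term. The function `suggest_next`, the UCB base acquisition, and the mixing weight `alpha` are illustrative assumptions rather than details taken from the paper.

```python
# Minimal sketch of a knowledge-injected acquisition (not the authors' code).
# Assumptions: maximization, a user-supplied deterministic surrogate
# `domain_model` encoding structural/domain knowledge, and a UCB base
# acquisition; the mixing weight `alpha` is a hypothetical hyperparameter.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern


def suggest_next(X_obs, y_obs, candidates, domain_model, kappa=2.0, alpha=0.5):
    """Pick the next query point from `candidates`.

    The score adds a deterministic, knowledge-based term to the GP's upper
    confidence bound, steering sampling toward regions the domain model
    considers promising.
    """
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(X_obs, y_obs)
    mu, std = gp.predict(candidates, return_std=True)

    ucb = mu + kappa * std                                  # statistical surrogate (GP)
    prior = domain_model(candidates)                        # deterministic surrogate
    prior = (prior - prior.mean()) / (prior.std() + 1e-9)   # put on a comparable scale

    score = ucb + alpha * prior                             # knowledge-corrected acquisition
    return candidates[np.argmax(score)]


# Toy usage: a quadratic "domain model" that favours x near 0.7.
rng = np.random.default_rng(0)
X_obs = rng.uniform(0, 1, size=(5, 1))
y_obs = np.sin(6 * X_obs[:, 0])
candidates = np.linspace(0, 1, 200).reshape(-1, 1)
x_next = suggest_next(X_obs, y_obs, candidates,
                      domain_model=lambda X: -(X[:, 0] - 0.7) ** 2)
```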
Related papers
- Human-Algorithm Collaborative Bayesian Optimization for Engineering Systems [0.0]
We reintroduce the human into the data-driven decision-making loop by outlining an approach for collaborative Bayesian optimization.
Our methodology exploits the hypothesis that humans are more efficient at making discrete choices than continuous ones.
We demonstrate our approach across a number of applied and numerical case studies including bioprocess optimization and reactor geometry design.
arXiv Detail & Related papers (2024-04-16T23:17:04Z) - A Framework for Guided Motion Planning [1.179253400575852]
We formalize the notion of guided search by defining the concept of a guiding space.
This new language encapsulates many seemingly distinct prior methods under the same framework.
We suggest an information theoretic method to evaluate guidance, which experimentally matches intuition when tested on known algorithms.
arXiv Detail & Related papers (2024-04-04T00:58:19Z) - Enhanced Bayesian Optimization via Preferential Modeling of Abstract
Properties [49.351577714596544]
We propose a human-AI collaborative Bayesian framework to incorporate expert preferences about unmeasured abstract properties into surrogate modeling.
We provide an efficient strategy that can also handle any incorrect/misleading expert bias in preferential judgments.
arXiv Detail & Related papers (2024-02-27T09:23:13Z) - Physics-Aware Multifidelity Bayesian Optimization: a Generalized Formulation [0.0]
Multifidelity Bayesian optimization (MFBO) methods allow costly high-fidelity responses to be included for only a sub-selection of queries.
State-of-the-art methods rely on a purely data-driven search and do not include explicit information about the physical context.
This paper acknowledges that prior knowledge about the physical domains of engineering problems can be leveraged to accelerate these data-driven searches.
arXiv Detail & Related papers (2023-12-10T09:11:53Z) - STEERING: Stein Information Directed Exploration for Model-Based
Reinforcement Learning [111.75423966239092]
We propose an exploration incentive in terms of the integral probability metric (IPM) between a current estimate of the transition model and the unknown optimal one.
Based on the kernelized Stein discrepancy (KSD), we develop a novel algorithm, STEERING: STEin information diREcted exploration for model-based Reinforcement learnING.
arXiv Detail & Related papers (2023-01-28T00:49:28Z) - High-Dimensional Bayesian Optimisation with Variational Autoencoders and
Deep Metric Learning [119.91679702854499]
We introduce a method based on deep metric learning to perform Bayesian optimisation over high-dimensional, structured input spaces.
We achieve such an inductive bias using just 1% of the available labelled data.
As an empirical contribution, we present state-of-the-art results on real-world high-dimensional black-box optimisation problems.
arXiv Detail & Related papers (2021-06-07T13:35:47Z) - Good practices for Bayesian Optimization of high dimensional structured
spaces [15.488642552157131]
We study the effect of different search-space design choices for performing Bayesian Optimization on high-dimensional structured datasets.
We evaluate new methods to automatically define the optimization bounds in the latent space.
We provide recommendations for the practitioners.
arXiv Detail & Related papers (2020-12-31T07:00:39Z) - Sub-linear Regret Bounds for Bayesian Optimisation in Unknown Search
Spaces [63.22864716473051]
We propose a novel BO algorithm which expands (and shifts) the search space over iterations (see the sketch after this list).
We show theoretically that for both our algorithms, the cumulative regret grows at sub-linear rates.
arXiv Detail & Related papers (2020-09-05T14:24:40Z) - DrNAS: Dirichlet Neural Architecture Search [88.56953713817545]
We treat the continuously relaxed architecture mixing weights as random variables, modeled by a Dirichlet distribution.
With recently developed pathwise derivatives, the Dirichlet parameters can be easily optimized with gradient-based optimizers.
To alleviate the large memory consumption of differentiable NAS, we propose a simple yet effective progressive learning scheme.
arXiv Detail & Related papers (2020-06-18T08:23:02Z) - Incorporating Expert Prior Knowledge into Experimental Design via
Posterior Sampling [58.56638141701966]
Experimenters can often acquire prior knowledge about the location of the global optimum.
However, it is not obvious how to incorporate such expert prior knowledge about the global optimum into standard Bayesian optimization.
An efficient Bayesian optimization approach is proposed via posterior sampling from the posterior distribution of the global optimum.
arXiv Detail & Related papers (2020-02-26T01:57:36Z)
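
The search-space expansion idea from "Sub-linear Regret Bounds for Bayesian Optimisation in Unknown Search Spaces" above can be illustrated with a short sketch. This is not the paper's algorithm: the growth factor `gamma` and the recentring-on-the-incumbent rule are illustrative assumptions only.

```python
# Minimal sketch of an expanding (and shifting) search space for BO when the
# true bounds are unknown: start from a small box and grow/recentre it each
# iteration so the optimum is eventually contained. `gamma` and the
# recentring rule are assumptions, not the paper's prescription.
import numpy as np


def expand_bounds(lower, upper, best_x, gamma=1.3):
    """Grow the box by `gamma` per dimension and recentre it on the incumbent."""
    lower, upper, best_x = map(np.asarray, (lower, upper, best_x))
    half_width = 0.5 * gamma * (upper - lower)
    return best_x - half_width, best_x + half_width


# Usage: the box grows around the best point found so far.
lo, hi = np.array([0.0, 0.0]), np.array([1.0, 1.0])
best = np.array([0.9, 0.2])
for _ in range(3):
    lo, hi = expand_bounds(lo, hi, best)
    # ...run one round of standard BO restricted to [lo, hi] here...
```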