Expert-guided Bayesian Optimisation for Human-in-the-loop Experimental
Design of Known Systems
- URL: http://arxiv.org/abs/2312.02852v1
- Date: Tue, 5 Dec 2023 16:09:31 GMT
- Title: Expert-guided Bayesian Optimisation for Human-in-the-loop Experimental
Design of Known Systems
- Authors: Tom Savage, Ehecatl Antonio del Rio Chanona
- Abstract summary: We apply high-throughput (batch) Bayesian optimisation alongside anthropological decision theory to enable domain experts to influence the selection of optimal experiments.
Our methodology exploits the hypothesis that humans are better at making discrete choices than continuous ones and enables experts to influence critical early decisions.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Domain experts often possess valuable physical insights that are overlooked
in fully automated decision-making processes such as Bayesian optimisation. In
this article we apply high-throughput (batch) Bayesian optimisation alongside
anthropological decision theory to enable domain experts to influence the
selection of optimal experiments. Our methodology exploits the hypothesis that
humans are better at making discrete choices than continuous ones and enables
experts to influence critical early decisions. At each iteration we solve an
augmented multi-objective optimisation problem across a number of alternate
solutions, maximising both the sum of their utility function values and the
determinant of their covariance matrix, equivalent to their total variability.
By taking the solution at the knee point of the Pareto front, we return a set
of alternate solutions at each iteration that have both high utility values and
are reasonably distinct, from which the expert selects one for evaluation. We
demonstrate that even in the case of an uninformed practitioner, our algorithm
recovers the regret of standard Bayesian optimisation.
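To make the selection loop above concrete, here is a minimal sketch of one iteration: a scalarisation weight is swept to trace an approximate Pareto front between the summed utility of q alternate solutions and the log-determinant of their covariance, and the knee point (the front point furthest from the line joining the extremes) is taken as the set shown to the expert. The quadratic `utility` and the RBF kernel are illustrative stand-ins, not the paper's fitted acquisition function or surrogate.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.distance import cdist

rng = np.random.default_rng(0)
q, dim = 4, 2                      # alternate solutions per iteration, input dim

def utility(x):
    """Placeholder acquisition value (stand-in for e.g. UCB on a fitted GP)."""
    return -np.sum((x - 0.6) ** 2)

def log_det_cov(X, lengthscale=0.2):
    """Log-determinant of an RBF covariance over the set: its total variability."""
    K = np.exp(-cdist(X, X) ** 2 / (2 * lengthscale ** 2))
    return np.linalg.slogdet(K + 1e-8 * np.eye(len(X)))[1]

def scalarised(z, lam):
    """Weighted sum of the two objectives (negated for minimisation)."""
    X = z.reshape(q, dim)
    return -(lam * sum(utility(x) for x in X) + (1 - lam) * log_det_cov(X))

# Sweep the trade-off weight to trace an approximate Pareto front.
front, sets = [], []
for lam in np.linspace(0.05, 0.95, 10):
    res = minimize(scalarised, rng.uniform(0, 1, q * dim), args=(lam,),
                   bounds=[(0, 1)] * (q * dim))
    X = res.x.reshape(q, dim)
    front.append([sum(utility(x) for x in X), log_det_cov(X)])
    sets.append(X)

# Knee point: the front point furthest from the line joining its extremes.
F = np.asarray(front)
F = (F - F.min(0)) / (np.ptp(F, 0) + 1e-12)     # normalise both objectives
u = F[-1] - F[0]
d = np.abs(u[0] * (F - F[0])[:, 1] - u[1] * (F - F[0])[:, 0])
alternates = sets[int(np.argmax(d))]            # the q alternate experiments
print(alternates)                               # the expert evaluates one of these
```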
Related papers
- Learning Joint Models of Prediction and Optimization [56.04498536842065]
The Predict-Then-Optimize framework uses machine learning models to predict the unknown parameters of an optimization problem from features before solving it.
This paper proposes an alternative method, in which optimal solutions are learned directly from the observable features by joint predictive models.
arXiv Detail & Related papers (2024-09-07T19:52:14Z)
- Feature-Based Interpretable Surrogates for Optimization [0.8437187555622164]
In this work, we investigate how we can use more general optimization rules to increase interpretability.
The proposed rules do not map to a concrete solution but to a set of solutions characterized by common features.
In particular, we demonstrate the improvement in solution quality that our approach offers compared to existing interpretable surrogates for optimization.
arXiv Detail & Related papers (2024-09-03T13:12:49Z)
- Human-Algorithm Collaborative Bayesian Optimization for Engineering Systems [0.0]
We re-introduce the human into the data-driven decision-making loop by outlining an approach for collaborative Bayesian optimization.
Our methodology exploits the hypothesis that humans are more efficient at making discrete choices than continuous ones.
We demonstrate our approach across a number of applied and numerical case studies including bioprocess optimization and reactor geometry design.
arXiv Detail & Related papers (2024-04-16T23:17:04Z)
- Enhanced Bayesian Optimization via Preferential Modeling of Abstract Properties [49.351577714596544]
We propose a human-AI collaborative Bayesian framework to incorporate expert preferences about unmeasured abstract properties into surrogate modeling.
We provide an efficient strategy that can also handle any incorrect/misleading expert bias in preferential judgments.
arXiv Detail & Related papers (2024-02-27T09:23:13Z)
- Discovering Many Diverse Solutions with Bayesian Optimization [7.136022698519586]
We propose Rank-Ordered Bayesian Optimization with Trust-regions (ROBOT).
ROBOT aims to find a portfolio of high-performing solutions that are diverse according to a user-specified diversity metric.
We show that it can discover large sets of high-performing diverse solutions while requiring few additional function evaluations.
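As an illustration of the portfolio idea (not the authors' trust-region implementation), the sketch below greedily accepts the highest-scoring candidates that keep a user-specified diversity distance tau from every point already in the portfolio. The objective and the Euclidean diversity metric are placeholder assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def objective(x):                        # placeholder black-box score
    return -np.sum((x - 0.3) ** 2)

def diversity(x, y):                     # user-specified diversity metric
    return np.linalg.norm(x - y)         # (Euclidean, as an example)

def diverse_portfolio(candidates, m=3, tau=0.4):
    """Best-first greedy selection subject to a pairwise diversity constraint."""
    portfolio = []
    for x in sorted(candidates, key=objective, reverse=True):
        if all(diversity(x, y) >= tau for y in portfolio):
            portfolio.append(x)
        if len(portfolio) == m:
            break
    return portfolio

candidates = list(rng.uniform(0, 1, size=(200, 2)))
for x in diverse_portfolio(candidates):
    print(x, objective(x))
```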
arXiv Detail & Related papers (2022-10-20T01:56:38Z)
- Generalizing Bayesian Optimization with Decision-theoretic Entropies [102.82152945324381]
We consider a generalization of Shannon entropy from work in statistical decision theory.
We first show that special cases of this entropy lead to popular acquisition functions used in BO procedures.
We then show how alternative choices for the loss yield a flexible family of acquisition functions.
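For reference, the decision-theoretic generalisation referred to here is typically written as an infimum of expected loss over an action set; the notation below follows the standard statistical-decision-theory framing and is an assumption rather than the paper's exact symbols. The induced acquisition function then scores a candidate by the expected reduction in this entropy after observing it.

```latex
% Generalised entropy of a distribution p over an unknown quantity \theta:
% the expected loss of the best action a from an action set A.
% Shannon entropy is recovered when A is the set of probability
% densities and \ell(\theta, a) = -\log a(\theta).
H_{\ell, A}[p] \;=\; \inf_{a \in A} \; \mathbb{E}_{\theta \sim p}\big[\ell(\theta, a)\big]
```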
arXiv Detail & Related papers (2022-10-04T04:43:58Z)
- Dynamic Multi-objective Ensemble of Acquisition Functions in Batch Bayesian Optimization [1.1602089225841632]
The acquisition function plays a crucial role in the optimization process.
Three acquisition functions are dynamically selected from a set based on their current and historical performance.
Optimizing the resulting multi-objective problem (MOP) with an evolutionary multi-objective algorithm yields a set of non-dominated solutions.
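A minimal sketch of the selection step, under simplifying assumptions: the acquisition definitions, the "historical performance" scores, and a non-dominated filter over a fixed candidate pool all stand in for the paper's surrogate model and evolutionary multi-objective optimizer.

```python
import numpy as np

rng = np.random.default_rng(2)
mu, sigma = rng.normal(size=100), rng.uniform(0.1, 1.0, size=100)  # surrogate outputs

acquisitions = {                                  # illustrative stand-ins
    "ucb_hi": lambda m, s: m + 2.0 * s,
    "ucb_lo": lambda m, s: m + 0.5 * s,
    "mean":   lambda m, s: m,
    "sd":     lambda m, s: s,
}
history = {"ucb_hi": 0.9, "ucb_lo": 0.4, "mean": 0.2, "sd": 0.7}   # past improvement

# Dynamically select the three acquisition functions with the best record.
chosen = sorted(acquisitions, key=history.get, reverse=True)[:3]
F = np.column_stack([acquisitions[name](mu, sigma) for name in chosen])

def non_dominated(F):
    """Row indices not dominated by any other row (higher is better)."""
    return [i for i, f in enumerate(F)
            if not any(np.all(g >= f) and np.any(g > f) for g in F)]

batch = non_dominated(F)   # non-dominated candidates form the batch
print(len(batch), "candidates from acquisition functions:", chosen)
```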
arXiv Detail & Related papers (2022-06-22T14:09:18Z)
- Optimizer Amalgamation [124.33523126363728]
We are motivated to study a new problem named Optimizer Amalgamation: how can we best combine a pool of "teacher" optimizers into a single "student" optimizer that has stronger problem-specific performance?
First, we define three differentiable mechanisms to amalgamate a pool of analytical optimizers by gradient descent.
To reduce the variance of the amalgamation process, we also explore methods to stabilize it by perturbing the amalgamation target.
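One such mechanism can be sketched as a learnable softmax-weighted average of analytical teacher updates. The teacher pool and toy task below are assumptions for illustration, and the logits are held fixed here rather than trained by differentiating through the unrolled trajectory as in the paper.

```python
import numpy as np

def grad(w):                       # gradient of a toy quadratic loss
    return 2.0 * (w - 1.0)

def teachers(w, state):            # pool of analytical optimizer updates
    g = grad(w)
    state["m"] = 0.9 * state["m"] + g          # heavy-ball momentum buffer
    return np.stack([-0.1 * g,                 # SGD step
                     -0.1 * state["m"],        # momentum step
                     -0.1 * np.sign(g)])       # sign-SGD step

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

theta = np.zeros(3)                # amalgamation logits (fixed in this sketch)
w, state = np.array([4.0]), {"m": np.zeros(1)}
for _ in range(50):
    w = w + softmax(theta) @ teachers(w, state)   # student = weighted teachers
print(w)                           # approaches the minimiser at w = 1.0
```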
arXiv Detail & Related papers (2022-03-12T16:07:57Z)
- An Empirical Study of Assumptions in Bayesian Optimisation [61.19427472792523]
In this work we rigorously analyse conventional and non-conventional assumptions inherent to Bayesian optimisation.
We conclude that the majority of hyperparameter tuning tasks exhibit heteroscedasticity and non-stationarity.
We hope these findings may serve as guiding principles, both for practitioners and for further research in the field.
arXiv Detail & Related papers (2020-12-07T16:21:12Z)
- Incorporating Expert Prior Knowledge into Experimental Design via Posterior Sampling [58.56638141701966]
Experimenters often possess prior knowledge about the location of the global optimum.
However, it has been unclear how to incorporate this expert prior knowledge into Bayesian optimization.
An efficient Bayesian optimization approach is proposed via posterior sampling on the posterior distribution of the global optimum.
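A rough sketch of the underlying idea, under assumed mechanics: Thompson samples of the maximiser's location are re-weighted by an expert prior over the optimum before choosing the next experiment. This illustrates the flavour of posterior sampling with an expert prior, not the paper's exact algorithm; the observations, kernel, and Gaussian prior are placeholders.

```python
import numpy as np

rng = np.random.default_rng(4)

X = np.linspace(0, 1, 200)[:, None]          # candidate grid
Xo = np.array([[0.1], [0.5], [0.9]])         # observed inputs
yo = np.array([0.2, 0.8, 0.1])               # observed values

def k(A, B, ls=0.15):                        # RBF kernel
    return np.exp(-(A - B.T) ** 2 / (2 * ls ** 2))

# GP posterior mean and covariance on the grid.
Koo = k(Xo, Xo) + 1e-6 * np.eye(len(Xo))
Kxo = k(X, Xo)
mu = Kxo @ np.linalg.solve(Koo, yo)
cov = k(X, X) - Kxo @ np.linalg.solve(Koo, Kxo.T)
L = np.linalg.cholesky(cov + 1e-6 * np.eye(len(X)))

# Thompson samples of the maximiser's location...
samples = mu[:, None] + L @ rng.normal(size=(len(X), 500))
x_star = X[np.argmax(samples, axis=0), 0]

# ...re-weighted by an expert prior over the optimum (assumed Gaussian).
prior = np.exp(-(x_star - 0.5) ** 2 / (2 * 0.1 ** 2))
x_next = rng.choice(x_star, p=prior / prior.sum())
print(x_next)   # next experiment, biased toward the expert's belief
```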
arXiv Detail & Related papers (2020-02-26T01:57:36Z)