Discovering Many Diverse Solutions with Bayesian Optimization
- URL: http://arxiv.org/abs/2210.10953v4
- Date: Tue, 2 May 2023 21:49:15 GMT
- Title: Discovering Many Diverse Solutions with Bayesian Optimization
- Authors: Natalie Maus and Kaiwen Wu and David Eriksson and Jacob Gardner
- Abstract summary: We propose Rank-Ordered Bayesian Optimization with Trust-regions (ROBOT).
ROBOT aims to find a portfolio of high-performing solutions that are diverse according to a user-specified diversity metric.
We show that it can discover large sets of high-performing diverse solutions while requiring few additional function evaluations.
- Score: 7.136022698519586
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Bayesian optimization (BO) is a popular approach for sample-efficient
optimization of black-box objective functions. While BO has been successfully
applied to a wide range of scientific applications, traditional approaches to
single-objective BO only seek to find a single best solution. This can be a
significant limitation in situations where solutions may later turn out to be
intractable. For example, a designed molecule may turn out to violate
constraints that can only be reasonably evaluated after the optimization
process has concluded. To address this issue, we propose Rank-Ordered Bayesian
Optimization with Trust-regions (ROBOT) which aims to find a portfolio of
high-performing solutions that are diverse according to a user-specified
diversity metric. We evaluate ROBOT on several real-world applications and show
that it can discover large sets of high-performing diverse solutions while
requiring few additional function evaluations compared to finding a single best
solution.
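As a rough illustration of the portfolio goal only (a minimal sketch with hypothetical helper names, not the authors' ROBOT algorithm, which builds the portfolio during trust-region BO rather than from a fixed candidate pool), the snippet below greedily keeps the highest-scoring candidates that remain at least a threshold tau apart under a user-specified diversity metric:

```python
import numpy as np

def select_diverse_portfolio(candidates, scores, diversity_metric, tau, portfolio_size):
    """Greedy, rank-ordered selection of a diverse set of high-scoring points.

    A candidate joins the portfolio only if it is at least `tau` away from
    every previously selected point under the user-specified diversity metric.
    (Illustrative sketch of the portfolio idea, not the ROBOT algorithm itself.)
    """
    order = np.argsort(scores)[::-1]  # highest score first (maximization)
    portfolio = []
    for idx in order:
        x = candidates[idx]
        if all(diversity_metric(x, y) >= tau for y in portfolio):
            portfolio.append(x)
        if len(portfolio) == portfolio_size:
            break
    return portfolio

# Toy usage: 1-D stand-in for a black-box objective, Euclidean distance as the metric.
rng = np.random.default_rng(0)
X = rng.uniform(-5.0, 5.0, size=(200, 1))
f = -(X[:, 0] ** 2) + np.sin(3.0 * X[:, 0])
top_k = select_diverse_portfolio(
    X, f, diversity_metric=lambda a, b: float(np.linalg.norm(a - b)), tau=1.0, portfolio_size=5
)
print(np.round(np.array(top_k).ravel(), 3))
```

The point this mirrors is that diversity is enforced through an arbitrary user-specified metric on solutions rather than through the structure of the search space itself.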
Related papers
- Learning Multiple Initial Solutions to Optimization Problems [52.9380464408756]
Sequentially solving similar optimization problems under strict runtime constraints is essential for many applications.
We propose learning to predict multiple diverse initial solutions given parameters that define the problem instance.
We find significant and consistent improvement with our method across all evaluation settings and demonstrate that it efficiently scales with the number of initial solutions required.
arXiv Detail & Related papers (2024-11-04T15:17:19Z) - Learning Joint Models of Prediction and Optimization [56.04498536842065]
The Predict-Then-Optimize framework uses machine learning models to predict unknown parameters of an optimization problem from features before solving.
This paper proposes an alternative method, in which optimal solutions are learned directly from the observable features by joint predictive models.
arXiv Detail & Related papers (2024-09-07T19:52:14Z) - Training Greedy Policy for Proposal Batch Selection in Expensive Multi-Objective Combinatorial Optimization [52.80408805368928]
We introduce a novel greedy-style subset selection algorithm for batch acquisition.
Our experiments on red fluorescent proteins show that the proposed method matches baseline performance with 1.69x fewer queries.
arXiv Detail & Related papers (2024-06-21T05:57:08Z) - Few for Many: Tchebycheff Set Scalarization for Many-Objective Optimization [14.355588194787073]
Multi-objective optimization arises in many real-world applications where conflicting objectives cannot all be optimized by a single solution.
We propose a novel Tchebycheff set scalarization method to find a few representative solutions to cover a large number of objectives.
In this way, each objective can be well addressed by at least one solution in the small solution set.
arXiv Detail & Related papers (2024-05-30T03:04:57Z) - Multi-Objective Bayesian Optimization with Active Preference Learning [18.066263838953223]
We propose a Bayesian optimization (BO) approach to identifying the most preferred solution in a multi-objective optimization (MOO) problem.
To minimize the interaction cost with the decision maker (DM), we also propose an active learning strategy for the preference estimation.
arXiv Detail & Related papers (2023-11-22T15:24:36Z) - Learning Regions of Interest for Bayesian Optimization with Adaptive Level-Set Estimation [84.0621253654014]
We propose a framework, called BALLET, which adaptively filters for a high-confidence region of interest.
We show theoretically that BALLET can efficiently shrink the search space, and can exhibit a tighter regret bound than standard BO.
arXiv Detail & Related papers (2023-07-25T09:45:47Z) - A Bayesian Optimization Framework for Finding Local Optima in Expensive Multi-Modal Functions [18.570591025615453]
This paper develops a multimodal BO framework to find a set of local/global solutions for expensive-to-evaluate multimodal objective functions.
We analytically derive the joint distribution of the objective function and its first-order derivatives (the standard Gaussian-process form of such a joint distribution is sketched after this list).
We introduce variants of the well-known BO acquisition functions to the multimodal setting and demonstrate the performance of the proposed framework.
arXiv Detail & Related papers (2022-10-13T00:10:13Z) - Joint Entropy Search for Multi-objective Bayesian Optimization [0.0]
We propose a novel information-theoretic acquisition function for BO called Joint Entropy Search.
We showcase the effectiveness of this new approach on a range of synthetic and real-world problems in terms of the hypervolume and its weighted variants.
arXiv Detail & Related papers (2022-10-06T13:19:08Z) - Generalizing Bayesian Optimization with Decision-theoretic Entropies [102.82152945324381]
We consider a generalization of Shannon entropy from work in statistical decision theory.
We first show that special cases of this entropy lead to popular acquisition functions used in BO procedures.
We then show how alternative choices for the loss yield a flexible family of acquisition functions.
arXiv Detail & Related papers (2022-10-04T04:43:58Z) - Learning Proximal Operators to Discover Multiple Optima [66.98045013486794]
We present an end-to-end method to learn the proximal operator across a family of problems.
We show that for weakly convex objectives and under mild conditions, the method converges globally.
arXiv Detail & Related papers (2022-01-28T05:53:28Z) - Multi-Objective Bayesian Optimization over High-Dimensional Search Spaces [16.368143857907]
MORBO is a method for multi-objective Bayesian optimization over high-dimensional search spaces.
We show that MORBO significantly advances the state-of-the-art in sample-efficiency for several high-dimensional synthetic and real-world multi-objective problems.
arXiv Detail & Related papers (2021-09-22T18:30:07Z)
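For reference, the joint distribution mentioned in the multi-modal BO entry above has a standard Gaussian-process form whenever the kernel is twice differentiable; the identity below is the textbook result (e.g., Rasmussen & Williams), not that paper's specific derivation:

```latex
% Joint Gaussian covariances of a GP sample f and its first-order derivatives,
% valid for any twice-differentiable kernel k (textbook identity, assumed here).
\begin{align}
\operatorname{Cov}\big(f(\mathbf{x}), f(\mathbf{x}')\big)
  &= k(\mathbf{x}, \mathbf{x}'), \\
\operatorname{Cov}\Big(f(\mathbf{x}), \tfrac{\partial f(\mathbf{x}')}{\partial x'_j}\Big)
  &= \frac{\partial k(\mathbf{x}, \mathbf{x}')}{\partial x'_j}, \\
\operatorname{Cov}\Big(\tfrac{\partial f(\mathbf{x})}{\partial x_i}, \tfrac{\partial f(\mathbf{x}')}{\partial x'_j}\Big)
  &= \frac{\partial^2 k(\mathbf{x}, \mathbf{x}')}{\partial x_i \, \partial x'_j}.
\end{align}
```

Function values and gradients at any finite set of points are therefore jointly Gaussian, which is what makes it tractable to reason about candidate local optima, i.e., points where the gradient vanishes.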
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.