Output Space Entropy Search Framework for Multi-Objective Bayesian
Optimization
- URL: http://arxiv.org/abs/2110.06980v1
- Date: Wed, 13 Oct 2021 18:43:39 GMT
- Title: Output Space Entropy Search Framework for Multi-Objective Bayesian
Optimization
- Authors: Syrine Belakaria, Aryan Deshwal, Janardhan Rao Doppa
- Abstract summary: We study black-box multi-objective optimization (MOO) using expensive function evaluations (also referred to as experiments).
We propose a general framework for solving MOO problems based on the principle of output space entropy (OSE) search.
Our OSE-search-based algorithms improve over state-of-the-art methods in terms of both computational efficiency and accuracy of MOO solutions.
- Score: 32.856318660282255
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider the problem of black-box multi-objective optimization (MOO) using
expensive function evaluations (also referred to as experiments), where the
goal is to approximate the true Pareto set of solutions by minimizing the total
resource cost of experiments. For example, in hardware design optimization, we
need to find the designs that trade-off performance, energy, and area overhead
using expensive computational simulations. The key challenge is to select the
sequence of experiments to uncover high-quality solutions using minimal
resources. In this paper, we propose a general framework for solving MOO
problems based on the principle of output space entropy (OSE) search: select
the experiment that maximizes the information gained per unit resource cost
about the true Pareto front. We appropriately instantiate the principle of OSE
search to derive efficient algorithms for the following four MOO problem
settings: 1) The most basic single-fidelity setting, where experiments are
expensive and accurate; 2) Handling black-box constraints, which cannot be
evaluated without performing experiments; 3) The discrete multi-fidelity
setting, where experiments can vary in the amount of resources consumed and
their evaluation accuracy; and 4) The continuous-fidelity setting, where
continuous function approximations result in a huge space of experiments.
Experiments on diverse synthetic and real-world benchmarks show that our OSE
search-based algorithms improve over state-of-the-art methods in terms of both
computational efficiency and accuracy of MOO solutions.
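The OSE selection rule above — pick the experiment that maximizes information gained about the Pareto front per unit resource cost — can be illustrated with a minimal sketch. This is not the paper's actual acquisition function: here the output-space entropy reduction is crudely approximated by the summed Gaussian predictive entropies of per-objective GP surrogates (implemented from scratch in NumPy with a fixed RBF kernel), and all function names are illustrative assumptions.

```python
import numpy as np

def rbf(A, B, ell=1.0):
    """Squared-exponential kernel between two sets of row vectors."""
    d2 = np.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=-1)
    return np.exp(-0.5 * d2 / ell**2)

def posterior_var(X_train, x, noise=1e-8):
    """GP posterior variance at a single point x (unit prior variance)."""
    K = rbf(X_train, X_train) + noise * np.eye(len(X_train))
    k_s = rbf(x[None, :], X_train)
    return 1.0 - (k_s @ np.linalg.solve(K, k_s.T))[0, 0]

def ose_score(X_train_per_obj, x, cost):
    """Crude information-gain-per-cost proxy: sum of per-objective
    Gaussian differential entropies, 0.5 * log(2*pi*e*sigma^2),
    divided by the resource cost of running the experiment."""
    gain = 0.0
    for X_train in X_train_per_obj:
        var = max(posterior_var(X_train, x), 1e-12)
        gain += 0.5 * np.log(2 * np.pi * np.e * var)
    return gain / cost

def select_experiment(X_train_per_obj, candidates, costs):
    """Return the index of the candidate maximizing gain per cost."""
    scores = [ose_score(X_train_per_obj, x, c)
              for x, c in zip(candidates, costs)]
    return int(np.argmax(scores))

# Example: two objectives, both evaluated at the same two inputs so far.
X = np.array([[0.0], [1.0]])
observed = [X, X]
candidates = [np.array([0.5]), np.array([5.0])]
# With equal costs, the far-away, high-uncertainty candidate is chosen.
best = select_experiment(observed, candidates, costs=[1.0, 1.0])
```

Note that this proxy ignores the Pareto structure entirely; the paper's contribution is precisely an entropy term defined over the output (Pareto front) space rather than raw predictive variance, and a principled treatment of cost in the constrained and multi-fidelity settings.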
Related papers
- Synthetic Principal Component Design: Fast Covariate Balancing with
Synthetic Controls [16.449993388646277]
We develop a globally convergent and practically efficient optimization algorithm.
We establish the first global optimality guarantee for experiment design when pre-treatment data is sampled from certain data-generating processes.
arXiv Detail & Related papers (2022-11-28T11:45:54Z) - Fast Bayesian Optimization of Needle-in-a-Haystack Problems using
Zooming Memory-Based Initialization [73.96101108943986]
A Needle-in-a-Haystack problem arises when there is an extreme imbalance of optimum conditions relative to the size of the dataset.
We present a Zooming Memory-Based Initialization algorithm that builds on conventional Bayesian optimization principles.
arXiv Detail & Related papers (2022-08-26T23:57:41Z) - Uncertainty-Aware Search Framework for Multi-Objective Bayesian
Optimization [40.40632890861706]
We consider the problem of multi-objective (MO) blackbox optimization using expensive function evaluations.
We propose a novel uncertainty-aware search framework referred to as USeMO to efficiently select the sequence of inputs for evaluation.
arXiv Detail & Related papers (2022-04-12T16:50:48Z) - Learning Proximal Operators to Discover Multiple Optima [66.98045013486794]
We present an end-to-end method to learn the proximal operator across a family of problems.
We show that for weakly convex objectives and under mild conditions, the method converges globally.
arXiv Detail & Related papers (2022-01-28T05:53:28Z) - Reinforcement Learning based Sequential Batch-sampling for Bayesian
Optimal Experimental Design [1.6249267147413522]
Sequential design of experiments (SDOE) is a popular suite of methods that has yielded promising results in recent years.
In this work, we aim to extend the SDOE strategy, to query the experiment or computer code at a batch of inputs.
A unique capability of the proposed methodology is its ability to be applied to multiple tasks, for example optimization of a function, once it is trained.
arXiv Detail & Related papers (2021-12-21T02:25:23Z) - Constrained multi-objective optimization of process design parameters in
settings with scarce data: an application to adhesive bonding [48.7576911714538]
Finding the optimal process parameters for an adhesive bonding process is challenging.
Traditional evolutionary approaches (such as genetic algorithms) are then ill-suited to solve the problem.
In this research, we successfully applied specific machine learning techniques to emulate the objective and constraint functions.
arXiv Detail & Related papers (2021-12-16T10:14:39Z) - USCO-Solver: Solving Undetermined Stochastic Combinatorial Optimization
Problems [9.015720257837575]
We consider the regression between spaces, aiming to infer high-quality optimization solutions from samples of input-solution pairs.
For learning foundations, we present learning-error analysis under the PAC-Bayesian framework.
We obtain highly encouraging experimental results for several classic problems on both synthetic and real-world datasets.
arXiv Detail & Related papers (2021-07-15T17:59:08Z) - Multi-Fidelity Multi-Objective Bayesian Optimization: An Output Space
Entropy Search Approach [44.25245545568633]
We study the novel problem of blackbox optimization of multiple objectives via multi-fidelity function evaluations.
Our experiments on several synthetic and real-world benchmark problems show that MF-OSEMO, with both approximations, significantly improves over the state-of-the-art single-fidelity algorithms.
arXiv Detail & Related papers (2020-11-02T06:59:04Z) - Optimal Bayesian experimental design for subsurface flow problems [77.34726150561087]
We propose a novel approach for the development of a polynomial chaos expansion (PCE) surrogate model for the design utility function.
This novel technique enables the derivation of a reasonable quality response surface for the targeted objective function with a computational budget comparable to several single-point evaluations.
arXiv Detail & Related papers (2020-08-10T09:42:59Z) - Incorporating Expert Prior Knowledge into Experimental Design via
Posterior Sampling [58.56638141701966]
Experimenters can often acquire knowledge about the location of the global optimum.
It is unknown how to incorporate the expert prior knowledge about the global optimum into Bayesian optimization.
An efficient Bayesian optimization approach has been proposed via posterior sampling on the posterior distribution of the global optimum.
arXiv Detail & Related papers (2020-02-26T01:57:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.