Many Objective Bayesian Optimization
- URL: http://arxiv.org/abs/2107.04126v1
- Date: Thu, 8 Jul 2021 21:57:07 GMT
- Title: Many Objective Bayesian Optimization
- Authors: Lucia Asencio Martín, Eduardo C. Garrido-Merchán
- Abstract summary: Multi-objective Bayesian optimization (MOBO) is a set of methods that has been successfully applied for the simultaneous optimization of black-boxes.
In particular, MOBO methods have problems when the number of objectives in a multi-objective optimization problem is 3 or more, which is the many objective setting.
We show empirical evidence of the effectiveness of the proposed metric and algorithm in a set of toy, synthetic, benchmark and real experiments.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Some real problems require the evaluation of expensive and noisy objective
functions. Moreover, the analytical expression of these objective functions may
be unknown. These functions are known as black-boxes; examples include
estimating the generalization error of a machine learning algorithm and
computing its prediction time in terms of its hyper-parameters.
Multi-objective Bayesian
optimization (MOBO) is a set of methods that has been successfully applied for
the simultaneous optimization of black-boxes. Concretely, BO methods rely on a
probabilistic model of the objective functions, typically a Gaussian process.
This model generates a predictive distribution of the objectives. However, MOBO
methods have problems when the number of objectives in a multi-objective
optimization problem is 3 or more, which is the many objective setting. In
particular, the BO process is more costly as more objectives are considered,
computing the quality of the solution via the hyper-volume is also more costly
and, most importantly, we have to evaluate every objective function, wasting
expensive computational, economic or other resources. However, as more
objectives are involved in the optimization problem, it is highly probable that
some of them are redundant and do not add information about the problem
solution. We propose a measure of how similar two GP predictive distributions
are, and a many objective Bayesian optimization algorithm that
uses this metric to determine whether two objectives are redundant. The
algorithm stops evaluating one of them when such a similarity is found, saving
resources and not hurting the performance of the multi-objective BO algorithm.
We show empirical evidence of the effectiveness of the metric and the
algorithm in a set of toy, synthetic, benchmark and real experiments.
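The abstract does not spell out the similarity measure, so the paper's exact metric is not reproduced here. As a minimal sketch of the idea, the snippet below (the divergence choice, candidate grid, and threshold are all illustrative assumptions, not the authors' design) compares two GP predictive distributions with an averaged symmetrised KL divergence over candidate inputs and flags one objective as redundant when the divergence falls below the threshold:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def kl_gaussians(mu_p, var_p, mu_q, var_q):
    """KL(p || q) between univariate Gaussians, evaluated element-wise."""
    return 0.5 * (np.log(var_q / var_p) + (var_p + (mu_p - mu_q) ** 2) / var_q - 1.0)

def gp_similarity(gp_a, gp_b, X_cand):
    """Average symmetrised KL between two GPs' predictive distributions
    over candidate inputs; lower values mean more similar objectives."""
    mu_a, std_a = gp_a.predict(X_cand, return_std=True)
    mu_b, std_b = gp_b.predict(X_cand, return_std=True)
    var_a, var_b = std_a ** 2 + 1e-12, std_b ** 2 + 1e-12
    return float(np.mean(0.5 * (kl_gaussians(mu_a, var_a, mu_b, var_b)
                                + kl_gaussians(mu_b, var_b, mu_a, var_a))))

rng = np.random.default_rng(0)
X = rng.uniform(size=(20, 2))
y1 = np.sin(3.0 * X[:, 0]) + 0.05 * rng.normal(size=20)  # objective 1
y2 = np.sin(3.0 * X[:, 0]) + 0.05 * rng.normal(size=20)  # near-duplicate objective 2
gp1 = GaussianProcessRegressor(kernel=RBF(), normalize_y=True).fit(X, y1)
gp2 = GaussianProcessRegressor(kernel=RBF(), normalize_y=True).fit(X, y2)

X_cand = rng.uniform(size=(200, 2))
THRESHOLD = 0.05  # hypothetical redundancy threshold
if gp_similarity(gp1, gp2, X_cand) < THRESHOLD:
    print("Objectives look redundant: stop evaluating one of them.")
```

Whether KL matches the paper's metric is an open assumption here; the point is the control flow, namely dropping an objective once its GP model becomes indistinguishable from another's.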
Related papers
- Equation Discovery with Bayesian Spike-and-Slab Priors and Efficient Kernels [57.46832672991433]
We propose a novel equation discovery method based on Kernel learning and BAyesian Spike-and-Slab priors (KBASS)
We use kernel regression to estimate the target function, which is flexible, expressive, and more robust to data sparsity and noise.
We develop an expectation-propagation expectation-maximization algorithm for efficient posterior inference and function estimation.
arXiv Detail & Related papers (2023-10-09T03:55:09Z)
- Parallel Multi-Objective Hyperparameter Optimization with Uniform Normalization and Bounded Objectives [5.94867851915494]
We propose a multi-objective Bayesian optimization (MoBO) algorithm that addresses these problems.
We increase the efficiency of our approach by imposing constraints on the objective to avoid exploring unnecessary configurations.
Finally, we leverage an approach to parallelize MoBO, which results in a 5x speed-up when using 16x more workers.
arXiv Detail & Related papers (2023-09-26T13:48:04Z)
- Polynomial-Model-Based Optimization for Blackbox Objectives [0.0]
Black-box optimization seeks to find optimal parameters for systems such that a pre-defined objective function is minimized.
PMBO is a novel blackbox optimizer that finds the minimum by fitting a polynomial surrogate to the objective function.
PMBO is benchmarked against other state-of-the-art algorithms for a given set of artificial, analytical functions.
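The summary above gives only the high-level idea; the one-dimensional sketch below (the toy objective, polynomial degree, and bounds are assumptions, not PMBO's actual design) illustrates the core of polynomial-model-based optimization: fit a polynomial surrogate to the sampled evaluations and minimise the cheap surrogate instead of the expensive black box:

```python
import numpy as np
from scipy.optimize import minimize

def blackbox(x):
    """Stand-in for an expensive black-box objective (illustrative only)."""
    return np.sin(3.0 * x) + 0.5 * x ** 2

# Sample the black box and fit a degree-4 polynomial surrogate.
x_obs = np.linspace(-2.0, 2.0, 12)
y_obs = blackbox(x_obs)
coeffs = np.polynomial.polynomial.polyfit(x_obs, y_obs, deg=4)

def surrogate(x):
    return np.polynomial.polynomial.polyval(x, coeffs)

# Minimise the surrogate; in a full loop the minimiser would be evaluated
# on the black box and the polynomial refitted with the new point.
res = minimize(lambda x: surrogate(x[0]), x0=np.array([0.0]),
               bounds=[(-2.0, 2.0)])
print("surrogate minimiser:", res.x[0], "black-box value:", blackbox(res.x[0]))
```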
arXiv Detail & Related papers (2023-09-01T14:11:03Z)
- Generalizing Bayesian Optimization with Decision-theoretic Entropies [102.82152945324381]
We consider a generalization of Shannon entropy from work in statistical decision theory.
We first show that special cases of this entropy lead to popular acquisition functions used in BO procedures.
We then show how alternative choices for the loss yield a flexible family of acquisition functions.
arXiv Detail & Related papers (2022-10-04T04:43:58Z)
- Batch Bayesian Optimization via Particle Gradient Flows [0.5735035463793008]
We show how to find global optima of objective functions which are only available as a black-box or are expensive to evaluate.
We construct a new acquisition function based on multipoint expected improvement, which is defined over the space of probability measures.
arXiv Detail & Related papers (2022-09-10T18:10:15Z)
- Bayesian Optimization for Macro Placement [48.55456716632735]
We develop a novel approach to macro placement using Bayesian optimization (BO) over sequence pairs.
BO is a machine learning technique that uses a probabilistic surrogate model and an acquisition function.
We demonstrate our algorithm on the fixed-outline macro placement problem with the half-perimeter wire length objective.
arXiv Detail & Related papers (2022-07-18T06:17:06Z)
- Pre-training helps Bayesian optimization too [49.28382118032923]
We seek an alternative practice for setting functional priors.
In particular, we consider the scenario where we have data from similar functions that allow us to pre-train a tighter distribution a priori.
Our results show that our method is able to locate good hyperparameters at least 3 times more efficiently than the best competing methods.
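The paper pre-trains a tighter functional prior from data on related functions; as a rough sketch of one way to approximate that idea (not the authors' method), the snippet below fits GP kernel hyperparameters by maximum likelihood on a similar, data-rich task and then freezes them as the prior for the data-poor target task:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(1)

# "Pre-training": learn kernel hyperparameters on a related, data-rich task.
X_rel = rng.uniform(size=(50, 1))
y_rel = np.sin(6.0 * X_rel[:, 0]) + 0.05 * rng.normal(size=50)
pretrained = GaussianProcessRegressor(kernel=ConstantKernel() * RBF()).fit(X_rel, y_rel)

# Target task: few observations, reuse the learned kernel as a fixed prior
# (optimizer=None keeps the pre-trained hyperparameters frozen).
X_tgt = rng.uniform(size=(5, 1))
y_tgt = np.sin(6.0 * X_tgt[:, 0] + 0.1) + 0.05 * rng.normal(size=5)
gp = GaussianProcessRegressor(kernel=pretrained.kernel_, optimizer=None).fit(X_tgt, y_tgt)
mu, std = gp.predict(rng.uniform(size=(10, 1)), return_std=True)
```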
arXiv Detail & Related papers (2022-07-07T04:42:54Z)
- MBORE: Multi-objective Bayesian Optimisation by Density-Ratio Estimation [0.01652719262940403]
Optimisation problems often have multiple conflicting objectives that can be computationally and/or financially expensive.
Mono-surrogate Bayesian optimisation (BO) is a popular model-based approach for optimising such black-box functions.
We extend previous work on BO by density-ratio estimation (BORE) to the multi-objective setting.
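MBORE's multi-objective extension (its scalarisation of the objectives) is not reproduced here; the sketch below, under assumed toy data, shows only the single-objective BORE core: label the best gamma-fraction of observations as positives, train a classifier, and use its predicted probability as the acquisition value, a proxy for the density ratio:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def bore_acquisition(X_obs, y_obs, X_cand, gamma=0.25):
    """Density-ratio acquisition: probability that a candidate belongs to
    the best gamma-quantile of observations (minimisation convention)."""
    tau = np.quantile(y_obs, gamma)
    labels = (y_obs <= tau).astype(int)   # 1 = among the best points so far
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_obs, labels)
    return clf.predict_proba(X_cand)[:, 1]

rng = np.random.default_rng(2)
X_obs = rng.uniform(size=(40, 3))
y_obs = np.sum((X_obs - 0.5) ** 2, axis=1)   # toy objective to minimise
X_cand = rng.uniform(size=(500, 3))
next_x = X_cand[np.argmax(bore_acquisition(X_obs, y_obs, X_cand))]
```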
arXiv Detail & Related papers (2022-03-31T09:27:59Z)
- Conservative Objective Models for Effective Offline Model-Based Optimization [78.19085445065845]
Computational design problems arise in a number of settings, from synthetic biology to computer architectures.
We propose a method that learns a model of the objective function that lower bounds the actual value of the ground-truth objective on out-of-distribution inputs.
COMs are simple to implement and outperform a number of existing methods on a wide range of MBO problems.
arXiv Detail & Related papers (2021-07-14T17:55:28Z)
- Bayesian Algorithm Execution: Estimating Computable Properties of Black-box Functions Using Mutual Information [78.78486761923855]
In many real world problems, we want to infer some property of an expensive black-box function f, given a budget of T function evaluations.
We present a procedure, InfoBAX, that sequentially chooses queries that maximize mutual information with respect to the algorithm's output.
On these problems, InfoBAX uses up to 500 times fewer queries to f than required by the original algorithm.
arXiv Detail & Related papers (2021-04-19T17:22:11Z)
- Resource Aware Multifidelity Active Learning for Efficient Optimization [0.8717253904965373]
This paper introduces the Resource Aware Active Learning (RAAL) strategy to accelerate the optimization of black box functions.
The RAAL strategy optimally seeds multiple points at each iteration, allowing for a major speed-up of the optimization task.
arXiv Detail & Related papers (2020-07-09T10:01:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences arising from its use.