qPOTS: Efficient batch multiobjective Bayesian optimization via Pareto
optimal Thompson sampling
- URL: http://arxiv.org/abs/2310.15788v1
- Date: Tue, 24 Oct 2023 12:35:15 GMT
- Title: qPOTS: Efficient batch multiobjective Bayesian optimization via Pareto
optimal Thompson sampling
- Authors: S. Ashwin Renganathan
- Abstract summary: A sample-efficient approach to solving multiobjective optimization is via Gaussian process (GP) surrogates.
We propose a simple but effective Thompson sampling based approach where new candidate(s) are chosen from the Pareto frontier of a random GP posterior sample.
Our approach demonstrates strong empirical performance over the state of the art, both in terms of accuracy and computational efficiency, on synthetic as well as real-world experiments.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Classical evolutionary approaches for multiobjective optimization are quite
effective but require many queries to the objectives; this can be prohibitive
when objectives are expensive oracles. A sample-efficient approach to solving
multiobjective optimization is via Gaussian process (GP) surrogates and
Bayesian optimization (BO). Multiobjective Bayesian optimization (MOBO)
involves the construction of an acquisition function which is optimized to
acquire new observation candidates. This "inner" optimization can be hard due
to various reasons: acquisition functions being nonconvex, nondifferentiable
and/or unavailable in analytical form; the success of MOBO heavily relies on
this inner optimization. We do away with this hard acquisition function
optimization step and propose a simple but effective Thompson sampling based
approach (qPOTS) where new candidate(s) are chosen from the Pareto frontier of
random GP posterior sample paths obtained by solving a much cheaper
multiobjective optimization problem. To further improve computational
tractability in higher dimensions, we propose automated selection of an active
set of candidates combined with a Nyström approximation. Our approach
applies to arbitrary GP prior assumptions and demonstrates strong empirical
performance over the state of the art, both in terms of accuracy and
computational efficiency, on synthetic as well as real-world experiments.
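The core idea of the abstract above (select candidates from the Pareto frontier of random GP posterior sample paths, rather than optimizing an acquisition function) can be illustrated with a minimal NumPy sketch. This is an illustrative example under simplifying assumptions (independent single-output GPs per objective, a fixed 1-D candidate grid, exact posterior sampling), not the authors' implementation:

```python
import numpy as np

def rbf_kernel(A, B, ls=0.3):
    """Squared-exponential kernel between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls ** 2)

def gp_posterior_sample(X_train, y_train, X_cand, rng, noise=1e-6):
    """Draw one GP posterior sample path at the candidate points."""
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    Ks = rbf_kernel(X_cand, X_train)
    mu = Ks @ np.linalg.solve(K, y_train)
    cov = rbf_kernel(X_cand, X_cand) - Ks @ np.linalg.solve(K, Ks.T)
    cov += noise * np.eye(len(X_cand))  # jitter for numerical stability
    return rng.multivariate_normal(mu, cov)

def pareto_mask(F):
    """Boolean mask of the non-dominated rows of F (minimization)."""
    n = len(F)
    mask = np.ones(n, dtype=bool)
    for i in range(n):
        for j in range(n):
            if i != j and np.all(F[j] <= F[i]) and np.any(F[j] < F[i]):
                mask[i] = False
                break
    return mask

rng = np.random.default_rng(0)
X_train = rng.uniform(size=(8, 1))
y1 = np.sin(3 * X_train[:, 0])        # toy objective 1
y2 = np.cos(3 * X_train[:, 0])        # toy objective 2
X_cand = np.linspace(0, 1, 200)[:, None]

# One posterior sample path per objective, then take candidates from the
# Pareto frontier of the sampled objective values (Thompson sampling step).
F = np.stack([gp_posterior_sample(X_train, y, X_cand, rng)
              for y in (y1, y2)], axis=1)
batch = X_cand[pareto_mask(F)][:4]    # a batch of q <= 4 candidates
```

Note that the "inner" problem here is the cheap multiobjective optimization of the sampled paths (solved by brute-force dominance checks on a grid); the paper's active-set selection and Nyström approximation for higher dimensions are not reproduced.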
Related papers
- End-to-End Learning for Fair Multiobjective Optimization Under
Uncertainty [55.04219793298687]
The Predict-Then-Optimize (PtO) paradigm in machine learning aims to maximize downstream decision quality.
This paper extends the PtO methodology to optimization problems with nondifferentiable Ordered Weighted Averaging (OWA) objectives.
It shows how optimization of OWA functions can be effectively integrated with parametric prediction for fair and robust optimization under uncertainty.
arXiv Detail & Related papers (2024-02-12T16:33:35Z) - Poisson Process for Bayesian Optimization [126.51200593377739]
We propose a ranking-based surrogate model based on the Poisson process and introduce an efficient BO framework, namely Poisson Process Bayesian Optimization (PoPBO)
Compared to the classic GP-BO method, our PoPBO has lower costs and better robustness to noise, which is verified by abundant experiments.
arXiv Detail & Related papers (2024-02-05T02:54:50Z) - Backpropagation of Unrolled Solvers with Folded Optimization [55.04219793298687]
The integration of constrained optimization models as components in deep networks has led to promising advances on many specialized learning tasks.
One typical strategy is algorithm unrolling, which relies on automatic differentiation through the operations of an iterative solver.
This paper provides theoretical insights into the backward pass of unrolled optimization, leading to a system for generating efficiently solvable analytical models of backpropagation.
arXiv Detail & Related papers (2023-01-28T01:50:42Z) - An Empirical Evaluation of Zeroth-Order Optimization Methods on
AI-driven Molecule Optimization [78.36413169647408]
We study the effectiveness of various ZO optimization methods for optimizing molecular objectives.
We show the advantages of ZO sign-based gradient descent (ZO-signGD)
We demonstrate the potential effectiveness of ZO optimization methods on widely used benchmark tasks from the Guacamol suite.
arXiv Detail & Related papers (2022-10-27T01:58:10Z) - Generalizing Bayesian Optimization with Decision-theoretic Entropies [102.82152945324381]
We consider a generalization of Shannon entropy from work in statistical decision theory.
We first show that special cases of this entropy lead to popular acquisition functions used in BO procedures.
We then show how alternative choices for the loss yield a flexible family of acquisition functions.
arXiv Detail & Related papers (2022-10-04T04:43:58Z) - Optimistic Optimization of Gaussian Process Samples [30.226274682578172]
A competing, computationally more efficient, global optimization framework is optimistic optimization, which exploits prior knowledge about the geometry of the search space in form of a dissimilarity function.
We argue that there is a new research domain between geometric and probabilistic search, i.e. methods that run drastically faster than traditional Bayesian optimization, while retaining some of the crucial functionality of Bayesian optimization.
arXiv Detail & Related papers (2022-09-02T09:06:24Z) - Dynamic Multi-objective Ensemble of Acquisition Functions in Batch
Bayesian Optimization [1.1602089225841632]
The acquisition function plays a crucial role in the optimization process.
Three acquisition functions are dynamically selected from a set based on their current and historical performance.
Using an evolutionary multi-objective algorithm to optimize such a MOP, a set of non-dominated solutions can be obtained.
arXiv Detail & Related papers (2022-06-22T14:09:18Z) - Surrogate modeling for Bayesian optimization beyond a single Gaussian
process [62.294228304646516]
We propose a novel Bayesian surrogate model to balance exploration with exploitation of the search space.
To endow function sampling with scalability, random feature-based kernel approximation is leveraged per GP model.
To further establish convergence of the proposed EGP-TS to the global optimum, analysis is conducted based on the notion of Bayesian regret.
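The random feature-based kernel approximation mentioned above can be sketched with random Fourier features (Rahimi–Rechţ style); a GP prior sample path then reduces to a random linear combination of the features. This is an illustrative sketch with assumed hyperparameters, not the paper's EGP-TS code:

```python
import numpy as np

def random_fourier_features(X, n_features, ls, rng):
    """Features whose inner products approximate an RBF kernel."""
    W = rng.normal(scale=1.0 / ls, size=(X.shape[1], n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

rng = np.random.default_rng(1)
X = rng.uniform(size=(50, 2))
ls = 0.5
Phi = random_fourier_features(X, n_features=2000, ls=ls, rng=rng)

# Sampling a GP prior path now costs one Gaussian weight per feature,
# instead of a Cholesky factorization of the full kernel matrix.
w = rng.normal(size=2000)
f_sample = Phi @ w                     # approximate prior sample at X

# Sanity check: Phi @ Phi.T approximates the exact RBF Gram matrix.
K_approx = Phi @ Phi.T
K_exact = np.exp(-0.5 * ((X[:, None] - X[None]) ** 2).sum(-1) / ls ** 2)
max_err = np.abs(K_approx - K_exact).max()
```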
arXiv Detail & Related papers (2022-05-27T16:43:10Z) - $\{\text{PF}\}^2\text{ES}$: Parallel Feasible Pareto Frontier Entropy
Search for Multi-Objective Bayesian Optimization Under Unknown Constraints [4.672142224503371]
We present a novel information-theoretic acquisition function for multi-objective Bayesian optimization.
$\text{PF}^2$ES provides a low-cost and accurate estimate of the mutual information for the parallel setting.
We benchmark $\text{PF}^2$ES across synthetic and real-life problems.
arXiv Detail & Related papers (2022-04-11T21:06:23Z) - Bayesian Optimization with High-Dimensional Outputs [42.311308135418805]
In practice, we often wish to optimize objectives defined over many correlated outcomes (or tasks)
We devise an efficient technique for exact multi-task GP sampling that combines exploiting Kronecker structure in the covariance matrices with Matheron's identity.
We demonstrate how this unlocks a new class of applications for Bayesian Optimization across a range of tasks in science and engineering.
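Matheron's identity referenced above turns a sample from the GP prior into a sample from the posterior by adding a data-dependent correction. A minimal single-output sketch of the identity (illustrative only; the paper's contribution is exploiting Kronecker structure to scale this to many correlated outputs, which is not reproduced here):

```python
import numpy as np

def rbf(A, B, ls=0.2):
    """Squared-exponential kernel between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls ** 2)

rng = np.random.default_rng(0)
X = np.linspace(0.1, 0.9, 5)[:, None]      # training inputs
y = np.sin(4 * X[:, 0])                    # training targets
Xs = np.linspace(0.0, 1.0, 40)[:, None]    # test inputs
noise = 1e-8

# 1. Draw one JOINT prior sample over training and test inputs.
Z = np.vstack([X, Xs])
L = np.linalg.cholesky(rbf(Z, Z) + 1e-6 * np.eye(len(Z)))
prior = L @ rng.normal(size=len(Z))
f_X, f_s = prior[:len(X)], prior[len(X):]

# 2. Matheron's update: posterior sample = prior sample + data correction,
#    f_post(*) = f_prior(*) + K(*, X) K(X, X)^{-1} (y - f_prior(X)).
K = rbf(X, X) + noise * np.eye(len(X))
resid = np.linalg.solve(K, y - f_X)
post_s = f_s + rbf(Xs, X) @ resid          # posterior sample at test inputs
post_X = f_X + rbf(X, X) @ resid           # (near-)interpolates the data
```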
arXiv Detail & Related papers (2021-06-24T13:15:12Z) - Resource Aware Multifidelity Active Learning for Efficient Optimization [0.8717253904965373]
This paper introduces the Resource Aware Active Learning (RAAL) strategy to accelerate the optimization of black box functions.
The RAAL strategy optimally seeds multiple points at each iteration, allowing for a major speed-up of the optimization task.
arXiv Detail & Related papers (2020-07-09T10:01:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.