Multi-Objective Bayesian Optimization over High-Dimensional Search Spaces
- URL: http://arxiv.org/abs/2109.10964v1
- Date: Wed, 22 Sep 2021 18:30:07 GMT
- Title: Multi-Objective Bayesian Optimization over High-Dimensional Search Spaces
- Authors: Samuel Daulton, David Eriksson, Maximilian Balandat, Eytan Bakshy
- Abstract summary: MORBO is a method for multi-objective Bayesian optimization over high-dimensional search spaces.
We show that MORBO significantly advances the state-of-the-art in sample-efficiency for several high-dimensional synthetic and real-world multi-objective problems.
- Score: 16.368143857907
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The ability to optimize multiple competing objective functions with high
sample efficiency is imperative in many applied problems across science and
industry. Multi-objective Bayesian optimization (BO) achieves strong empirical
performance on such problems, but even with recent methodological advances, it
has been restricted to simple, low-dimensional domains. Most existing BO
methods exhibit poor performance on search spaces with more than a few dozen
parameters. In this work we propose MORBO, a method for multi-objective
Bayesian optimization over high-dimensional search spaces. MORBO performs local
Bayesian optimization within multiple trust regions simultaneously, allowing it
to explore and identify diverse solutions even when the objective functions are
difficult to model globally. We show that MORBO significantly advances the
state-of-the-art in sample-efficiency for several high-dimensional synthetic
and real-world multi-objective problems, including a vehicle design problem
with 222 parameters, demonstrating that MORBO is a practical approach for
challenging and important problems that were previously out of reach for BO
methods.
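The core idea in the abstract, running local Bayesian optimization inside several trust regions at once and pooling the nondominated points they discover, can be illustrated with a short sketch. The code below is not the authors' MORBO implementation; it is a minimal, hypothetical analogue that substitutes scikit-learn Gaussian processes, random scalarizations, and a simple expand/shrink rule for whatever surrogate models and candidate-selection criterion the paper actually uses. The toy_problem, trust-region sizes, and all other names are illustrative assumptions.

```python
# Hedged sketch: a minimal multi-trust-region loop for multi-objective BO.
# NOT the authors' MORBO implementation; only the structure the abstract
# describes: several trust regions, each running local BO with its own
# surrogates, collectively building a Pareto front. All names are hypothetical.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)
DIM, N_OBJ, N_TR, ITERS = 20, 2, 4, 30

def toy_problem(x):
    """Two competing objectives on [0, 1]^DIM (both to be maximized)."""
    return np.array([-np.sum((x - 0.25) ** 2), -np.sum((x - 0.75) ** 2)])

def is_dominated(y, Y):
    """True if objective vector y is dominated by any row of Y (maximization)."""
    return np.any(np.all(Y >= y, axis=1) & np.any(Y > y, axis=1))

def pareto_front(Y):
    return np.array([not is_dominated(y, np.delete(Y, i, axis=0)) for i, y in enumerate(Y)])

# Initial design shared by all trust regions.
X = rng.uniform(size=(2 * DIM, DIM))
Y = np.array([toy_problem(x) for x in X])
lengths = np.full(N_TR, 0.4)          # per-trust-region edge length

for it in range(ITERS):
    # Center each trust region at a (randomly chosen) current Pareto point.
    pareto_idx = np.flatnonzero(pareto_front(Y))
    centers = X[rng.choice(pareto_idx, size=N_TR)]
    for k in range(N_TR):
        lo = np.clip(centers[k] - lengths[k] / 2, 0, 1)
        hi = np.clip(centers[k] + lengths[k] / 2, 0, 1)
        inside = np.all((X >= lo) & (X <= hi), axis=1)
        if inside.sum() < 2:           # too few local points: fall back to all data
            inside[:] = True
        # Independent local GP surrogate per objective.
        gps = [GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
               .fit(X[inside], Y[inside, j]) for j in range(N_OBJ)]
        # Thompson-style proposal: random scalarization over candidates in the region.
        cand = rng.uniform(lo, hi, size=(512, DIM))
        w = rng.dirichlet(np.ones(N_OBJ))
        sampled = np.column_stack([gp.sample_y(cand, random_state=int(rng.integers(1_000_000))).ravel()
                                   for gp in gps])
        x_new = cand[np.argmax(sampled @ w)]
        y_new = toy_problem(x_new)
        # Expand the region on "success" (new point is nondominated), else shrink.
        lengths[k] = min(lengths[k] * 1.5, 0.8) if not is_dominated(y_new, Y) else max(lengths[k] * 0.7, 0.05)
        X, Y = np.vstack([X, x_new]), np.vstack([Y, y_new])

print(f"Pareto points found: {pareto_front(Y).sum()} of {len(Y)} evaluations")
```

The expand-or-shrink step mirrors the standard trust-region heuristic: a region that keeps contributing nondominated points grows, while a stagnating one contracts, which is what allows purely local models to make progress when the objectives are too hard to model globally.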
Related papers
- UCB-driven Utility Function Search for Multi-objective Reinforcement Learning [75.11267478778295]
In Multi-objective Reinforcement Learning (MORL), agents are tasked with optimising decision-making behaviours.
We focus on the case of linear utility functions parameterised by weight vectors w.
We introduce a method based on Upper Confidence Bound to efficiently search for the most promising weight vectors during different stages of the learning process.
arXiv Detail & Related papers (2024-05-01T09:34:42Z)
- Real-Time Image Segmentation via Hybrid Convolutional-Transformer Architecture Search [49.81353382211113]
We address the challenge of integrating multi-head self-attention into high resolution representation CNNs efficiently.
We develop a multi-target multi-branch supernet method, which fully utilizes the advantages of high-resolution features.
We present a series of models obtained via the Hybrid Convolutional-Transformer Architecture Search (HyCTAS) method, which searches for the best hybrid combination of lightweight convolution layers and memory-efficient self-attention layers.
arXiv Detail & Related papers (2024-03-15T15:47:54Z)
- Large Language Models to Enhance Bayesian Optimization [57.474613739645605]
We present LLAMBO, a novel approach that integrates the capabilities of Large Language Models (LLMs) within Bayesian optimization.
At a high level, we frame the BO problem in natural language, enabling LLMs to iteratively propose and evaluate promising solutions conditioned on historical evaluations.
Our findings illustrate that LLAMBO is effective at zero-shot warmstarting, and enhances surrogate modeling and candidate sampling, especially in the early stages of search when observations are sparse.
arXiv Detail & Related papers (2024-02-06T11:44:06Z)
- Scalable Bayesian optimization with high-dimensional outputs using randomized prior networks [3.0468934705223774]
We propose a deep learning framework for BO and sequential decision making based on bootstrapped ensembles of neural architectures with randomized priors.
We show that the proposed framework can approximate functional relationships between design variables and quantities of interest, even in cases where the latter take values in high-dimensional vector spaces or even infinite-dimensional function spaces.
We test the proposed framework against state-of-the-art methods for BO and demonstrate superior performance across several challenging tasks with high-dimensional outputs.
arXiv Detail & Related papers (2023-02-14T18:55:21Z)
- Discovering Many Diverse Solutions with Bayesian Optimization [7.136022698519586]
We propose Rank-Ordered Bayesian Optimization with Trust-regions (ROBOT).
ROBOT aims to find a portfolio of high-performing solutions that are diverse according to a user-specified diversity metric.
We show that it can discover large sets of high-performing diverse solutions while requiring few additional function evaluations.
arXiv Detail & Related papers (2022-10-20T01:56:38Z)
- Joint Entropy Search for Multi-objective Bayesian Optimization [0.0]
We propose a novel information-theoretic acquisition function for BO called Joint Entropy Search.
We showcase the effectiveness of this new approach on a range of synthetic and real-world problems in terms of the hypervolume and its weighted variants.
arXiv Detail & Related papers (2022-10-06T13:19:08Z)
- Multi-Objective Quality Diversity Optimization [2.4608515808275455]
We propose an extension of the MAP-Elites algorithm to the multi-objective setting: Multi-Objective MAP-Elites (MOME).
Namely, it combines the diversity inherited from the MAP-Elites grid algorithm with the strength of multi-objective optimization; a minimal illustrative sketch of this idea appears after this related-papers list.
We evaluate our method on several tasks, from standard optimization problems to robotics simulations.
arXiv Detail & Related papers (2022-02-07T10:48:28Z)
- Learning Proximal Operators to Discover Multiple Optima [66.98045013486794]
We present an end-to-end method to learn the proximal operator across a family of non-convex problems.
We show that for weakly convex objectives and under mild conditions, the method converges globally.
arXiv Detail & Related papers (2022-01-28T05:53:28Z)
- Fighting the curse of dimensionality: A machine learning approach to finding global optima [77.34726150561087]
This paper shows how to find global optima in structural optimization problems.
By exploiting certain cost functions, we either obtain the global optimum at best or obtain results superior to established optimization procedures at worst.
arXiv Detail & Related papers (2021-10-28T09:50:29Z)
- Conservative Objective Models for Effective Offline Model-Based Optimization [78.19085445065845]
Computational design problems arise in a number of settings, from synthetic biology to computer architectures.
We propose a method that learns a model of the objective function that lower bounds the actual value of the ground-truth objective on out-of-distribution inputs.
COMs are simple to implement and outperform a number of existing methods on a wide range of MBO problems.
arXiv Detail & Related papers (2021-07-14T17:55:28Z)
- Empirical Study on the Benefits of Multiobjectivization for Solving Single-Objective Problems [0.0]
Local optima often prevent algorithms from making progress and thus pose a severe threat.
With the use of a sophisticated visualization technique based on the multi-objective gradients, the properties of the arising multi-objective landscapes are illustrated and examined.
We empirically show that the multi-objective algorithm MOGSA is able to exploit these properties to overcome local traps.
arXiv Detail & Related papers (2020-06-25T14:04:37Z)
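As a companion to the Multi-Objective Quality Diversity entry above, the sketch below illustrates the multi-objective MAP-Elites idea in its simplest form: a behavior-descriptor grid whose cells each store a small Pareto front instead of a single elite. It is a toy reconstruction from the one-sentence summary, not the MOME algorithm from the paper; the descriptor, mutation operator, toy_objectives, and cell capacity are all illustrative assumptions.

```python
# Hedged sketch of a multi-objective MAP-Elites-style loop (not the MOME paper's
# code): every cell of a behavior-descriptor grid stores a small Pareto front
# instead of a single elite. All problem-specific names are illustrative.
import numpy as np

rng = np.random.default_rng(0)
DIM, N_OBJ, GRID, BUDGET, CELL_CAP = 8, 2, 10, 5000, 8

def toy_objectives(x):
    """Two competing objectives to maximize on [0, 1]^DIM."""
    return np.array([-np.sum((x - 0.2) ** 2), -np.sum((x - 0.8) ** 2)])

def descriptor(x):
    """Map a solution to a 2-D behavior descriptor in [0, 1]^2 (here: first two coords)."""
    return x[:2]

def dominates(a, b):
    return np.all(a >= b) and np.any(a > b)

archive = {}  # cell index (i, j) -> list of nondominated (x, y) pairs

def insert(x, y):
    cell = tuple(np.minimum((descriptor(x) * GRID).astype(int), GRID - 1))
    front = archive.setdefault(cell, [])
    if any(dominates(fy, y) for _, fy in front):
        return
    # Drop stored solutions that the newcomer dominates, then add it.
    front[:] = [(fx, fy) for fx, fy in front if not dominates(y, fy)]
    front.append((x, y))
    if len(front) > CELL_CAP:            # keep each cell's front bounded
        front.pop(rng.integers(len(front)))

# Random initialization, then mutate random elites from random cells.
for _ in range(100):
    x = rng.uniform(size=DIM)
    insert(x, toy_objectives(x))
for _ in range(BUDGET):
    cell = list(archive)[rng.integers(len(archive))]
    parent, _ = archive[cell][rng.integers(len(archive[cell]))]
    child = np.clip(parent + 0.1 * rng.normal(size=DIM), 0, 1)
    insert(child, toy_objectives(child))

print(f"filled cells: {len(archive)}, stored solutions: {sum(len(f) for f in archive.values())}")
```

A cell accepts a newcomer only if no stored solution dominates it and then evicts anything the newcomer dominates, so each cell converges toward its own local Pareto set while the grid as a whole preserves behavioral diversity.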
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.