R-MBO: A Multi-surrogate Approach for Preference Incorporation in
Multi-objective Bayesian Optimisation
- URL: http://arxiv.org/abs/2204.13166v1
- Date: Wed, 27 Apr 2022 19:58:26 GMT
- Title: R-MBO: A Multi-surrogate Approach for Preference Incorporation in
Multi-objective Bayesian Optimisation
- Authors: Tinkle Chugh
- Abstract summary: We present an a-priori multi-surrogate approach to incorporate the desirable objective function values as the preferences of a decision-maker in multi-objective BO.
The results and comparison with the existing mono-surrogate approach on benchmark and real-world optimisation problems show the potential of the proposed approach.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Many real-world multi-objective optimisation problems rely on computationally
expensive function evaluations. Multi-objective Bayesian optimisation (BO) can
be used to alleviate the computation time to find an approximated set of Pareto
optimal solutions. In many real-world problems, a decision-maker has some
preferences on the objective functions. One approach to incorporate the
preferences in multi-objective BO is to use a scalarising function and build a
single surrogate model (mono-surrogate approach) on it. This approach has two
major limitations. Firstly, the fitness landscape of the scalarising function
and the objective functions may not be similar. Secondly, the approach assumes
that the scalarising function distribution is Gaussian, and thus a closed-form
expression of an acquisition function, e.g., expected improvement, can be used.
We overcome these limitations by building independent surrogate models
(multi-surrogate approach) on each objective function and show that the
distribution of the scalarising function is not Gaussian. We approximate the
distribution using the Generalised Extreme Value distribution. We present an a-priori
multi-surrogate approach to incorporate the desirable objective function values
(or reference point) as the preferences of a decision-maker in multi-objective
BO. The results and comparison with the existing mono-surrogate approach on
benchmark and real-world optimisation problems show the potential of the
proposed approach.
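The abstract's multi-surrogate idea can be sketched numerically: draw Monte Carlo samples from independent Gaussian posteriors (one per objective), scalarise them against a reference point, and fit a generalised extreme value (GEV) distribution to the result. All numbers below (posterior means and standard deviations, reference point, weights) are hypothetical stand-ins rather than the paper's setup, and the Chebyshev-style achievement scalarising function is just one common choice:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical posterior predictions at one candidate solution from two
# independent surrogates (multi-surrogate approach): mean and std per objective.
means = np.array([0.4, 1.2])
stds = np.array([0.3, 0.5])
ref_point = np.array([0.0, 0.0])  # decision-maker's desirable objective values
weights = np.array([0.5, 0.5])

# Monte Carlo draws of the objective vector from the independent posteriors
samples = rng.normal(means, stds, size=(20_000, 2))

# Chebyshev-style achievement scalarising function w.r.t. the reference point
g = np.max(weights * (samples - ref_point), axis=1)

# The max of weighted Gaussians is not itself Gaussian; a generalised extreme
# value (GEV) distribution is a better model for the scalarised values.
shape, loc, scale = stats.genextreme.fit(g)
```

The fitted GEV can then be used inside an acquisition function in place of the closed-form Gaussian expressions that the mono-surrogate approach assumes.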
Related papers
- Federated Communication-Efficient Multi-Objective Optimization [27.492821176616815]
We propose FedCMOO, a novel communication-efficient federated multi-objective optimization (FMOO) algorithm that improves the error convergence of the model compared to existing approaches.
In addition, we introduce a variant of FedCMOO that allows users to specify a preference over the objectives in terms of a desired ratio of the final objective values.
arXiv Detail & Related papers (2024-10-21T18:09:22Z)
- UCB-driven Utility Function Search for Multi-objective Reinforcement Learning [75.11267478778295]
In Multi-objective Reinforcement Learning (MORL), agents are tasked with optimising decision-making behaviours that trade off between multiple, possibly conflicting, objectives.
We focus on the case of linear utility functions parameterised by weight vectors w.
We introduce a method based on Upper Confidence Bound to efficiently search for the most promising weight vectors during different stages of the learning process.
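The UCB-driven weight-vector search can be illustrated with a toy bandit: each arm is a candidate weight vector, each pull returns a noisy utility estimate, and the arm with the highest upper confidence bound is pulled next. The candidate weight vectors, the `true_value` rewards, and the exploration constant below are invented for illustration and are not from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical candidate weight vectors for a linear utility u(x) = w . r(x)
weight_vectors = np.array([[1.0, 0.0], [0.7, 0.3], [0.5, 0.5],
                           [0.3, 0.7], [0.0, 1.0]])

# Stand-in for the unknown quality of learning under each weight vector
true_value = np.array([0.2, 0.5, 0.8, 0.6, 0.3])

counts = np.zeros(len(weight_vectors))
sums = np.zeros(len(weight_vectors))

def ucb_pick(t, c=0.5):
    # Pull each arm once first, then use the UCB index: mean + c * sqrt(log t / n)
    if np.any(counts == 0):
        return int(np.argmin(counts))
    means = sums / counts
    bonus = c * np.sqrt(np.log(t + 1) / counts)
    return int(np.argmax(means + bonus))

for t in range(500):
    arm = ucb_pick(t)
    reward = true_value[arm] + rng.normal(0, 0.1)  # noisy evaluation
    counts[arm] += 1
    sums[arm] += reward
```

After the budget is spent, the empirical means concentrate on the most promising weight vector while the confidence bonus keeps the others from being discarded too early.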
arXiv Detail & Related papers (2024-05-01T09:34:42Z)
- Joint Entropy Search for Multi-objective Bayesian Optimization [0.0]
We propose a novel information-theoretic acquisition function for BO called Joint Entropy Search.
We showcase the effectiveness of this new approach on a range of synthetic and real-world problems in terms of the hypervolume and its weighted variants.
arXiv Detail & Related papers (2022-10-06T13:19:08Z)
- Batch Bayesian Optimization via Particle Gradient Flows [0.5735035463793008]
We show how to find global optima of objective functions which are only available as a black-box or are expensive to evaluate.
We construct a new acquisition function, based on multipoint expected improvement, that is defined over the space of probability measures.
arXiv Detail & Related papers (2022-09-10T18:10:15Z)
- A General Recipe for Likelihood-free Bayesian Optimization [115.82591413062546]
We propose likelihood-free BO (LFBO) to extend BO to a broader class of models and utilities.
LFBO directly models the acquisition function without having to separately perform inference with a probabilistic surrogate model.
We show that computing the acquisition function in LFBO can be reduced to optimizing a weighted classification problem.
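A rough sketch of that weighted-classification reduction: evaluated points above a threshold become positive examples weighted by their improvement, and a classifier's probability output then plays the role of an (EI-like) acquisition function. The toy objective, the quadratic features, and the plain gradient-descent optimiser below are assumptions for illustration, not LFBO's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy black-box objective (to be maximised) and a set of evaluated points
def f(x):
    return -(x - 0.6) ** 2

X = rng.uniform(0, 1, size=200)
y = f(X) + rng.normal(0, 0.01, size=200)

# Label points that improve on the threshold tau as positive and weight each
# positive example by its (normalised) improvement y - tau; negatives get 1.
tau = np.quantile(y, 0.8)
labels = (y > tau).astype(float)
weights = np.where(y > tau, (y - tau) / (y.max() - tau + 1e-12), 1.0)

# Minimal weighted logistic regression on centred features [x-0.5, (x-0.5)^2, 1]
feats = np.stack([X - 0.5, (X - 0.5) ** 2, np.ones_like(X)], axis=1)
w = np.zeros(3)
for _ in range(5000):
    p = 1.0 / (1.0 + np.exp(-feats @ w))
    grad = feats.T @ (weights * (p - labels)) / len(X)
    w -= 1.0 * grad

# Acquisition: classifier probability over a candidate grid
grid = np.linspace(0, 1, 101)
gfeats = np.stack([grid - 0.5, (grid - 0.5) ** 2, np.ones_like(grid)], axis=1)
acq = 1.0 / (1.0 + np.exp(-gfeats @ w))
x_next = grid[np.argmax(acq)]
```

The classifier never needs a probabilistic surrogate: maximising its output proposes the next evaluation point directly.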
arXiv Detail & Related papers (2022-06-27T03:55:27Z)
- Mono-surrogate vs Multi-surrogate in Multi-objective Bayesian Optimisation [0.0]
We build a surrogate model for each objective function and show that the scalarising function distribution is not Gaussian.
Results and comparison with existing approaches on standard benchmark and real-world optimisation problems show the potential of the multi-surrogate approach.
arXiv Detail & Related papers (2022-05-02T09:25:04Z)
- RoMA: Robust Model Adaptation for Offline Model-based Optimization [115.02677045518692]
We consider the problem of searching an input maximizing a black-box objective function given a static dataset of input-output queries.
A popular approach to solving this problem is maintaining a proxy model that approximates the true objective function.
Here, the main challenge is how to avoid adversarially optimized inputs during the search.
arXiv Detail & Related papers (2021-10-27T05:37:12Z)
- Approximate Bayesian Optimisation for Neural Networks [6.921210544516486]
A body of work has been done to automate machine learning algorithms, highlighting the importance of model choice.
Balancing analytical tractability with computational feasibility is necessary to ensure both efficiency and applicability.
arXiv Detail & Related papers (2021-08-27T19:03:32Z)
- Modeling the Second Player in Distributionally Robust Optimization [90.25995710696425]
We argue for the use of neural generative models to characterize the worst-case distribution.
This approach poses a number of implementation and optimization challenges.
We find that the proposed approach yields models that are more robust than comparable baselines.
arXiv Detail & Related papers (2021-03-18T14:26:26Z)
- Robust, Accurate Stochastic Optimization for Variational Inference [68.83746081733464]
We show that common optimization methods lead to poor variational approximations if the problem is moderately large.
Motivated by these findings, we develop a more robust and accurate optimization framework by viewing the underlying algorithm as producing a Markov chain.
arXiv Detail & Related papers (2020-09-01T19:12:11Z)
- A Multi-Agent Primal-Dual Strategy for Composite Optimization over Distributed Features [52.856801164425086]
We study multi-agent sharing optimization problems with the objective function being the sum of smooth local functions plus a convex (possibly non-smooth) coupling function.
arXiv Detail & Related papers (2020-06-15T19:40:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.