Choice functions based multi-objective Bayesian optimisation
- URL: http://arxiv.org/abs/2110.08217v1
- Date: Fri, 15 Oct 2021 17:24:03 GMT
- Title: Choice functions based multi-objective Bayesian optimisation
- Authors: Alessio Benavoli and Dario Azzimonti and Dario Piga
- Abstract summary: We introduce a new framework for multi-objective Bayesian optimisation where the multi-objective functions can only be accessed via choice judgements.
By placing a Gaussian process prior on f and deriving a novel likelihood model for choice data, we propose a Bayesian framework for learning choice functions.
- Score: 1.0742675209112622
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work we introduce a new framework for multi-objective Bayesian optimisation where the multi-objective functions can only be accessed via choice judgements, such as "I pick options A,B,C among this set of five options A,B,C,D,E". The fact that option D is rejected means that there is at least one option among the selected ones A,B,C that I strictly prefer over D (but I do not have to specify which one). We assume that there is a latent vector function f which embeds the options into the real vector space of dimension $n_e$, so that the choice set can be represented through a Pareto set of non-dominated options. By placing a Gaussian process prior on f and deriving a novel likelihood model for choice data, we propose a Bayesian framework for learning choice functions. We then apply this surrogate model to solve a novel problem: multi-objective Bayesian optimisation from choice data.
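The modelling idea is easy to illustrate: once the options are embedded by f, a choice set is the set of non-dominated points, and each rejected option must be dominated by some chosen one. Below is a minimal sketch of that correspondence, assuming a hypothetical 2-dimensional latent embedding drawn at random; the paper's actual contributions (the GP prior on f and the choice likelihood) are not reproduced here.

```python
import numpy as np

def non_dominated(F):
    """Indices of the Pareto non-dominated rows of F (larger is better).

    Each row of F is the latent utility f(x) in R^{n_e} that the paper
    assumes embeds an option into a real vector space.
    """
    idx = []
    for i, fi in enumerate(F):
        dominated = any(np.all(fj >= fi) and np.any(fj > fi)
                        for j, fj in enumerate(F) if j != i)
        if not dominated:
            idx.append(i)
    return idx

# Hypothetical latent embedding of five options A..E with n_e = 2.
rng = np.random.default_rng(0)
F = rng.normal(size=(5, 2))
chosen = non_dominated(F)                        # the choice judgement
rejected = [i for i in range(5) if i not in chosen]
# Every rejected option is strictly dominated by at least one chosen
# option, which is exactly the information a choice judgement conveys.
print(chosen, rejected)
```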
Related papers
- Large Language Models Are Not Robust Multiple Choice Selectors [117.72712117510953]
Multiple choice questions (MCQs) serve as a common yet important task format in the evaluation of large language models (LLMs). This work shows that modern LLMs are vulnerable to option position changes due to their inherent "selection bias".
We propose a label-free, inference-time debiasing method, called PriDe, which separates the model's prior bias for option IDs from the overall prediction distribution.
arXiv Detail & Related papers (2023-09-07T17:44:56Z)
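The separation PriDe performs can be sketched under a simple factorisation assumption: the observed probability of an option ID is a position prior times a content score, and cycling the answer contents through the ID positions disentangles the two. The cyclic scheme follows the summary above; the array shapes, smoothing constant, and exact averaging are illustrative, not the paper's implementation.

```python
import numpy as np

def separate_prior(probs):
    """Split option-ID probabilities into a position prior and content scores.

    probs[k, i] is the model's probability of option ID i when the answer
    contents have been cyclically shifted by k, so content c sits at ID
    (c + k) % n.  Assumes p_obs(ID = i) is proportional to
    prior(i) * p_content(content at i).
    """
    n = probs.shape[1]
    logp = np.log(probs + 1e-12)
    # Over the n cyclic shifts, every content visits ID i exactly once,
    # so averaging over shifts isolates log prior(i) up to a constant.
    prior_log = logp.mean(axis=0)
    prior = np.exp(prior_log - prior_log.max())
    prior /= prior.sum()
    # Remove the prior and average each content's debiased score.
    content_log = np.array([
        np.mean([logp[k, (c + k) % n] - prior_log[(c + k) % n]
                 for k in range(n)])
        for c in range(n)
    ])
    scores = np.exp(content_log - content_log.max())
    return prior, scores / scores.sum()
```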
- Predictive Modeling through Hyper-Bayesian Optimization [60.586813904500595]
We propose a novel way of integrating model selection and BO for the single goal of reaching the function optima faster.
The algorithm moves back and forth between BO in the model space and BO in the function space, where the goodness of the recommended model is assessed.
In addition to improved sample efficiency, the framework outputs information about the black-box function.
arXiv Detail & Related papers (2023-08-01T04:46:58Z)
- Finding Optimal Diverse Feature Sets with Alternative Feature Selection [0.0]
We introduce alternative feature selection and formalize it as an optimization problem.
In particular, we define alternatives via constraints and enable users to control the number and dissimilarity of alternatives.
We show that a constant-factor approximation exists under certain conditions and propose corresponding search methods.
arXiv Detail & Related papers (2023-07-21T14:23:41Z)
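A brute-force sketch of the "alternatives via constraints" idea summarised above: the dissimilarity of an alternative is controlled through an upper bound on its overlap with the original selection, and the number of alternatives is controlled by repeating the search with each found set excluded. The quality function is a user-supplied placeholder; the paper casts this as a proper optimization problem with approximation guarantees rather than enumeration.

```python
import numpy as np
from itertools import combinations

def alternative_feature_set(quality, original, n_features, k, max_overlap):
    """Find a feature set of size k overlapping the original selection in at
    most max_overlap features, maximising a user-supplied quality function
    that maps a tuple of feature indices to a score.
    """
    best, best_q = None, -np.inf
    for S in combinations(range(n_features), k):
        if len(set(S) & set(original)) > max_overlap:
            continue                     # dissimilarity constraint violated
        q = quality(S)
        if q > best_q:
            best, best_q = S, q
    return best, best_q

# Toy usage: quality favours even-indexed features (purely illustrative).
alt, score = alternative_feature_set(
    quality=lambda S: sum(1 for i in S if i % 2 == 0),
    original={0, 2, 4}, n_features=10, k=3, max_overlap=1)
```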
- Learning Choice Functions with Gaussian Processes [0.225596179391365]
In consumer theory, ranking available objects by means of preference relations yields the most common description of individual choices.
We propose a choice-model which allows an individual to express a set-valued choice.
arXiv Detail & Related papers (2023-02-01T12:46:43Z)
- A General Recipe for Likelihood-free Bayesian Optimization [115.82591413062546]
We propose likelihood-free BO (LFBO) to extend BO to a broader class of models and utilities.
LFBO directly models the acquisition function without having to separately perform inference with a probabilistic surrogate model.
We show that computing the acquisition function in LFBO can be reduced to optimizing a weighted classification problem.
arXiv Detail & Related papers (2022-06-27T03:55:27Z)
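The weighted-classification reduction mentioned in the LFBO summary above can be sketched in a few lines. For an expected-improvement-style utility, points that beat the incumbent threshold are labelled positive and weighted by their improvement, and the classifier's positive-class score then serves as the acquisition function. The linear logistic model is an illustrative stand-in; the paper allows a broad class of classifiers.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def lfbo_ei_acquisition(X, y, tau):
    """Acquisition via weighted classification (an EI-style instance).

    X: observed inputs, y: observed (to-be-maximised) values, tau: incumbent
    threshold.  Assumes both classes are present in the data.
    """
    z = (y > tau).astype(int)
    w = np.where(z == 1, y - tau, 1.0)    # improvement-weighted positives
    clf = LogisticRegression().fit(X, z, sample_weight=w)
    return lambda Xq: clf.predict_proba(Xq)[:, 1]  # larger = more promising

# Usage: score candidate points and query the maximiser next.
rng = np.random.default_rng(1)
X = rng.uniform(size=(30, 2))
y = -np.sum((X - 0.5) ** 2, axis=1)
acq = lfbo_ei_acquisition(X, y, tau=np.quantile(y, 0.8))
candidates = rng.uniform(size=(100, 2))
x_next = candidates[np.argmax(acq(candidates))]
```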
- Surrogate modeling for Bayesian optimization beyond a single Gaussian process [62.294228304646516]
We propose a novel Bayesian surrogate model to balance exploration with exploitation of the search space.
To endow function sampling with scalability, random feature-based kernel approximation is leveraged per GP model.
To further establish convergence of the proposed EGP-TS to the global optimum, analysis is conducted based on the notion of Bayesian regret.
arXiv Detail & Related papers (2022-05-27T16:43:10Z)
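The scalability device named in the summary above can be sketched as follows: random Fourier features turn a GP with an RBF kernel into a Bayesian linear model whose posterior weights can be sampled cheaply, yielding an approximate posterior function sample to maximise in Thompson sampling. The kernel choice, feature count, and noise level are illustrative, and the paper's ensemble of GPs is omitted.

```python
import numpy as np

def sample_gp_path(X, y, lengthscale=1.0, noise_var=1e-2, n_feat=200, seed=0):
    """Draw one approximate posterior GP sample via random Fourier features.

    Returns a callable f_hat(Z); in Thompson sampling the next query point
    is the maximiser of this sampled function.
    """
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(scale=1.0 / lengthscale, size=(n_feat, d))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_feat)
    phi = lambda Z: np.sqrt(2.0 / n_feat) * np.cos(Z @ W.T + b)
    Phi = phi(X)
    # Posterior of the weights in the equivalent Bayesian linear model.
    A = Phi.T @ Phi + noise_var * np.eye(n_feat)
    mean = np.linalg.solve(A, Phi.T @ y)
    cov = noise_var * np.linalg.inv(A)
    theta = rng.multivariate_normal(mean, cov)
    return lambda Z: phi(Z) @ theta
```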
- R-MBO: A Multi-surrogate Approach for Preference Incorporation in Multi-objective Bayesian Optimisation [0.0]
We present an a-priori multi-surrogate approach to incorporate the desirable objective function values as the preferences of a decision-maker in multi-objective BO.
The results and comparison with the existing mono-surrogate approach on benchmark and real-world optimisation problems show the potential of the proposed approach.
arXiv Detail & Related papers (2022-04-27T19:58:26Z)
- Decision-making with E-admissibility given a finite assessment of choices [64.29961886833972]
We study the implications for decision-making with E-admissibility.
We use the mathematical framework of choice functions to specify choices and rejections.
We provide an algorithm that computes this extension by solving linear feasibility problems.
arXiv Detail & Related papers (2022-04-15T11:46:00Z)
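The linear feasibility computation mentioned above can be sketched directly: an option is E-admissible when some probability mass function over the states gives it expected utility at least that of every other option, which is a linear program with a zero objective. The constraints that encode the finite assessment of choices are omitted here, and the utility matrix is an illustrative input.

```python
import numpy as np
from scipy.optimize import linprog

def e_admissible(U, a):
    """Check E-admissibility of option a via linear feasibility.

    U[o, s] is the utility of option o in state s.  Option a is
    E-admissible if some pmf p over states satisfies
    (U[b] - U[a]) @ p <= 0 for every other option b.
    """
    n_opt, n_states = U.shape
    A_ub = np.array([U[b] - U[a] for b in range(n_opt) if b != a])
    res = linprog(c=np.zeros(n_states),
                  A_ub=A_ub, b_ub=np.zeros(n_opt - 1),
                  A_eq=np.ones((1, n_states)), b_eq=[1.0],
                  bounds=[(0.0, 1.0)] * n_states)
    return res.status == 0          # 0: a feasible p was found

# Two states, three options: option 2 is never best, so not E-admissible.
U = np.array([[1.0, 0.0], [0.0, 1.0], [0.4, 0.4]])
print([e_admissible(U, a) for a in range(3)])   # [True, True, False]
```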
- Bayesian preference elicitation for multiobjective combinatorial optimization [12.96855751244076]
We introduce a new incremental preference elicitation procedure able to deal with the noisy responses of a Decision Maker (DM).
We assume that the preferences of the DM are represented by an aggregation function whose parameters are unknown and that the uncertainty about them is represented by a density function on the parameter space.
arXiv Detail & Related papers (2020-07-29T12:28:37Z)
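A toy sketch of the setting summarised above, under strong simplifying assumptions: the DM's aggregation function is a linear scalarisation of two objectives with one unknown weight, the density on the parameter space lives on a grid, and a noisy stated preference updates it by Bayes' rule with a logistic response model. The paper's actual aggregation functions and elicitation strategy are richer than this.

```python
import numpy as np

def update_weight_density(grid, density, preferred, other, temp=1.0):
    """One Bayesian update of a density over the scalarisation weight w,
    where f_w(y) = w * y[0] + (1 - w) * y[1], after the DM noisily prefers
    `preferred` over `other` (both are 2-tuples of objective values).
    """
    w = grid
    margin = (w * preferred[0] + (1 - w) * preferred[1]) \
        - (w * other[0] + (1 - w) * other[1])
    likelihood = 1.0 / (1.0 + np.exp(-margin / temp))  # logistic noise model
    density = density * likelihood
    return density / np.trapz(density, grid)

grid = np.linspace(0.0, 1.0, 201)
density = np.ones_like(grid)                 # flat prior on the weight
density = update_weight_density(grid, density, (3.0, 1.0), (1.0, 2.5))
# The density now concentrates on weights favouring the first objective.
```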
- Learning Choice Functions via Pareto-Embeddings [3.1410342959104725]
We consider the problem of learning to choose from a given set of objects, where each object is represented by a feature vector.
We propose a learning algorithm that minimizes a differentiable loss function suitable for this task.
arXiv Detail & Related papers (2020-07-14T09:34:44Z)
- Incorporating Expert Prior in Bayesian Optimisation via Space Warping [54.412024556499254]
In large search spaces, the algorithm passes through several low-function-value regions before reaching the optimum of the function. One approach to shortening this cold-start phase is to use prior knowledge that can accelerate the optimisation. In this paper, we represent the prior knowledge about the function optimum through a prior distribution. The prior distribution is then used to warp the search space in such a way that the space expands around the high-probability region of the function optimum and shrinks around the low-probability regions.
arXiv Detail & Related papers (2020-03-27T06:18:49Z)
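One simple way to realise the warping described above is a probability integral transform: run the optimiser on a unit cube and map each coordinate through the inverse CDF of the prior on the optimum's location, so that uniformly spread search points concentrate where the prior puts its mass. The Gaussian prior below is an assumption for illustration; the paper's warping need not take this exact form.

```python
import numpy as np
from scipy.stats import norm

def warp(u, prior_mean, prior_std):
    """Map u in (0, 1) to the search space through the inverse CDF of a
    Gaussian prior on the optimum, expanding the space around the
    high-probability region and shrinking it elsewhere.
    """
    return norm.ppf(u, loc=prior_mean, scale=prior_std)

# Uniformly spaced points in u-space bunch up around the prior mean.
u = np.linspace(0.01, 0.99, 9)
x = warp(u, prior_mean=0.7, prior_std=0.1)
print(np.round(x, 3))
```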
This list is automatically generated from the titles and abstracts of the papers on this site.