One Step Preference Elicitation in Multi-Objective Bayesian Optimization
- URL: http://arxiv.org/abs/2105.13278v1
- Date: Thu, 27 May 2021 16:23:29 GMT
- Title: One Step Preference Elicitation in Multi-Objective Bayesian Optimization
- Authors: Juan Ungredda, Mariapia Marchi, Teresa Montrone and Juergen Branke
- Abstract summary: We consider a multi-objective optimization problem with objective functions that are expensive to evaluate.
We show that the true best solution according to the DM's unknown preferences is unlikely to be among the small set of non-dominated solutions found.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider a multi-objective optimization problem with objective functions
that are expensive to evaluate. The decision maker (DM) has unknown
preferences, and so the standard approach is to generate an approximation of
the Pareto front and let the DM choose from the generated non-dominated
designs. However, especially for expensive-to-evaluate problems where the
number of designs that can be evaluated is very limited, the true best solution
according to the DM's unknown preferences is unlikely to be among the small set
of non-dominated solutions found, even if these solutions are truly Pareto
optimal. We address this issue by using a multi-objective Bayesian optimization
algorithm and allowing the DM to select a preferred solution from a predicted
continuous Pareto front just once before the end of the algorithm rather than
selecting a solution after the end. This allows the algorithm to understand the
DM's preferences and make a final attempt to identify a more preferred
solution. We demonstrate the idea using ParEGO, and show empirically that the
found solutions are significantly better in terms of true DM preferences than
if the DM simply picked a solution at the end.
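To make the one-step elicitation concrete, the following is a minimal Python sketch of the idea inside a ParEGO-style loop; it is an illustration under stated assumptions, not the authors' implementation. ParEGO scalarizes the objectives with a randomly weighted augmented Chebyshev function, s_lambda(y) = max_i lambda_i*y_i + rho * sum_i lambda_i*y_i, and runs single-objective Bayesian optimization on the scalarized values. The toy objectives, the simulated DM utility, the per-objective GP models used to predict a Pareto front, and the posterior-mean-over-random-candidates acquisition (a crude stand-in for expected improvement) are all assumptions made for brevity.

```python
# Minimal sketch (not the authors' code) of one-step preference elicitation
# in a ParEGO-style Bayesian optimization loop. Objectives are minimized.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)

def objectives(x):
    # Toy bi-objective test problem (illustrative assumption).
    return np.array([np.sum(x ** 2), np.sum((x - 1.0) ** 2)])

def chebyshev(y, lam, rho=0.05):
    # ParEGO's augmented Chebyshev scalarization of an objective vector y.
    return np.max(lam * y) + rho * np.sum(lam * y)

def dm_utility(y):
    # The DM's true preferences, unknown to the algorithm; used here only to
    # simulate the single interaction and the final assessment.
    return -(0.7 * y[0] + 0.3 * y[1])

dim, budget = 2, 20
X = rng.uniform(0.0, 1.0, size=(5, dim))       # initial design (random here)
Y = np.array([objectives(x) for x in X])

for t in range(budget):
    if t == budget - 1:
        # One-step elicitation: predict a Pareto front from per-objective GPs,
        # let the (simulated) DM pick once, and spend the last evaluation there.
        gps = [GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, Y[:, j])
               for j in range(Y.shape[1])]
        cand = rng.uniform(0.0, 1.0, size=(2000, dim))
        pred = np.column_stack([gp.predict(cand) for gp in gps])
        nd = [i for i in range(len(pred))
              if not np.any(np.all(pred <= pred[i], axis=1) & np.any(pred < pred[i], axis=1))]
        pick = max(nd, key=lambda i: dm_utility(pred[i]))   # the DM's one-off choice
        x_next = cand[pick]
    else:
        # Ordinary ParEGO step: draw random weights, scalarize the observations,
        # fit a GP, and optimize a cheap stand-in acquisition (posterior mean
        # over random candidates instead of expected improvement).
        lam = rng.dirichlet(np.ones(Y.shape[1]))
        s = np.array([chebyshev(y, lam) for y in Y])
        gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, s)
        cand = rng.uniform(0.0, 1.0, size=(2000, dim))
        x_next = cand[np.argmin(gp.predict(cand))]

    X = np.vstack([X, x_next])
    Y = np.vstack([Y, objectives(x_next)])

# Which of the evaluated designs does the simulated DM actually prefer?
best = max(range(len(Y)), key=lambda i: dm_utility(Y[i]))
print("design preferred by the simulated DM:", X[best], "objectives:", Y[best])
```

In the paper, the DM's single choice on the predicted continuous Pareto front is used to steer the algorithm's final sampling decision; the sketch above collapses this to spending the last evaluation directly at the chosen candidate.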
Related papers
- Learning Multiple Initial Solutions to Optimization Problems [52.9380464408756]
Sequentially solving similar optimization problems under strict runtime constraints is essential for many applications.
We propose learning to predict multiple diverse initial solutions given parameters that define the problem instance.
We find significant and consistent improvement with our method across all evaluation settings and demonstrate that it efficiently scales with the number of initial solutions required.
arXiv Detail & Related papers (2024-11-04T15:17:19Z) - An incremental preference elicitation-based approach to learning potentially non-monotonic preferences in multi-criteria sorting [53.36437745983783]
We first construct a max-margin optimization-based model to capture potentially non-monotonic preferences.
We devise information amount measurement methods and question selection strategies to pinpoint the most informative alternative in each iteration.
Two incremental preference elicitation-based algorithms are developed to learn potentially non-monotonic preferences.
arXiv Detail & Related papers (2024-09-04T14:36:20Z) - Preference-Optimized Pareto Set Learning for Blackbox Optimization [1.9628841617148691]
No single solution exists that can optimize all the objectives simultaneously.
In a typical MOO problem, the goal is to find a set of optimum solutions (Pareto set) that trades off the preferences among objectives.
Our formulation leads to a bilevel optimization problem that can be solved by e.g. differentiable cross-entropy methods.
arXiv Detail & Related papers (2024-08-19T13:23:07Z) - mDPO: Conditional Preference Optimization for Multimodal Large Language Models [52.607764280030196]
Direct preference optimization (DPO) has been shown to be an effective method for large language model (LLM) alignment.
Recent works have attempted to apply DPO to multimodal scenarios but have found it challenging to achieve consistent improvement.
We propose mDPO, a multimodal DPO objective that prevents the over-prioritization of language-only preferences by also optimizing image preference.
arXiv Detail & Related papers (2024-06-17T17:59:58Z) - Data-Efficient Interactive Multi-Objective Optimization Using ParEGO [6.042269506496206]
Multi-objective optimization seeks to identify a set of non-dominated solutions that provide optimal trade-offs among competing objectives.
In practical applications, decision-makers (DMs) will select a single solution that aligns with their preferences to be implemented.
We propose two novel algorithms that efficiently locate the most preferred region of the Pareto front in expensive-to-evaluate problems.
arXiv Detail & Related papers (2024-01-12T15:55:51Z) - Multi-Objective Bayesian Optimization with Active Preference Learning [18.066263838953223]
We propose a Bayesian optimization (BO) approach to identifying the most preferred solution in a multi-objective optimization (MOO) problem.
To minimize the interaction cost with the decision maker (DM), we also propose an active learning strategy for the preference estimation.
arXiv Detail & Related papers (2023-11-22T15:24:36Z) - Learning Proximal Operators to Discover Multiple Optima [66.98045013486794]
We present an end-to-end method to learn the proximal operator across a family of non-convex problems.
We show that for weakly convex objectives and under mild conditions, the method converges globally.
arXiv Detail & Related papers (2022-01-28T05:53:28Z) - Learning MDPs from Features: Predict-Then-Optimize for Sequential
Decision Problems by Reinforcement Learning [52.74071439183113]
We study the predict-then-optimize framework in the context of sequential decision problems (formulated as MDPs) solved via reinforcement learning.
Two significant computational challenges arise in applying decision-focused learning to MDPs.
arXiv Detail & Related papers (2021-06-06T23:53:31Z) - Modeling the Second Player in Distributionally Robust Optimization [90.25995710696425]
We argue for the use of neural generative models to characterize the worst-case distribution.
This approach poses a number of implementation and optimization challenges.
We find that the proposed approach yields models that are more robust than comparable baselines.
arXiv Detail & Related papers (2021-03-18T14:26:26Z) - Solution Subset Selection for Final Decision Making in Evolutionary
Multi-Objective Optimization [7.745468825770201]
We discuss subset selection from the viewpoint of final decision making.
We show that the formulated function is the same as the IGD+ indicator.
arXiv Detail & Related papers (2020-06-15T06:26:58Z) - Ensuring smoothly navigable approximation sets by Bezier curve
parameterizations in evolutionary bi-objective optimization -- applied to
brachytherapy treatment planning for prostate cancer [0.0]
We study the case of parameterizing approximation sets as smooth Bezier curves in decision space.
We show that high-quality approximation sets can be obtained with BezEA, sometimes even outperforming the domination- and UHV-based algorithms.
arXiv Detail & Related papers (2020-06-11T13:57:33Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.