A unified surrogate-based scheme for black-box and preference-based
optimization
- URL: http://arxiv.org/abs/2202.01468v1
- Date: Thu, 3 Feb 2022 08:47:54 GMT
- Title: A unified surrogate-based scheme for black-box and preference-based
optimization
- Authors: Davide Previtali, Mirko Mazzoleni, Antonio Ferramosca, Fabio Previdi
- Abstract summary: We show that black-box and preference-based optimization problems are closely related and can be solved using the same family of approaches.
We propose the generalized Metric Response Surface (gMRS) algorithm, an optimization scheme that is a generalization of the popular MSRS framework.
- Score: 2.561649173827544
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Black-box and preference-based optimization algorithms are global
optimization procedures that aim to find the global solutions of an
optimization problem using, respectively, as few function evaluations or
sample comparisons as possible. In the black-box case, the
analytical expression of the objective function is unknown and it can only be
evaluated through a (costly) computer simulation or an experiment. In the
preference-based case, the objective function is still unknown, but it
corresponds to the subjective criterion of an individual, which cannot be
quantified in a reliable and consistent way. Therefore, preference-based
optimization algorithms seek global solutions using only comparisons between
pairs of different samples, for which a human decision-maker indicates which
of the two is preferred. Quite often, the
black-box and preference-based frameworks are covered separately and are
handled using different techniques. In this paper, we show that black-box and
preference-based optimization problems are closely related and can be solved
using the same family of approaches, namely surrogate-based methods. Moreover,
we propose the generalized Metric Response Surface (gMRS) algorithm, an
optimization scheme that is a generalization of the popular MSRS framework.
Finally, we provide a convergence proof for the proposed optimization method.
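To make the surrogate-based family concrete, below is a minimal sketch of a
single MSRS-style iteration: fit a surrogate to the evaluated samples, draw
random candidates, and score each candidate by a weighted combination of its
(scaled) surrogate value and its (scaled) distance from the evaluated points.
The RBF surrogate, the function and parameter names, and the scaling details
are illustrative assumptions, not the authors' gMRS.

```python
# Minimal sketch of one MSRS-style iteration (illustrative, not gMRS itself).
import numpy as np
from scipy.interpolate import RBFInterpolator

def msrs_step(X, y, bounds, w, n_candidates=1000, rng=None):
    """Propose the next sample for a box-constrained problem.

    X : (m, n) evaluated samples; y : (m,) their objective values.  In the
    preference-based setting, y would instead hold the values of a latent
    surrogate fitted to pairwise comparisons, not measured function values.
    w : exploitation weight in [0, 1] (1 means trusting the surrogate fully).
    """
    rng = np.random.default_rng(rng)
    surrogate = RBFInterpolator(X, y)                 # fit surrogate to data
    lo, hi = bounds                                   # box constraints
    C = rng.uniform(lo, hi, size=(n_candidates, X.shape[1]))

    s = surrogate(C)                                  # predicted objective
    d = np.linalg.norm(C[:, None, :] - X[None, :, :], axis=2).min(axis=1)

    # Min-max scale both merit measures to [0, 1]; lower is better for each.
    vs = (s - s.min()) / max(np.ptp(s), 1e-12)        # exploitation score
    vd = (d.max() - d) / max(np.ptp(d), 1e-12)        # exploration score
    return C[np.argmin(w * vs + (1.0 - w) * vd)]      # best weighted score
```

A full loop would evaluate the objective (or query the decision-maker) at the
returned point, append the result to (X, y), and cycle w through a schedule
such as (0.95, 0.8, 0.5, 0.3) to alternate between local refinement and
global exploration.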
Related papers
- An incremental preference elicitation-based approach to learning potentially non-monotonic preferences in multi-criteria sorting [53.36437745983783]
We first construct a max-margin optimization-based model to represent potentially non-monotonic preferences.
We devise information amount measurement methods and question selection strategies to pinpoint the most informative alternative in each iteration.
Two incremental preference elicitation-based algorithms are developed to learn potentially non-monotonic preferences.
arXiv Detail & Related papers (2024-09-04T14:36:20Z) - Principled Preferential Bayesian Optimization [22.269732173306192]
We study the problem of preferential Bayesian optimization (BO), where we aim
to optimize a black-box function using only preference feedback over pairs of
candidate solutions.
An optimistic algorithm with an efficient computational method is then developed to solve the problem.
arXiv Detail & Related papers (2024-02-08T02:57:47Z) - Global Optimization: A Machine Learning Approach [7.052596485478637]
Bertsimas and Ozturk (2023) proposed OCTHaGOn as a way of solving black-box global optimization problems.
We provide extensions to this approach by approximating the original problem using other MIO-representable ML models.
We show improvements in solution feasibility and optimality in the majority of instances.
arXiv Detail & Related papers (2023-11-03T06:33:38Z) - Generalizing Bayesian Optimization with Decision-theoretic Entropies [102.82152945324381]
We consider a generalization of Shannon entropy from work in statistical decision theory.
We first show that special cases of this entropy lead to popular acquisition functions used in BO procedures.
We then show how alternative choices for the loss yield a flexible family of acquisition functions (the underlying entropy notion is written out after this list).
arXiv Detail & Related papers (2022-10-04T04:43:58Z) - GPSAF: A Generalized Probabilistic Surrogate-Assisted Framework for
Constrained Single- and Multi-objective Optimization [7.8140593450932965]
This paper proposes a generalized probabilistic surrogate-assisted framework (GPSAF).
GPSAF is applicable to a broad category of unconstrained and constrained, single- and multi-objective optimization algorithms.
arXiv Detail & Related papers (2022-04-06T13:22:30Z) - Optimizer Amalgamation [124.33523126363728]
We are motivated to study a new problem named Optimizer Amalgamation: how can we best combine a pool of "teacher" optimizers into a single "student" optimizer with stronger problem-specific performance?
First, we define three differentiable mechanisms to amalgamate a pool of analytical optimizers by gradient descent.
To reduce the variance of the amalgamation process, we also explore methods to stabilize it by perturbing the target.
arXiv Detail & Related papers (2022-03-12T16:07:57Z) - GLISp-r: A preference-based optimization algorithm with convergence
guarantees [2.517173388598129]
We propose GLISp-r, an extension of the preference-based optimization procedure GLISp.
In GLISp-r, we propose a different criterion, inspired by MSRS, for selecting new candidate samples.
Compared to GLISp, GLISp-r is less likely to get stuck on local optima of the preference-based optimization problem.
arXiv Detail & Related papers (2022-02-02T16:34:15Z) - Batched Data-Driven Evolutionary Multi-Objective Optimization Based on
Manifold Interpolation [6.560512252982714]
We propose a framework for implementing batched data-driven evolutionary multi-objective optimization.
It is so general that any off-the-shelf evolutionary multi-objective optimization algorithm can be applied in a plug-in manner.
Our proposed framework features faster convergence and stronger resilience to various Pareto-front (PF) shapes.
arXiv Detail & Related papers (2021-09-12T23:54:26Z) - Zeroth-Order Hybrid Gradient Descent: Towards A Principled Black-Box
Optimization Framework [100.36569795440889]
This work concerns zeroth-order (ZO) optimization, which does not require first-order gradient information.
We show that with a graceful design in coordinate importance sampling, the proposed ZO optimization method is efficient in terms of both iteration complexity and function query cost (a minimal two-point ZO estimator is sketched after this list).
arXiv Detail & Related papers (2020-12-21T17:29:58Z) - Bilevel Optimization: Convergence Analysis and Enhanced Design [63.64636047748605]
Bilevel optimization is a powerful tool for many machine learning problems.
We propose stocBiO, a novel stochastic bilevel optimization algorithm with a sample-efficient gradient estimator.
arXiv Detail & Related papers (2020-10-15T18:09:48Z) - Convergence of adaptive algorithms for weakly convex constrained
optimization [59.36386973876765]
We prove the $\tilde{\mathcal{O}}(t^{-1/4})$ rate of convergence for the norm of the gradient of the Moreau envelope (recalled after this list).
Our analysis works with mini-batch size of $1$, constant first- and second-order moment parameters, and possibly smooth optimization domains.
arXiv Detail & Related papers (2020-06-11T17:43:19Z)
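For the decision-theoretic entropies entry above, the generalization of
Shannon entropy in question goes back to DeGroot's work in statistical
decision theory; in my assumed notation, with loss $\ell$ and action set
$\mathcal{A}$, it reads

$$ H_{\ell,\mathcal{A}}(p) = \inf_{a \in \mathcal{A}} \mathbb{E}_{\theta \sim p}[\ell(\theta, a)], $$

which recovers Shannon entropy when $\mathcal{A}$ is the set of probability
distributions $q$ and $\ell(\theta, q) = -\log q(\theta)$.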
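For the zeroth-order optimization entry, here is the two-point ZO estimator
promised above, with coordinate importance sampling. The sampling
distribution `probs`, the smoothing step `mu`, and all names are my
illustrative assumptions; this is a generic estimator, not the paper's
specific design.

```python
# Two-point zeroth-order (ZO) coordinate gradient estimator with importance
# sampling.  Uniform `probs` recovers the plain ZO coordinate estimator.
import numpy as np

def zo_coordinate_grad(f, x, probs=None, n_draws=1, mu=1e-4, rng=None):
    """Estimate grad f(x) from 2 * n_draws function queries.

    Unbiased for the central-difference derivative vector (the usual
    O(mu^2) finite-difference error aside).
    """
    rng = np.random.default_rng(rng)
    n = x.size
    probs = np.full(n, 1.0 / n) if probs is None else probs / probs.sum()
    g = np.zeros(n)
    for i in rng.choice(n, size=n_draws, replace=True, p=probs):
        e = np.zeros(n)
        e[i] = 1.0
        deriv = (f(x + mu * e) - f(x - mu * e)) / (2.0 * mu)  # central diff.
        g[i] += deriv / (probs[i] * n_draws)  # importance weight -> unbiased
    return g
```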
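Finally, for the adaptive-algorithms entry, the Moreau envelope named in its
convergence rate is the standard construction for weakly convex $f$
(standard background, assumed here rather than taken from that paper):

$$ f_\lambda(x) = \min_{y} \left\{ f(y) + \tfrac{1}{2\lambda} \|y - x\|^2 \right\}, $$

and a small gradient norm $\|\nabla f_\lambda(x)\|$ certifies that $x$ is
close to a near-stationary point, which is why the
$\tilde{\mathcal{O}}(t^{-1/4})$ rate is stated for this quantity.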
This list is automatically generated from the titles and abstracts of the papers in this site.