Experience in Engineering Complex Systems: Active Preference Learning
with Multiple Outcomes and Certainty Levels
- URL: http://arxiv.org/abs/2302.14630v1
- Date: Mon, 27 Feb 2023 15:55:37 GMT
- Title: Experience in Engineering Complex Systems: Active Preference Learning
with Multiple Outcomes and Certainty Levels
- Authors: Le Anh Dao, Loris Roveda, Marco Maccarini, Matteo Lavit Nicora, Marta
Mondellini, Matteo Meregalli Falerni, Palaniappan Veerappan, Lorenzo
Mantovani, Dario Piga, Simone Formentin, Matteo Malosio
- Abstract summary: Black-box optimization refers to problems whose objective function and/or constraint sets are unknown, inaccessible, or non-existent.
The so-called Active Preference Learning algorithm has been developed to exploit this specific information.
Our approach extends the algorithm so that it can exploit further information effectively.
- Score: 1.5257326975704795
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Black-box optimization refers to the optimization problem whose objective
function and/or constraint sets are either unknown, inaccessible, or
non-existent. In many applications, especially with the involvement of humans,
the only way to access the optimization problem is through performing physical
experiments with the available outcomes being the preference of one candidate
with respect to one or many others. Accordingly, the so-called Active
Preference Learning algorithm has been developed to exploit this specific
information: it constructs a surrogate function based on standard radial basis
functions and then forms an easy-to-solve acquisition function that repeatedly
suggests new decision vectors in the search for the optimal solution. Building
on this idea, our approach extends the algorithm so that it can effectively
exploit further information obtainable in practice, such as:
a 5-point Likert-type scale for the outcomes of the preference query (i.e., the
preference can be expressed not only as "this is better than that" but also as
"this is much better than that"), or multiple outcomes for a single
preference query, with possibly additional information on how certain the
outcomes are. The proposed algorithm is validated on standard benchmark
functions, showing a promising improvement over the state-of-the-art
algorithm.
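The surrogate construction described in the abstract can be sketched in a few lines. Everything below is an illustrative assumption rather than the authors' implementation: the Gaussian RBF, the hinge-loss fit, the margin multiplier standing in for the 5-point Likert scale, and the derivative-free solver are all choices made for the sketch.

```python
import numpy as np
from scipy.optimize import minimize

def rbf_matrix(A, B, eps=5.0):
    """Gaussian RBF matrix: K[i, j] = exp(-(eps * ||a_i - b_j||)^2)."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    return np.exp(-(eps * d) ** 2)

def fit_preference_surrogate(X, prefs, sigma=0.1, lam=1e-3, eps=5.0):
    """Fit RBF weights beta so that fhat = Phi @ beta respects preferences.

    prefs is a list of (i, j, m): sample i is preferred to sample j with
    margin multiplier m (m=1 for "better", m=2 for "much better" -- a
    hypothetical stand-in for the paper's 5-point Likert extension).
    Violated preferences incur a hinge penalty; lam regularizes the weights.
    """
    Phi = rbf_matrix(X, X, eps)

    def loss(beta):
        f = Phi @ beta
        hinge = sum(max(0.0, m * sigma - (f[j] - f[i])) for i, j, m in prefs)
        return hinge + lam * float(beta @ beta)

    res = minimize(loss, np.zeros(len(X)), method="Powell",
                   options={"maxiter": 10000})
    return res.x, Phi

# Toy demo: recover the ranking of f(x) = (x - 0.25)^2 from pairwise
# preferences alone (true values are used only to generate the answers).
X = np.linspace(0.0, 1.0, 6).reshape(-1, 1)
y = (X[:, 0] - 0.25) ** 2
prefs = [(a, b, 2 if y[b] - y[a] > 0.2 else 1)
         for a in range(6) for b in range(6) if y[a] < y[b]]
beta, Phi = fit_preference_surrogate(X, prefs)
f_hat = Phi @ beta  # surrogate values; their argmin is the incumbent best
```

A full loop would then optimize an acquisition function built from the surrogate (plus an exploration term) to propose the next preference query; that step is omitted here.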
Related papers
- Learning Joint Models of Prediction and Optimization [56.04498536842065]
The Predict-Then-Optimize framework uses machine learning models to predict unknown parameters of an optimization problem from features before solving.
This paper proposes an alternative method, in which optimal solutions are learned directly from the observable features by joint predictive models.
arXiv Detail & Related papers (2024-09-07T19:52:14Z)
- An incremental preference elicitation-based approach to learning potentially non-monotonic preferences in multi-criteria sorting [53.36437745983783]
We first construct a max-margin optimization-based model to model potentially non-monotonic preferences.
We devise information amount measurement methods and question selection strategies to pinpoint the most informative alternative in each iteration.
Two incremental preference elicitation-based algorithms are developed to learn potentially non-monotonic preferences.
arXiv Detail & Related papers (2024-09-04T14:36:20Z)
- Discovering Preference Optimization Algorithms with and for Large Language Models [50.843710797024805]
Offline preference optimization is a key method for enhancing and controlling the quality of Large Language Model (LLM) outputs.
We perform objective discovery to automatically discover new state-of-the-art preference optimization algorithms without (expert) human intervention.
Experiments demonstrate the state-of-the-art performance of DiscoPOP, a novel algorithm that adaptively blends logistic and exponential losses.
arXiv Detail & Related papers (2024-06-12T16:58:41Z)
- Predict-Then-Optimize by Proxy: Learning Joint Models of Prediction and Optimization [59.386153202037086]
The Predict-Then-Optimize framework uses machine learning models to predict unknown parameters of an optimization problem from features before solving.
This approach can be inefficient and requires handcrafted, problem-specific rules for backpropagation through the optimization step.
This paper proposes an alternative method, in which optimal solutions are learned directly from the observable features by predictive models.
arXiv Detail & Related papers (2023-11-22T01:32:06Z)
- DynamoRep: Trajectory-Based Population Dynamics for Classification of Black-box Optimization Problems [0.755972004983746]
We propose a feature extraction method that describes the trajectories of optimization algorithms using simple statistics.
We demonstrate that the proposed DynamoRep features capture enough information to identify the problem class on which the optimization algorithm is running.
arXiv Detail & Related papers (2023-06-08T06:57:07Z)
- Global and Preference-based Optimization with Mixed Variables using Piecewise Affine Surrogates [0.6083861980670925]
This paper proposes a novel surrogate-based global optimization algorithm to solve linearly constrained mixed-variable problems.
We assume the objective function is black-box and expensive to evaluate, while the linear constraints are quantifiable, unrelaxable, and known a priori.
We introduce two types of exploration functions to efficiently search the feasible domain via mixed-integer linear programming solvers.
arXiv Detail & Related papers (2023-02-09T15:04:35Z)
- Generalizing Bayesian Optimization with Decision-theoretic Entropies [102.82152945324381]
We consider a generalization of Shannon entropy from work in statistical decision theory.
We first show that special cases of this entropy lead to popular acquisition functions used in BO procedures.
We then show how alternative choices for the loss yield a flexible family of acquisition functions.
arXiv Detail & Related papers (2022-10-04T04:43:58Z)
- BOME! Bilevel Optimization Made Easy: A Simple First-Order Approach [46.457298683984924]
Bilevel optimization (BO) is useful for solving a variety of important machine learning problems.
Conventional methods need to differentiate through the low-level optimization process with implicit differentiation.
First-order BO depends only on first-order information and requires no implicit differentiation.
arXiv Detail & Related papers (2022-09-19T01:51:12Z)
- Offline Policy Optimization with Eligible Actions [34.4530766779594]
Offline policy optimization could have a large impact on many real-world decision-making problems.
Importance sampling and its variants are a commonly used type of estimator in offline policy evaluation.
We propose an algorithm to avoid this overfitting through a new per-state-neighborhood normalization constraint.
arXiv Detail & Related papers (2022-07-01T19:18:15Z)
- A unified surrogate-based scheme for black-box and preference-based optimization [2.561649173827544]
We show that black-box and preference-based optimization problems are closely related and can be solved using the same family of approaches.
We propose the generalized Metric Response Surface (gMRS) algorithm, an optimization scheme that is a generalization of the popular MSRS framework.
arXiv Detail & Related papers (2022-02-03T08:47:54Z)
- Bayesian Algorithm Execution: Estimating Computable Properties of Black-box Functions Using Mutual Information [78.78486761923855]
In many real world problems, we want to infer some property of an expensive black-box function f, given a budget of T function evaluations.
We present a procedure, InfoBAX, that sequentially chooses queries that maximize mutual information with respect to the algorithm's output.
On these problems, InfoBAX uses up to 500 times fewer queries to f than required by the original algorithm.
arXiv Detail & Related papers (2021-04-19T17:22:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.