Optimize-via-Predict: Realizing out-of-sample optimality in data-driven optimization
- URL: http://arxiv.org/abs/2309.11147v1
- Date: Wed, 20 Sep 2023 08:48:50 GMT
- Title: Optimize-via-Predict: Realizing out-of-sample optimality in data-driven optimization
- Authors: Gar Goei Loke, Taozeng Zhu, Ruiting Zuo
- Abstract summary: We examine a formulation for data-driven optimization wherein the decision-maker is not privy to the true distribution.
We define a prescriptive solution as a decision rule mapping such a data set to decisions.
We present an optimization problem that solves for such an out-of-sample optimal solution, and does so efficiently by a combination of sampling and bisection search algorithms.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We examine a stochastic formulation for data-driven optimization wherein the
decision-maker is not privy to the true distribution, but has knowledge that it
lies in some hypothesis set and possesses a historical data set, from which
information about it can be gleaned. We define a prescriptive solution as a
decision rule mapping such a data set to decisions. As no prescriptive solution
exists that is generalizable over the entire hypothesis set,
we define out-of-sample optimality as a local average over a neighbourhood of
hypotheses, and averaged over the sampling distribution. We prove sufficient
conditions for local out-of-sample optimality, which reduce to functions of
the sufficient statistic of the hypothesis family. We present an optimization
problem that solves for such an out-of-sample optimal solution, and does
so efficiently by a combination of sampling and bisection search algorithms.
Finally, we illustrate our approach on the newsvendor model, and find strong
performance compared against alternatives in the literature. Our research has
potential implications for end-to-end learning and Bayesian optimization.
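As a rough illustration of the kind of prescriptive solution the abstract describes, the sketch below maps a demand data set to a newsvendor order quantity: it averages the critical-fractile optimality condition over hypotheses sampled from a neighbourhood of the sufficient statistic (the sample mean), then locates the decision by bisection search. The exponential hypothesis family, cost parameters, and neighbourhood width `eps` are illustrative assumptions, not the authors' exact formulation.

```python
# A minimal sketch (not the authors' exact algorithm): a prescriptive
# decision rule for the newsvendor under an exponential hypothesis family.
import numpy as np

def prescriptive_newsvendor(data, b=5.0, h=1.0, eps=0.2, n_hyp=200, seed=0):
    """Map a demand data set to an order quantity q."""
    rng = np.random.default_rng(seed)
    mean_hat = data.mean()                       # sufficient statistic
    # Sample hypothesis means from a local neighbourhood of the statistic.
    means = mean_hat * (1.0 + eps * (2.0 * rng.random(n_hyp) - 1.0))
    ratio = b / (b + h)                          # critical fractile

    def avg_fractile_gap(q):
        # Average CDF over the sampled exponential hypotheses, minus target;
        # monotone increasing in q, so bisection finds its unique root.
        cdfs = 1.0 - np.exp(-q / means)
        return cdfs.mean() - ratio

    lo, hi = 0.0, 20.0 * mean_hat                # bracket the root
    for _ in range(60):                          # bisection search
        mid = 0.5 * (lo + hi)
        if avg_fractile_gap(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

demand = np.random.default_rng(1).exponential(scale=10.0, size=50)
print(prescriptive_newsvendor(demand))
```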
Related papers
- An incremental preference elicitation-based approach to learning potentially non-monotonic preferences in multi-criteria sorting [53.36437745983783]
We first construct a max-margin optimization-based model to capture potentially non-monotonic preferences.
We devise information amount measurement methods and question selection strategies to pinpoint the most informative alternative in each iteration.
Two incremental preference elicitation-based algorithms are developed to learn potentially non-monotonic preferences.
arXiv Detail & Related papers (2024-09-04T14:36:20Z)
- Distributed Fractional Bayesian Learning for Adaptive Optimization [7.16937736207894]
This paper considers a distributed adaptive optimization problem, where all agents only have access to their local cost functions with a common unknown parameter.
We aim to provide valuable insights for addressing parameter uncertainty in distributed optimization problems and simultaneously find the optimal solution.
arXiv Detail & Related papers (2024-04-17T13:09:33Z)
- Towards Efficient Exact Optimization of Language Model Alignment [93.39181634597877]
Direct preference optimization (DPO) was proposed to directly optimize the policy from preference data.
We show that DPO, derived from the optimal solution of the problem, leads to a compromised mean-seeking approximation of that solution in practice.
We propose efficient exact optimization (EXO) of the alignment objective.
arXiv Detail & Related papers (2024-02-01T18:51:54Z)
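For context on the entry above, here is a minimal sketch of the standard DPO objective that EXO is contrasted against, for a single (chosen, rejected) preference pair. The log-probabilities and `beta` value are illustrative inputs; the EXO method itself is not reproduced here.

```python
import math

def dpo_loss(logp_chosen, logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    # Implicit reward margin: beta times the difference of policy-vs-
    # reference log-ratios for the chosen and rejected responses.
    margin = beta * ((logp_chosen - ref_logp_chosen)
                     - (logp_rejected - ref_logp_rejected))
    # -log(sigmoid(margin)), written stably as log(1 + exp(-margin)).
    return math.log1p(math.exp(-margin))

print(dpo_loss(-12.3, -15.8, -13.0, -14.9))  # small margin -> loss near log 2
```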
- Bayesian Optimization with Conformal Prediction Sets [44.565812181545645]
Conformal prediction is an uncertainty quantification method with coverage guarantees even for misspecified models.
We propose conformal Bayesian optimization, which directs queries towards regions of search space where the model predictions have guaranteed validity.
In many cases we find that query coverage can be significantly improved without harming sample-efficiency.
arXiv Detail & Related papers (2022-10-22T17:01:05Z)
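To make the coverage idea in the entry above concrete, here is a minimal split-conformal sketch: calibration residuals yield prediction intervals whose validity does not depend on the model being well specified. The stand-in predictor and `alpha` level are illustrative assumptions, not the paper's acquisition strategy.

```python
import numpy as np

def conformal_interval(cal_x, cal_y, predict, x_new, alpha=0.1):
    residuals = np.abs(cal_y - predict(cal_x))   # calibration scores
    n = len(residuals)
    # Finite-sample quantile rank giving >= 1 - alpha coverage
    # (capped at n; an uncapped rank > n would mean an infinite interval).
    k = int(np.ceil((n + 1) * (1.0 - alpha)))
    q = np.sort(residuals)[min(k, n) - 1]
    y_hat = predict(x_new)
    return y_hat - q, y_hat + q

rng = np.random.default_rng(0)
x = rng.random(100)
y = 2.0 * x + rng.normal(0.0, 0.1, 100)
model = lambda t: 2.0 * t                        # stand-in predictor
print(conformal_interval(x, y, model, 0.5))
```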
- Generalizing Bayesian Optimization with Decision-theoretic Entropies [102.82152945324381]
We consider a generalization of Shannon entropy from work in statistical decision theory.
We first show that special cases of this entropy lead to popular acquisition functions used in BO procedures.
We then show how alternative choices for the loss yield a flexible family of acquisition functions.
arXiv Detail & Related papers (2022-10-04T04:43:58Z)
- Bayesian Joint Chance Constrained Optimization: Approximations and Statistical Consistency [10.20554144865699]
We focus on the question of statistical consistency of the optimal value, computed using an approximate posterior distribution.
We also prove the feasibility of the approximate optimization problem.
We also demonstrate the utility of our approach on an optimal staffing problem for an M/M/c queueing model.
arXiv Detail & Related papers (2021-06-23T07:11:39Z)
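As background for the staffing example in the entry above, a minimal sketch of the classical M/M/c (Erlang-C) calculation: find the smallest number of servers keeping the probability of waiting below a target. The rates and target are illustrative, and the paper's Bayesian chance-constrained treatment is not reproduced here.

```python
import math

def erlang_c(c, lam, mu):
    """Probability an arriving customer waits in an M/M/c queue."""
    a = lam / mu                                  # offered load
    if c * mu <= lam:
        return 1.0                                # unstable: always wait
    summed = sum(a**k / math.factorial(k) for k in range(c))
    tail = (a**c / math.factorial(c)) * (c * mu / (c * mu - lam))
    return tail / (summed + tail)

def min_staff(lam, mu, target=0.2):
    c = 1
    while erlang_c(c, lam, mu) > target:
        c += 1
    return c

print(min_staff(lam=8.0, mu=1.0))  # e.g. 8 arrivals/hr, rate-1 service
```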
- Local policy search with Bayesian optimization [73.0364959221845]
Reinforcement learning aims to find an optimal policy by interaction with an environment.
Policy gradients for local search are often obtained from random perturbations.
We develop an algorithm utilizing a probabilistic model of the objective function and its gradient.
arXiv Detail & Related papers (2021-06-22T16:07:02Z)
- Data-Driven Combinatorial Optimization with Incomplete Information: a Distributionally Robust Optimization Approach [0.0]
We analyze linear optimization problems where the cost vector is not known a priori, but is only observable through a finite data set.
The goal is to find a procedure that transforms the data set into an estimate of the expected value of the objective function.
arXiv Detail & Related papers (2021-05-28T23:17:35Z)
- Robust, Accurate Stochastic Optimization for Variational Inference [68.83746081733464]
We show that common optimization methods lead to poor variational approximations if the problem is moderately large.
Motivated by these findings, we develop a more robust and accurate optimization framework by viewing the underlying algorithm as producing a Markov chain.
arXiv Detail & Related papers (2020-09-01T19:12:11Z)
- Stochastic Optimization Forests [60.523606291705214]
We show how to train forest decision policies by growing trees that choose splits to directly optimize the downstream decision quality, rather than splitting to improve prediction accuracy as in the standard random forest algorithm.
We show that our approximate splitting criteria can reduce running time hundredfold, while achieving performance close to forest algorithms that exactly re-optimize for every candidate split.
arXiv Detail & Related papers (2020-08-17T16:56:06Z)
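To illustrate the decision-aware splitting idea in the entry above, the following sketch scores a candidate split by the downstream cost of the best in-leaf decision rather than by prediction error. The newsvendor-style cost and single-feature split are illustrative assumptions, not the paper's exact criterion.

```python
import numpy as np

def decision_cost(y, b=5.0, h=1.0):
    """Cost of the best single decision for outcomes y (newsvendor-style)."""
    q = np.quantile(y, b / (b + h))              # optimal in-leaf decision
    return np.mean(h * np.maximum(q - y, 0.0) + b * np.maximum(y - q, 0.0))

def split_score(x, y, threshold):
    left, right = y[x <= threshold], y[x > threshold]
    if len(left) == 0 or len(right) == 0:
        return np.inf
    # Size-weighted downstream decision cost after the split.
    n = len(y)
    return (len(left) * decision_cost(left)
            + len(right) * decision_cost(right)) / n

rng = np.random.default_rng(0)
x = rng.random(200)
y = 10.0 + 5.0 * (x > 0.5) + rng.normal(0.0, 1.0, 200)
best = min(np.linspace(0.1, 0.9, 17), key=lambda t: split_score(x, y, t))
print(best)  # should sit near the true breakpoint at 0.5
```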
- Variance Reduction for Better Sampling in Continuous Domains [5.675136204504504]
We show that the optimal search distribution might be more peaked around the center of the distribution than the prior distribution.
We provide explicit values for this reshaping of the search distribution depending on the population size.
arXiv Detail & Related papers (2020-04-24T12:25:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.