Data-Driven Combinatorial Optimization with Incomplete Information: a
Distributionally Robust Optimization Approach
- URL: http://arxiv.org/abs/2105.14139v1
- Date: Fri, 28 May 2021 23:17:35 GMT
- Authors: Sergey S. Ketkov, Andrei S. Shilov, Oleg A. Prokopyev
- Abstract summary: We analyze linear optimization problems where the cost vector is not known a priori, but is only observable through a finite data set.
The goal is to find a procedure that transforms the data set into an estimate of the expected value of the objective function.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this study we analyze linear combinatorial optimization problems where the
cost vector is not known a priori, but is only observable through a finite data
set. In contrast to related studies, we presume that the number of
observations with respect to particular components of the cost vector may vary.
The goal is to find a procedure that transforms the data set into an estimate
of the expected value of the objective function (which is referred to as a
prediction rule) and a procedure that retrieves a candidate decision (which is
referred to as a prescription rule). We aim to find the least conservative
prediction and prescription rules, which satisfy some specified asymptotic
guarantees. We demonstrate that the resulting vector optimization problems
admit a weakly optimal solution, which can be obtained by solving a particular
distributionally robust optimization problem. Specifically, the decision-maker
may optimize the worst-case expected loss across all probability distributions
with given component-wise relative entropy distances from the empirical
marginal distributions. Finally, we perform numerical experiments to analyze
the out-of-sample performance of the proposed solution approach.
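For intuition, the worst-case expectation over a relative entropy (KL) ball around an empirical distribution admits a well-known one-dimensional dual (Hu and Hong, 2013), which under the paper's component-wise formulation can be evaluated per cost component. The sketch below is a minimal illustration of that dual; the function name, the data, and the radii are illustrative assumptions, not the paper's calibrated prediction rule.

```python
# A minimal sketch of the worst-case expected cost over a KL (relative
# entropy) ball around an empirical distribution, via the standard dual
#   sup_{KL(Q||P) <= rho} E_Q[z] = min_{a > 0} a*log E_P[exp(z/a)] + a*rho.
# Component-wise radii `rho` are illustrative, not the paper's values.
import numpy as np
from scipy.optimize import minimize_scalar

def worst_case_mean(samples, rho):
    """Worst-case mean of one cost component over a KL ball of radius rho."""
    z = np.asarray(samples, dtype=float)
    if rho == 0.0:
        return z.mean()

    def dual(alpha):
        # alpha * log E[exp(z/alpha)] + alpha * rho, computed stably.
        s = z / alpha
        m = s.max()
        return alpha * (m + np.log(np.mean(np.exp(s - m)))) + alpha * rho

    res = minimize_scalar(dual, bounds=(1e-6, 1e6), method="bounded")
    return res.fun

# Component-wise robust cost estimates from unequal sample sizes, echoing
# the paper's setting where observation counts differ per component.
rng = np.random.default_rng(0)
data = [rng.normal(1.0, 0.5, size=n) for n in (5, 50, 500)]  # varying n_i
rho = [0.5, 0.05, 0.005]  # larger radius for scarcer data (assumption)
c_robust = [worst_case_mean(d, r) for d, r in zip(data, rho)]
print(c_robust)
```

A robust cost vector assembled this way can then be passed to the nominal combinatorial problem to obtain a prescription.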
Related papers
- Generalized Eigenvalue Problems with Generative Priors [23.711322973038897]
Generalized eigenvalue problems (GEPs) find applications in various fields of science and engineering.
We study GEPs under generative priors, assuming that the underlying leading generalized eigenvector lies within the range of a Lipschitz continuous generative model.
We propose an iterative algorithm called the Projected Rayleigh Flow Method (PRFM) to approximate the optimal solution.
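The following is a rough sketch of a projected Rayleigh-flow iteration under a strong simplification: the range of the generative model is replaced by a fixed linear subspace U, so the projection step is just U U^T. The paper's PRFM projects onto the range of a Lipschitz generative model, which this stand-in does not capture.

```python
# A simplified projected Rayleigh-flow iteration for the GEP A v = lambda B v:
# ascend the Rayleigh quotient, then project onto a constraint set. The
# subspace U stands in for the generative prior purely for illustration.
import numpy as np

def projected_rayleigh_flow(A, B, U, step=0.1, iters=500):
    v = U @ np.ones(U.shape[1])          # start inside the subspace
    v /= np.linalg.norm(v)
    for _ in range(iters):
        q = (v @ A @ v) / (v @ B @ v)    # current Rayleigh quotient
        grad = 2 * (A @ v - q * (B @ v)) / (v @ B @ v)
        v = v + step * grad              # Rayleigh-flow ascent step
        v = U @ (U.T @ v)                # project onto range(U)
        v /= np.linalg.norm(v)
    return v, (v @ A @ v) / (v @ B @ v)

rng = np.random.default_rng(1)
n, k = 20, 3
M = rng.normal(size=(n, n)); A = M + M.T                  # symmetric A
N = rng.normal(size=(n, n)); B = N @ N.T + n * np.eye(n)  # B positive definite
U, _ = np.linalg.qr(rng.normal(size=(n, k)))              # orthonormal subspace
v, lam = projected_rayleigh_flow(A, B, U)
print(lam)
```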
arXiv Detail & Related papers (2024-11-02T18:16:07Z)
- BO4IO: A Bayesian optimization approach to inverse optimization with uncertainty quantification [5.031974232392534]
This work addresses data-driven inverse optimization (IO).
The goal is to estimate unknown parameters in an optimization model from observed decisions that can be assumed to be optimal or near-optimal.
arXiv Detail & Related papers (2024-05-28T06:52:17Z)
- Distributed Fractional Bayesian Learning for Adaptive Optimization [7.16937736207894]
This paper considers a distributed adaptive optimization problem, where all agents only have access to their local cost functions with a common unknown parameter.
We aim to provide valuable insights for addressing parameter uncertainty in distributed optimization problems and simultaneously find the optimal solution.
arXiv Detail & Related papers (2024-04-17T13:09:33Z)
- Likelihood Ratio Confidence Sets for Sequential Decision Making [51.66638486226482]
We revisit the likelihood-based inference principle and propose to use likelihood ratios to construct valid confidence sequences.
Our method is especially suitable for problems with well-specified likelihoods.
We show how to provably choose the best sequence of estimators and shed light on connections to online convex optimization.
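A minimal instance of this kind of construction, assuming a unit-variance Gaussian likelihood and a simple prequential plug-in estimator (both illustrative choices, not the paper's estimator sequence): the running likelihood ratio is a nonnegative martingale under the true parameter, so Ville's inequality makes the set below valid at every time step.

```python
# A likelihood-ratio confidence sequence for the mean of a unit-variance
# Gaussian. The numerator uses a predictable (prequential) plug-in mean,
# so under the true mean the ratio is a nonnegative martingale and
# {theta : R_t(theta) < 1/alpha} is an anytime-valid confidence set.
import numpy as np
from scipy.stats import norm

def lr_confidence_set(x, grid, alpha=0.05):
    """Return the grid points still inside the confidence sequence."""
    x = np.asarray(x, float)
    log_ratio = np.zeros_like(grid)
    mean_so_far = 0.0                    # plug-in estimate, uses past data only
    for s, xs in enumerate(x):
        log_ratio += norm.logpdf(xs, loc=mean_so_far) - norm.logpdf(xs, loc=grid)
        mean_so_far = x[:s + 1].mean()
    return grid[log_ratio < np.log(1.0 / alpha)]

rng = np.random.default_rng(2)
samples = rng.normal(0.3, 1.0, size=200)
grid = np.linspace(-1.0, 1.5, 501)
kept = lr_confidence_set(samples, grid)
print(kept.min(), kept.max())            # interval containing the true 0.3
```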
arXiv Detail & Related papers (2023-11-08T00:10:21Z)
- Optimize-via-Predict: Realizing out-of-sample optimality in data-driven optimization [0.0]
We examine a formulation for data-driven optimization wherein the decision-maker is not privy to the true distribution.
We define a prescriptive solution as a decision rule mapping such a data set to decisions.
We present an optimization problem that solves for such an out-of-sample optimal solution, and does so efficiently via a combination of sampling and bisection search algorithms.
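To illustrate the sampling-plus-bisection idea on a toy instance, the sketch below uses the newsvendor problem as a stand-in (it is not the paper's setting): the optimal order level sits at the critical fractile of the demand distribution, so one can bisect on the empirical CDF built from sampled data.

```python
# Bisection on an empirical CDF for a one-dimensional data-driven decision.
# The optimal newsvendor order level q satisfies F(q) = cu / (cu + co).
import numpy as np

def newsvendor_bisection(demand, cu=4.0, co=1.0, tol=1e-6):
    d = np.sort(np.asarray(demand, float))
    target = cu / (cu + co)              # critical fractile
    lo, hi = d[0], d[-1]
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if np.mean(d <= mid) < target:   # empirical CDF at mid
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

rng = np.random.default_rng(3)
demand = rng.gamma(5.0, 20.0, size=2000)
print(newsvendor_bisection(demand))
```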
arXiv Detail & Related papers (2023-09-20T08:48:50Z)
- Generalizing Bayesian Optimization with Decision-theoretic Entropies [102.82152945324381]
We consider a generalization of Shannon entropy from work in statistical decision theory.
We first show that special cases of this entropy lead to popular acquisition functions used in BO procedures.
We then show how alternative choices for the loss yield a flexible family of acquisition functions.
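As a concrete example of one such popular acquisition function, the sketch below evaluates expected improvement from given Gaussian-process posterior means and standard deviations; the decision-theoretic entropy in the paper recovers acquisitions of this kind as special cases. The posterior values and the exploration parameter `xi` are assumptions for illustration.

```python
# Expected improvement for minimization, given GP posterior mean/std.
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, best, xi=0.01):
    """EI: expected amount by which the posterior improves over `best`."""
    imp = best - mu - xi
    z = imp / sigma
    return imp * norm.cdf(z) + sigma * norm.pdf(z)

mu = np.array([0.2, -0.1, 0.4])          # posterior means at candidates
sigma = np.array([0.3, 0.5, 0.1])        # posterior std devs at candidates
print(expected_improvement(mu, sigma, best=0.0))
```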
arXiv Detail & Related papers (2022-10-04T04:43:58Z)
- Optimal Algorithms for Mean Estimation under Local Differential Privacy [55.32262879188817]
We show that PrivUnit achieves the optimal variance among a large family of locally private randomizers.
We also develop a new variant of PrivUnit based on the Gaussian distribution which is more amenable to mathematical analysis and enjoys the same optimality guarantees.
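For context on the setting, the sketch below performs locally private mean estimation of unit-norm vectors with the classical Gaussian mechanism. This is plainly not PrivUnit or the paper's Gaussian variant, only a baseline illustrating the problem those mechanisms improve upon.

```python
# Locally private mean estimation with the classical Gaussian mechanism.
# Unit-norm vectors have L2 sensitivity 2, and sigma is calibrated for
# (eps, delta)-DP with eps <= 1. Not the PrivUnit construction.
import numpy as np

rng = np.random.default_rng(5)

def privatize(v, eps=1.0, delta=1e-5):
    """Each user releases a noisy version of their unit-norm vector."""
    sigma = 2.0 * np.sqrt(2.0 * np.log(1.25 / delta)) / eps
    return v + rng.normal(0.0, sigma, size=v.shape)

true_mean = np.array([0.6, -0.8])        # already unit norm
users = true_mean + 0.05 * rng.normal(size=(10000, 2))
users /= np.linalg.norm(users, axis=1, keepdims=True)
reports = np.array([privatize(u) for u in users])
print(reports.mean(axis=0))              # noisy but unbiased mean estimate
```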
arXiv Detail & Related papers (2022-05-05T06:43:46Z)
- Learning Proximal Operators to Discover Multiple Optima [66.98045013486794]
We present an end-to-end method to learn the proximal operator across a family of non-convex problems.
We show that for weakly-convex objectives and under mild conditions, the method converges globally.
arXiv Detail & Related papers (2022-01-28T05:53:28Z)
- Integrated Conditional Estimation-Optimization [6.037383467521294]
Many real-world optimization problems involve uncertain parameters whose probability distributions can be estimated using contextual feature information.
In contrast to the standard approach of estimating the distribution of uncertain parameters, we propose an integrated conditional estimation approach.
We show that our ICEO approach is asymptotically consistent under moderate conditions.
arXiv Detail & Related papers (2021-10-24T04:49:35Z)
- Outlier-Robust Sparse Estimation via Non-Convex Optimization [73.18654719887205]
We explore the connection between outlier-robust high-dimensional statistics and non-convex optimization in the presence of sparsity constraints.
We develop novel and simple optimization formulations for these problems.
As a corollary, we obtain that any first-order method that efficiently converges to stationarity yields an efficient algorithm for these tasks.
arXiv Detail & Related papers (2021-09-23T17:38:24Z)
- Fast Rates for Contextual Linear Optimization [52.39202699484225]
We show that a naive plug-in approach achieves regret convergence rates that are significantly faster than methods that directly optimize downstream decision performance.
Our results are overall positive for practice: predictive models are easy and fast to train using existing tools, simple to interpret, and, as we show, lead to decisions that perform very well.
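A minimal sketch of the plug-in pipeline on synthetic data (the selection problem and the linear cost model are assumptions for illustration): fit a regression model for the cost vector from context features, then solve the downstream linear problem with the predicted costs.

```python
# Predict-then-optimize for contextual linear optimization: regress the
# cost vector on context, then optimize with the predicted costs.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(4)
n, p, d, k = 1000, 5, 8, 3               # samples, features, items, budget
X = rng.normal(size=(n, p))
W = rng.normal(size=(p, d))
C = X @ W + 0.1 * rng.normal(size=(n, d))  # observed cost vectors

model = LinearRegression().fit(X, C)     # the plug-in prediction rule

def prescribe(x):
    """Select the k items with the lowest predicted cost (the optimizer)."""
    c_hat = model.predict(x.reshape(1, -1)).ravel()
    choice = np.zeros(d)
    choice[np.argsort(c_hat)[:k]] = 1.0
    return choice

x_new = rng.normal(size=p)
print(prescribe(x_new))
```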
arXiv Detail & Related papers (2020-11-05T18:43:59Z)