Integrated Conditional Estimation-Optimization
- URL: http://arxiv.org/abs/2110.12351v4
- Date: Wed, 2 Aug 2023 03:20:46 GMT
- Title: Integrated Conditional Estimation-Optimization
- Authors: Meng Qi, Paul Grigas, Zuo-Jun Max Shen
- Abstract summary: Many real-world optimization problems involve uncertain parameters whose probability distributions can be estimated using contextual feature information.
In contrast to the standard approach of first estimating the distribution of uncertain parameters and then optimizing, we propose an integrated conditional estimation-optimization (ICEO) approach.
We show that our ICEO approach is asymptotically consistent under moderate regularity conditions.
- Score: 6.037383467521294
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many real-world optimization problems involve uncertain parameters with
probability distributions that can be estimated using contextual feature
information. In contrast to the standard approach of first estimating the
distribution of uncertain parameters and then optimizing the objective based on
the estimation, we propose an integrated conditional estimation-optimization
(ICEO) framework that estimates the underlying conditional distribution of the
random parameter while considering the structure of the optimization problem.
We directly model the relationship between the conditional distribution of the
random parameter and the contextual features, and then estimate the
probabilistic model with an objective that aligns with the downstream
optimization problem. We show that our ICEO approach is asymptotically
consistent under moderate regularity conditions and further provide finite
performance guarantees in the form of generalization bounds. Computationally,
performing estimation with the ICEO approach is a non-convex and often
non-differentiable optimization problem. We propose a general methodology for
approximating the potentially non-differentiable mapping from estimated
conditional distribution to the optimal decision by a differentiable function,
which greatly improves the performance of gradient-based algorithms applied to
the non-convex problem. We also provide a polynomial optimization solution
approach in the semi-algebraic case. Numerical experiments are also conducted
to show the empirical success of our approach in different situations including
with limited data samples and model mismatches.
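To make the idea concrete, here is a minimal, self-contained sketch (not the authors' implementation) of the ICEO training principle: a model predicts a conditional distribution over a finite set of cost scenarios, the non-differentiable argmin mapping from that distribution to a decision is replaced by a softmin-weighted surrogate, and the model parameters are fit directly to the downstream decision cost. The scenario support, candidate decision set, softmin temperature, and all function names are illustrative assumptions.

```python
# Illustrative ICEO-style sketch (not the paper's code): predict a conditional
# distribution over K cost scenarios and fit it by the downstream decision cost,
# using a softmin relaxation of the argmin decision mapping so the training
# objective is smooth enough for standard optimizers.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

d, m, K = 3, 4, 5                                    # feature dim, cost dim, # scenarios (assumed)
Z = rng.integers(0, 2, size=(8, m)).astype(float)    # 8 candidate binary decisions (assumed)
scenarios = rng.normal(size=(K, m))                  # fixed cost scenarios c_1..c_K (assumed support)

def predict_dist(theta, x):
    """Softmax model for the conditional scenario probabilities p_theta(. | x)."""
    logits = theta.reshape(K, d) @ x
    e = np.exp(logits - logits.max())
    return e / e.sum()

def soft_decision(p, tau=0.1):
    """Differentiable surrogate of argmin_z E_{c~p}[z.c]: softmin-weighted decision."""
    exp_costs = Z @ (scenarios.T @ p)                # E_p[z.c] for every candidate z
    w = np.exp(-(exp_costs - exp_costs.min()) / tau) # stabilized softmin weights
    w /= w.sum()
    return w @ Z                                     # convex combination of candidates

def iceo_loss(theta, X, C, tau=0.1):
    """Average realized cost of the smoothed induced decisions: the training
    objective is aligned with the downstream optimization problem."""
    total = 0.0
    for x, c in zip(X, C):
        total += soft_decision(predict_dist(theta, x), tau) @ c
    return total / len(X)

# Synthetic data: true costs depend on the contextual features.
X = rng.normal(size=(200, d))
C = X @ rng.normal(size=(d, m)) + 0.1 * rng.normal(size=(200, m))

theta0 = 0.01 * rng.normal(size=K * d)
res = minimize(lambda t: iceo_loss(t, X, C), theta0, method="L-BFGS-B")
print("smoothed downstream cost after training:", res.fun)
```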
Related papers
- Generalization Bounds of Surrogate Policies for Combinatorial Optimization Problems [61.580419063416734]
A recent stream of structured learning approaches has improved the practical state of the art for a range of optimization problems.
The key idea is to exploit the statistical distribution over instances instead of dealing with instances separately.
In this article, we investigate methods that smooth the risk by perturbing the policy, which eases optimization and improves the generalization error.
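As a generic illustration of the perturbed-policy idea (not the construction from the paper), the sketch below smooths the risk of a hard top-k selection policy by averaging its realized objective over Gaussian perturbations of the scores; the noise scale and Monte Carlo scheme are assumptions.

```python
# Generic sketch of risk smoothing by policy perturbation (illustrative only).
# The hard policy picks the k highest-scoring items; the perturbed policy
# averages the realized objective over Gaussian noise added to the scores,
# giving a smoother risk as a function of the scores.
import numpy as np

rng = np.random.default_rng(1)

def topk_decision(scores, k):
    """Hard combinatorial policy: select the k highest-scoring items."""
    sel = np.zeros_like(scores)
    sel[np.argsort(scores)[-k:]] = 1.0
    return sel

def perturbed_objective(scores, values, k, sigma=0.5, n_mc=200):
    """Monte Carlo estimate of the smoothed (perturbed-policy) objective."""
    total = 0.0
    for _ in range(n_mc):
        noisy = scores + sigma * rng.normal(size=scores.shape)
        total += topk_decision(noisy, k) @ values
    return total / n_mc

scores = np.array([0.2, 1.5, 0.9, -0.3, 1.1])
values = np.array([1.0, 0.4, 2.0, 0.1, 1.3])   # realized item values
print("hard objective:     ", topk_decision(scores, 2) @ values)
print("perturbed objective:", perturbed_objective(scores, values, 2))
```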
arXiv Detail & Related papers (2024-07-24T12:00:30Z) - Probabilistic Approach to Black-Box Binary Optimization with Budget Constraints: Application to Sensor Placement [0.0]
We present a fully probabilistic approach for solving binary optimization problems with black-box objective functions and with budget constraints.
In this work we develop conditional Bernoulli distributions to model the random variable conditioned by the total number of nonzero entries.
This approach is generally applicable to binary optimization problems with nonstochastic black-box objective functions and budget constraints.
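For intuition only, the following sketch samples from a conditional Bernoulli model, i.e., independent Bernoulli coordinates conditioned on a fixed number of nonzero entries, using exact rejection sampling; the paper develops more efficient machinery, and the probabilities and budget below are made up.

```python
# Illustrative sketch: sampling from a conditional Bernoulli model, i.e.
# independent Bernoulli(p_i) coordinates conditioned on exactly k nonzero
# entries (the budget). Rejection sampling is exact but can be slow when the
# budget is unlikely under p.
import numpy as np

rng = np.random.default_rng(2)

def sample_conditional_bernoulli(p, k, max_tries=100_000):
    """Draw x ~ Bernoulli(p) conditioned on x.sum() == k (exact, by rejection)."""
    for _ in range(max_tries):
        x = (rng.random(p.shape) < p).astype(int)
        if x.sum() == k:
            return x
    raise RuntimeError("budget k is too unlikely under p for rejection sampling")

p = np.array([0.1, 0.6, 0.3, 0.8, 0.2, 0.5])   # per-sensor activation probabilities (assumed)
print(sample_conditional_bernoulli(p, k=3))
```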
arXiv Detail & Related papers (2024-06-09T15:37:28Z) - BO4IO: A Bayesian optimization approach to inverse optimization with uncertainty quantification [5.031974232392534]
This work addresses data-driven inverse optimization (IO).
The goal is to estimate unknown parameters in an optimization model from observed decisions that can be assumed to be optimal or near-optimal.
arXiv Detail & Related papers (2024-05-28T06:52:17Z) - Distributed Fractional Bayesian Learning for Adaptive Optimization [7.16937736207894]
This paper considers a distributed adaptive optimization problem, where all agents only have access to their local cost functions with a common unknown parameter.
We aim to provide valuable insights for addressing parameter uncertainty in distributed optimization problems and simultaneously find the optimal solution.
arXiv Detail & Related papers (2024-04-17T13:09:33Z) - End-to-End Learning for Fair Multiobjective Optimization Under Uncertainty [55.04219793298687]
The Predict-Then-Optimize (PtO) paradigm in machine learning aims to maximize downstream decision quality.
This paper extends the PtO methodology to optimization problems with nondifferentiable Ordered Weighted Averaging (OWA) objectives.
It shows how optimization of OWA functions can be effectively integrated with parametric prediction for fair and robust optimization under uncertainty.
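For reference, the OWA objective itself is simple to state; the sketch below computes it for per-group costs with nonincreasing weights, which is the fairness-inducing, nondifferentiable piece, while the paper's end-to-end integration with a parametric predictor is not reproduced here.

```python
# Minimal illustration of an Ordered Weighted Averaging (OWA) objective.
# With nonincreasing weights, OWA puts more weight on the worst outcomes,
# which is what makes it a fairness criterion; the sorting step is what
# makes it nondifferentiable.
import numpy as np

def owa(outcomes, weights):
    """OWA(y; w) = sum_i w_i * y_(i), with y sorted from worst (largest cost) to best."""
    y_sorted = np.sort(outcomes)[::-1]          # worst outcome first
    return float(np.dot(weights, y_sorted))

costs = np.array([3.0, 1.0, 7.0, 2.0])          # e.g., per-group costs of a decision (assumed)
weights = np.array([0.4, 0.3, 0.2, 0.1])        # nonincreasing => emphasis on worst-off groups
print("OWA cost:", owa(costs, weights))         # 0.4*7 + 0.3*3 + 0.2*2 + 0.1*1 = 4.2
```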
arXiv Detail & Related papers (2024-02-12T16:33:35Z) - Bayesian Nonparametrics Meets Data-Driven Distributionally Robust Optimization [29.24821214671497]
Training machine learning and statistical models often involves optimizing a data-driven risk criterion.
We propose a novel robust criterion by combining insights from Bayesian nonparametric (i.e., Dirichlet process) theory and a recent decision-theoretic model of smooth ambiguity-averse preferences.
For practical implementation, we propose and study tractable approximations of the criterion based on well-known Dirichlet process representations.
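One simple way to approximate such a criterion, offered here as a rough sketch rather than the paper's construction, is to draw random Dirichlet weights over the observed losses and average an exponential (ambiguity-averse) transform of the weighted empirical risk; the ambiguity function, its scale, and the resampling scheme are assumptions.

```python
# Hedged sketch of a Dirichlet-weighted, ambiguity-averse risk criterion in the
# spirit described above (the exact criterion and Dirichlet-process
# representation used in the paper are not reproduced; phi(t) = exp(lam*t) and
# the weighting scheme below are assumptions).
import numpy as np

rng = np.random.default_rng(3)

def robust_criterion(losses, n_draws=500, lam=2.0):
    """Average phi(weighted empirical loss) over random Dirichlet weights,
    reported on a certainty-equivalent scale."""
    n = len(losses)
    total = 0.0
    for _ in range(n_draws):
        w = rng.dirichlet(np.ones(n))           # one draw of random weights over the data
        total += np.exp(lam * np.dot(w, losses))
    return np.log(total / n_draws) / lam

losses = rng.normal(loc=1.0, scale=0.5, size=50)    # per-sample losses of a candidate model
print("plain empirical risk:  ", losses.mean())
print("ambiguity-averse risk: ", robust_criterion(losses))
```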
arXiv Detail & Related papers (2024-01-28T21:19:15Z) - Likelihood Ratio Confidence Sets for Sequential Decision Making [51.66638486226482]
We revisit the likelihood-based inference principle and propose to use likelihood ratios to construct valid confidence sequences.
Our method is especially suitable for problems with well-specified likelihoods.
We show how to provably choose the best sequence of estimators and shed light on connections to online convex optimization.
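A generic plug-in instance of this idea for a Bernoulli mean, assumed here for illustration rather than taken from the paper, builds a running likelihood ratio between a plug-in predictive model and each candidate parameter; Ville's inequality then makes the sublevel set a valid confidence sequence at every time step.

```python
# Hedged sketch of a likelihood-ratio confidence sequence for a Bernoulli mean.
# Under the true theta, the ratio of the plug-in predictive likelihood to the
# likelihood at theta is a nonnegative martingale with mean 1, so by Ville's
# inequality {theta : ratio < 1/alpha} is valid uniformly over time.
import numpy as np

rng = np.random.default_rng(4)

def lr_confidence_set(xs, alpha=0.05, grid=np.linspace(0.01, 0.99, 99)):
    log_num = 0.0                    # plug-in predictive log-likelihood
    log_den = np.zeros_like(grid)    # log-likelihood at each candidate theta
    k, n = 1.0, 2.0                  # pseudo-counts keep the plug-in inside (0, 1)
    for x in xs:
        p_hat = k / n                # estimate from past data only
        log_num += np.log(p_hat if x == 1 else 1.0 - p_hat)
        log_den += np.log(np.where(x == 1, grid, 1.0 - grid))
        k += x
        n += 1
    keep = (log_num - log_den) < np.log(1.0 / alpha)
    return grid[keep]

xs = (rng.random(300) < 0.3).astype(int)          # Bernoulli(0.3) observations
cs = lr_confidence_set(xs)
print(f"confidence set after {len(xs)} samples: [{cs.min():.2f}, {cs.max():.2f}]")
```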
arXiv Detail & Related papers (2023-11-08T00:10:21Z) - Optimal Learning via Moderate Deviations Theory [4.6930976245638245]
We develop a systematic construction of highly accurate confidence intervals by using a moderate deviation principle-based approach.
It is shown that the proposed confidence intervals are statistically optimal in the sense that they satisfy criteria regarding exponential accuracy, minimality, consistency, mischaracterization probability, and eventual uniformly most accurate (UMA) property.
arXiv Detail & Related papers (2023-05-23T19:57:57Z) - Outlier-Robust Sparse Estimation via Non-Convex Optimization [73.18654719887205]
We explore the connection between outlier-robust high-dimensional statistics and non-convex optimization in the presence of sparsity constraints.
We develop novel and simple optimization formulations for these problems.
As a corollary, we obtain that any first-order method that efficiently converges to stationarity yields an efficient algorithm for these tasks.
arXiv Detail & Related papers (2021-09-23T17:38:24Z) - Amortized Conditional Normalized Maximum Likelihood: Reliable Out of Distribution Uncertainty Estimation [99.92568326314667]
We propose the amortized conditional normalized maximum likelihood (ACNML) method as a scalable general-purpose approach for uncertainty estimation.
Our algorithm builds on the conditional normalized maximum likelihood (CNML) coding scheme, which has minimax optimal properties according to the minimum description length principle.
We demonstrate that ACNML compares favorably to a number of prior techniques for uncertainty estimation in terms of calibration on out-of-distribution inputs.
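For a sense of the underlying CNML computation that ACNML amortizes, the toy sketch below evaluates it exactly for a Bernoulli source: each candidate outcome is scored under the maximum-likelihood parameter refit on the data augmented with that outcome, and the scores are normalized. The example data are made up.

```python
# Tiny numeric illustration of the CNML idea (illustrative only; ACNML itself
# approximates this computation for deep networks). For each candidate outcome
# y, refit the MLE on the data augmented with y, score y under the refit
# model, then normalize across candidates.
import numpy as np

def cnml_bernoulli(observations):
    """CNML predictive distribution over the next outcome of a Bernoulli source."""
    n, k = len(observations), int(np.sum(observations))
    scores = []
    for y in (0, 1):
        theta_y = (k + y) / (n + 1)             # MLE after appending candidate y
        scores.append(theta_y if y == 1 else 1.0 - theta_y)
    scores = np.array(scores)
    return scores / scores.sum()                # normalization encodes model flexibility

obs = np.array([1, 0, 1, 1, 0, 1, 1, 0, 1, 1])  # 7 ones out of 10 (made-up data)
print("CNML p(next = 0), p(next = 1):", cnml_bernoulli(obs))
```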
arXiv Detail & Related papers (2020-11-05T08:04:34Z) - Robust, Accurate Stochastic Optimization for Variational Inference [68.83746081733464]
We show that common optimization methods lead to poor variational approximations if the problem is moderately large.
Motivated by these findings, we develop a more robust and accurate optimization framework by viewing the underlying algorithm as producing a Markov chain.
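A toy sketch of the "optimizer as a Markov chain" viewpoint (illustrative, not the paper's full procedure with convergence diagnostics): with a fixed step size, stochastic gradient iterates fluctuate around the optimum, and averaging the tail of the trajectory is markedly more accurate than taking the last iterate.

```python
# Simple sketch of iterate averaging motivated by viewing a constant-step-size
# stochastic optimizer as a (roughly stationary) Markov chain: discard the
# warm-up, then average the remaining iterates instead of trusting the last one.
import numpy as np

rng = np.random.default_rng(5)

mu_true = 2.0
def noisy_grad(mu):
    """Stochastic gradient of E[(x - mu)^2] / 2 with x ~ N(mu_true, 1)."""
    return mu - rng.normal(mu_true, 1.0)

mu, lr, iters = 0.0, 0.1, 2000
trace = []
for _ in range(iters):
    mu -= lr * noisy_grad(mu)
    trace.append(mu)

tail = np.array(trace[iters // 2:])            # discard warm-up, average the rest
print("last-iterate error:", abs(trace[-1] - mu_true))
print("tail-average error:", abs(tail.mean() - mu_true))
```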
arXiv Detail & Related papers (2020-09-01T19:12:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.