Rockafellian Relaxation and Stochastic Optimization under Perturbations
- URL: http://arxiv.org/abs/2204.04762v4
- Date: Mon, 20 Nov 2023 19:20:57 GMT
- Title: Rockafellian Relaxation and Stochastic Optimization under Perturbations
- Authors: Johannes O. Royset, Louis L. Chen, and Eric Eckstrand
- Abstract summary: We develop an "optimistic" framework based on Rockafellian relaxations in which optimization is conducted not only over the original decision space but also jointly with a choice of model perturbation.
The framework centers on the novel concepts of exact and limit-exact Rockafellians, with interpretations of "negative" regularization emerging in certain settings.
- Score: 0.056247917037481096
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In practice, optimization models are often prone to unavoidable inaccuracies
due to dubious assumptions and corrupted data. Traditionally, this placed
special emphasis on risk-based and robust formulations, and their focus on
"conservative" decisions. We develop, in contrast, an "optimistic" framework
based on Rockafellian relaxations in which optimization is conducted not only
over the original decision space but also jointly with a choice of model
perturbation. The framework enables us to address challenging problems with
ambiguous probability distributions from the areas of two-stage stochastic
optimization without relatively complete recourse, probability functions
lacking continuity properties, expectation constraints, and outlier analysis.
We are also able to circumvent the fundamental difficulty in stochastic
optimization that convergence of distributions fails to guarantee convergence
of expectations. The framework centers on the novel concepts of exact and
limit-exact Rockafellians, with interpretations of "negative" regularization
emerging in certain settings. We illustrate the role of Phi-divergence, examine
rates of convergence under changing distributions, and explore extensions to
first-order optimality conditions. The main development is free of assumptions
about convexity, smoothness, and even continuity of objective functions.
Numerical results in the setting of computer vision and text analytics with
label noise illustrate the framework.
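To make the joint optimization over decisions and perturbations concrete, here is a minimal sketch of an optimistic Rockafellian formulation; the penalty $\theta\|u\|_1$ and the symbols $f_0$, $f$, $\theta$ are illustrative assumptions, not the paper's exact notation.

```latex
% Original problem: minimize f_0(x) over x in R^n, where f_0 may lack
% convexity, smoothness, or even continuity. Pick a Rockafellian
% f : R^n x R^m -> R that embeds the original problem at the zero
% perturbation, f(x, 0) = f_0(x), and relax optimistically:
\[
  \min_{x \in \mathbb{R}^n,\; u \in \mathbb{R}^m} \; f(x, u) + \theta \,\|u\|_1 .
\]
% The Rockafellian is "exact" when some finite penalty theta > 0 recovers
% the minimizers of the original problem; limit-exactness concerns
% recovering them in the limit along a sequence of such relaxations.
```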
Related papers
- Probabilistic Approach to Black-Box Binary Optimization with Budget Constraints: Application to Sensor Placement [0.0]
We present a fully probabilistic approach for solving binary optimization problems with black-box objective functions and with budget constraints.
In this work, we develop conditional Bernoulli distributions to model the random variable conditioned on the total number of nonzero entries.
This approach is generally applicable to binary optimization problems with nonstochastic black-box objective functions and budget constraints.
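As a loose illustration of the conditional-Bernoulli idea (not the paper's construction), the sketch below draws a binary design with an exact budget by rejection sampling from independent Bernoullis; the helper sample_conditional_bernoulli and the probabilities p are hypothetical.

```python
import numpy as np

def sample_conditional_bernoulli(p, k, rng, max_tries=100_000):
    """Draw a binary vector from independent Bernoulli(p_i) components,
    conditioned on having exactly k nonzero entries.

    Naive rejection sampling for illustration only; the paper develops
    dedicated conditional Bernoulli machinery, not this scheme."""
    for _ in range(max_tries):
        x = rng.random(len(p)) < p
        if x.sum() == k:
            return x.astype(int)
    raise RuntimeError("rejection failed: a budget of k ones is too unlikely")

rng = np.random.default_rng(0)
p = np.array([0.2, 0.7, 0.4, 0.6, 0.1])  # hypothetical activation probabilities
x = sample_conditional_bernoulli(p, k=2, rng=rng)
print(x, "-> budget used:", x.sum())      # a 0/1 design placing exactly 2 sensors
```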
arXiv Detail & Related papers (2024-06-09T15:37:28Z)
- Non-Convex Robust Hypothesis Testing using Sinkhorn Uncertainty Sets [18.46110328123008]
We present a new framework to address the non-convex robust hypothesis testing problem.
The goal is to seek the optimal detector that minimizes the maximum worst-case risk over Sinkhorn uncertainty sets.
arXiv Detail & Related papers (2024-03-21T20:29:43Z)
- An Inexact Halpern Iteration with Application to Distributionally Robust Optimization [9.529117276663431]
We investigate the inexact variants of the scheme in both deterministic and stochastic convergence settings.
We show that by choosing the inexactness appropriately, the inexact schemes admit an $O(k^{-1})$ convergence rate in terms of the (expected) residue norm.
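For orientation, the classical Halpern iteration anchors each step to the starting point, x_{k+1} = β_k x_0 + (1 − β_k) T(x_k) with β_k = 1/(k+2). Below is a minimal sketch with an artificially inexact operator evaluation; the operator T and the noise schedule are assumptions, not the paper's scheme.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20
M = rng.standard_normal((n, n))
A = M @ M.T / n + np.eye(n)            # positive definite quadratic model
b = rng.standard_normal(n)
L = np.linalg.eigvalsh(A).max()
gamma = 1.0 / L                        # step size making T nonexpansive

def T(x):
    """Gradient step for f(x) = 0.5 x'Ax - b'x; its fixed point solves Ax = b."""
    return x - gamma * (A @ x - b)

x0 = np.zeros(n)
x = x0.copy()
for k in range(2000):
    beta = 1.0 / (k + 2)               # classical Halpern anchoring weight
    Tx = T(x) + rng.standard_normal(n) * 1e-3 / (k + 1) ** 2  # summable error
    x = beta * x0 + (1 - beta) * Tx    # anchor toward the starting point

# The fixed-point residual is the quantity whose (expected) norm decays as O(1/k).
print("residual ||x - T(x)||:", np.linalg.norm(x - T(x)))
```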
arXiv Detail & Related papers (2024-02-08T20:12:47Z)
- Bayesian Nonparametrics Meets Data-Driven Distributionally Robust Optimization [29.24821214671497]
Training machine learning and statistical models often involves optimizing a data-driven risk criterion.
We propose a novel robust criterion by combining insights from Bayesian nonparametric (i.e., Dirichlet process) theory and a recent decision-theoretic model of smooth ambiguity-averse preferences.
For practical implementation, we propose and study tractable approximations of the criterion based on well-known Dirichlet process representations.
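A rough sketch of the kind of Dirichlet-process-based approximation being described: Bayesian-bootstrap (Dirichlet) weights yield posterior-style draws of the risk, aggregated through an exponential certainty equivalent as a stand-in for the smooth-ambiguity functional. The function smooth_ambiguity_risk, the exponential aggregator, and the parameters lam and alpha are assumptions, not the paper's exact criterion.

```python
import numpy as np

def smooth_ambiguity_risk(losses, lam=2.0, alpha=1.0, n_draws=500, rng=None):
    """Approximate a smooth-ambiguity risk criterion by Monte Carlo.

    Draw Dirichlet(alpha, ..., alpha) weights over the data (alpha = 1 is
    the Bayesian bootstrap, a simple Dirichlet-process surrogate), compute
    the weighted risk per draw, and aggregate through an exponential
    certainty equivalent; larger lam means more ambiguity aversion."""
    if rng is None:
        rng = np.random.default_rng(0)
    n = len(losses)
    w = rng.dirichlet(np.full(n, alpha), size=n_draws)  # (n_draws, n) weights
    weighted_risks = w @ losses                         # one E_P[loss] per draw
    return np.log(np.mean(np.exp(lam * weighted_risks))) / lam

losses = np.random.default_rng(2).exponential(size=100)  # per-sample losses
print("plain empirical risk: ", losses.mean())
print("ambiguity-averse risk:", smooth_ambiguity_risk(losses))  # >= plain risk
```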
arXiv Detail & Related papers (2024-01-28T21:19:15Z)
- Likelihood Ratio Confidence Sets for Sequential Decision Making [51.66638486226482]
We revisit the likelihood-based inference principle and propose to use likelihood ratios to construct valid confidence sequences.
Our method is especially suitable for problems with well-specified likelihoods.
We show how to provably choose the best sequence of estimators and shed light on connections to online convex optimization.
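One standard way to build such likelihood-ratio confidence sequences, sketched below for a Bernoulli mean: compare the running plug-in predictive likelihood against each candidate parameter, so that by Ville's inequality the sublevel set is valid at all times. The smoothed plug-in estimator and the parameter grid are illustrative assumptions, not the paper's estimator-selection method.

```python
import numpy as np

def lr_confidence_sequence(xs, alpha=0.05, grid=None):
    """Confidence sequence for a Bernoulli mean via likelihood ratios.

    For each candidate theta, track the ratio of the plug-in predictive
    likelihood (a smoothed running mean, fixed before each observation)
    to the likelihood under theta; {theta : ratio < 1/alpha} is valid
    uniformly over time by Ville's inequality."""
    grid = grid if grid is not None else np.linspace(0.001, 0.999, 999)
    log_ratio = np.zeros_like(grid)
    s, n = 0.5, 1.0                    # Laplace-smoothed running counts
    sets = []
    for x in xs:
        p_hat = s / n                  # predictable plug-in estimate
        log_ratio += (np.log(p_hat if x else 1 - p_hat)
                      - np.log(np.where(x, grid, 1 - grid)))
        s, n = s + x, n + 1
        sets.append(grid[log_ratio < np.log(1 / alpha)])
    return sets

rng = np.random.default_rng(3)
xs = (rng.random(200) < 0.3).astype(int)     # Bernoulli(0.3) data
final = lr_confidence_sequence(xs)[-1]
print("final interval ~ [%.3f, %.3f]" % (final.min(), final.max()))
```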
arXiv Detail & Related papers (2023-11-08T00:10:21Z)
- Distributed Sketching for Randomized Optimization: Exact Characterization, Concentration and Lower Bounds [54.51566432934556]
We consider distributed optimization methods for problems where forming the Hessian is computationally challenging.
We leverage randomized sketches for reducing the problem dimensions as well as preserving privacy and improving straggler resilience in asynchronous distributed systems.
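A minimal sketch of the Hessian-sketching idea for least squares: replace the expensive Gram matrix AᵀA with (SA)ᵀ(SA) for a small Gaussian sketch S. The dimensions and the single Newton step below are illustrative assumptions; the paper's analysis covers distributed and asynchronous variants.

```python
import numpy as np

rng = np.random.default_rng(4)
n, d, m = 10_000, 50, 400              # tall data matrix, much smaller sketch
A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)

# Gaussian sketch S (m x n): (SA)'(SA) approximates the Hessian A'A of the
# least-squares objective at a fraction of the cost; in a distributed setup
# each worker could hold its own independent sketch.
S = rng.standard_normal((m, n)) / np.sqrt(m)
SA = S @ A

g = -A.T @ b                                 # exact gradient at x = 0
x_sketch = np.linalg.solve(SA.T @ SA, -g)    # one sketched Newton step

x_exact = np.linalg.lstsq(A, b, rcond=None)[0]
print("relative error:",
      np.linalg.norm(x_sketch - x_exact) / np.linalg.norm(x_exact))
# Iterating the sketched step (iterative Hessian sketch) drives the error down.
```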
arXiv Detail & Related papers (2022-03-18T05:49:13Z)
- Integrated Conditional Estimation-Optimization [6.037383467521294]
Many real-world optimization problems involve uncertain parameters whose probability distributions can be estimated using contextual feature information.
In contrast to the standard approach of estimating the distribution of uncertain parameters, we propose an integrated conditional estimation approach.
We show that our ICEO approach is theoretically consistent under moderate conditions.
arXiv Detail & Related papers (2021-10-24T04:49:35Z)
- Optimal Rates for Random Order Online Optimization [60.011653053877126]
We study the random-order online optimization setting of Garber et al. (2020), where the loss functions may be chosen by an adversary but are then presented online in a uniformly random order.
We show that the algorithms of Garber et al. (2020) achieve the optimal bounds and significantly improve their stability.
arXiv Detail & Related papers (2021-06-29T09:48:46Z)
- High Probability Complexity Bounds for Non-Smooth Stochastic Optimization with Heavy-Tailed Noise [51.31435087414348]
It is essential to theoretically guarantee that algorithms provide a small objective residual with high probability.
Existing methods for non-smooth convex optimization have complexity bounds with dependence on confidence level.
We propose novel stepsize rules for two methods with gradient clipping.
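For concreteness, here is a generic clipped-SGD step of the kind being analyzed: rescale each stochastic gradient to a norm cap before stepping. The quadratic objective, Student-t noise, and the 1/√k stepsize below are placeholder assumptions, not the paper's proposed stepsize rules.

```python
import numpy as np

def clipped_sgd_step(x, grad, step, clip_level):
    """One SGD step with gradient clipping: rescale the stochastic gradient
    to norm at most clip_level, which tames heavy-tailed noise."""
    g_norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_level / (g_norm + 1e-12))
    return x - step * clipped

rng = np.random.default_rng(5)
x = np.ones(10) * 5.0
for k in range(1, 5001):
    noise = rng.standard_t(df=2, size=10)   # heavy-tailed (infinite-variance) noise
    grad = x + noise                        # noisy gradient of f(x) = ||x||^2 / 2
    x = clipped_sgd_step(x, grad, step=1.0 / k ** 0.5, clip_level=10.0)
print("final distance to optimum ||x||:", np.linalg.norm(x))
```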
arXiv Detail & Related papers (2021-06-10T17:54:21Z)
- Modeling the Second Player in Distributionally Robust Optimization [90.25995710696425]
We argue for the use of neural generative models to characterize the worst-case distribution.
This approach poses a number of implementation and optimization challenges.
We find that the proposed approach yields models that are more robust than comparable baselines.
arXiv Detail & Related papers (2021-03-18T14:26:26Z)
- Distributionally Robust Bayesian Quadrature Optimization [60.383252534861136]
We study BQO under distributional uncertainty in which the underlying probability distribution is unknown except for a limited set of its i.i.d. samples.
A standard BQO approach maximizes the Monte Carlo estimate of the true expected objective given the fixed sample set.
We propose a novel posterior sampling based algorithm, namely distributionally robust BQO (DRBQO) for this purpose.
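A loose illustration (not the DRBQO algorithm itself): score candidates by a variance-penalized Monte Carlo estimate, which approximates a small chi-square-ball worst case, and pick the next query by maximizing a single GP posterior draw in Thompson-sampling style. The objective, radius rho, and GP surrogate below are assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(6)
xs = np.linspace(0.0, 1.0, 50)[:, None]   # candidate decisions
xis = rng.standard_normal(20)             # the fixed i.i.d. sample set

def objective(x, xi):                     # hypothetical black-box objective
    return -(x - 0.6) ** 2 + 0.1 * (1 + x) * xi

vals = np.array([objective(x[0], xis) for x in xs])   # (50, 20) evaluations

# Standard BQO score: plain Monte Carlo average over the fixed sample set.
mc = vals.mean(axis=1)

# Crude distributionally robust surrogate: for a small chi-square ball of
# radius rho around the empirical distribution, the worst-case expectation
# is approximately mean - sqrt(2 * rho) * std (a variance-penalty expansion).
rho = 0.5
robust = mc - np.sqrt(2 * rho) * vals.std(axis=1)

# Posterior-sampling flavor: fit a GP surrogate to the robust scores and
# maximize one posterior draw (Thompson-style) rather than the mean.
gp = GaussianProcessRegressor().fit(xs, robust)
draw = gp.sample_y(xs, n_samples=1, random_state=0).ravel()
print("plain MC argmax: x =", xs[np.argmax(mc)][0])
print("next DR query:   x =", xs[np.argmax(draw)][0])
```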
arXiv Detail & Related papers (2020-01-19T12:00:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.