Margin theory for the scenario-based approach to robust optimization in high dimension
- URL: http://arxiv.org/abs/2303.03891v1
- Date: Tue, 7 Mar 2023 13:33:46 GMT
- Title: Margin theory for the scenario-based approach to robust optimization in high dimension
- Authors: Fabien Lauer (ABC)
- Abstract summary: This paper deals with the scenario approach to robust optimization.
This relies on a random sampling of the possibly infinite number of constraints induced by uncertainties in a problem.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper deals with the scenario approach to robust optimization. This
relies on a random sampling of the possibly infinite number of constraints
induced by uncertainties in the parameters of an optimization problem. Solving
the resulting random program yields a solution whose quality is measured by
the probability of violating the constraints for a random value of the
uncertainties, typically one unseen before. Another central issue is
the determination of the sample complexity, i.e., the number of random
constraints (or scenarios) that one must consider in order to guarantee a
certain level of reliability. In this paper, we introduce the notion of margin
to improve upon standard results in this field. In particular, using tools from
statistical learning theory, we show that the sample complexity of a class of
random programs does not explicitly depend on the number of variables. In
addition, within the considered class, which includes polynomial constraints
among others, this result holds for both convex and nonconvex instances with
the same level of guarantees. We also derive a posteriori bounds on the
probability of violation and sketch a regularization approach that could be
used to improve the reliability of computed solutions on the basis of these
bounds.
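To make the setting concrete, here is a minimal sketch of the scenario approach on a toy linear program, with a hypothetical Gaussian model for the uncertain constraint coefficients and an a posteriori violation estimate on fresh samples; this illustrates the general pipeline, not the paper's margin-based analysis.

```python
# A minimal sketch of the scenario approach, assuming a hypothetical
# uncertainty model: each scenario delta yields one linear constraint
# a(delta)^T x <= 1 with a(delta) = 1 + 0.3 * delta, delta ~ N(0, I).
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
d, n_scenarios, n_test = 5, 200, 10_000

def sample_constraints(size):
    return 1.0 + 0.3 * rng.standard_normal((size, d))

# Scenario program: maximize sum(x) subject to all sampled constraints.
A = sample_constraints(n_scenarios)
res = linprog(c=-np.ones(d), A_ub=A, b_ub=np.ones(n_scenarios),
              bounds=[(0, None)] * d)
x_star = res.x

# A posteriori estimate of the probability of violation on fresh scenarios.
A_test = sample_constraints(n_test)
violation_rate = np.mean(A_test @ x_star > 1.0)
print(f"objective = {-res.fun:.3f}, estimated violation prob = {violation_rate:.4f}")
```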
Related papers
- Generalization Bounds of Surrogate Policies for Combinatorial Optimization Problems
A recent stream of structured learning approaches has improved the practical state of the art for a range of optimization problems.
The key idea is to exploit the statistical distribution over instances instead of dealing with instances separately.
In this article, we investigate methods that smooth the risk by perturbing the policy, which eases optimization and improves generalization.
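As a rough illustration of the perturb-to-smooth idea (not the paper's construction), the sketch below replaces a piecewise-constant loss F, whose gradient is zero almost everywhere, by its Gaussian smoothing E[F(w + eps*Z)] and descends a Monte Carlo score-function gradient; the loss and all parameters are made up.

```python
import numpy as np

rng = np.random.default_rng(6)

def F(w):
    # Hypothetical piecewise-constant "combinatorial" loss: its gradient is
    # zero almost everywhere, so plain gradient descent cannot move.
    return np.sum(np.floor(np.abs(w - 0.5) * 4.0), axis=-1)

def smoothed_grad(w, eps=0.3, n=500):
    # Score-function estimator of grad E_Z[F(w + eps Z)], Z ~ N(0, I);
    # subtracting F(w) as a baseline reduces variance without adding bias.
    z = rng.standard_normal((n, w.size))
    return ((F(w + eps * z) - F(w))[:, None] * z).mean(axis=0) / eps

w = np.full(5, 2.0)
for _ in range(200):
    w -= 0.05 * smoothed_grad(w)   # descend the smoothed risk

print(f"final w = {np.round(w, 2)}, loss F(w) = {F(w)}")
```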
arXiv Detail & Related papers (2024-07-24T12:00:30Z)
- Likelihood Ratio Confidence Sets for Sequential Decision Making
We revisit the likelihood-based inference principle and propose to use likelihood ratios to construct valid confidence sequences.
Our method is especially suitable for problems with well-specified likelihoods.
We show how to provably choose the best sequence of estimators and shed light on connections to online convex optimization.
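A minimal sketch of the likelihood-ratio idea for Gaussian data with known unit variance, using a predictable plug-in alternative (a standard construction; the paper's choice of estimator sequence is more refined): by Ville's inequality, the set {theta : R_t(theta) < 1/alpha} covers the true mean at all times with probability at least 1 - alpha.

```python
import numpy as np

rng = np.random.default_rng(1)
true_mean, alpha, T = 0.7, 0.05, 500
grid = np.linspace(-2, 3, 1001)          # candidate values of theta

log_ratio = np.zeros_like(grid)          # log R_t(theta) on the grid
running_sum, n = 0.0, 0
for t in range(1, T + 1):
    plug_in = running_sum / n if n > 0 else 0.0   # predictable estimate
    x = true_mean + rng.standard_normal()
    # log-likelihood of x under the plug-in minus under each candidate theta
    log_ratio += -(x - plug_in) ** 2 / 2 + (x - grid) ** 2 / 2
    running_sum += x
    n += 1

conf_set = grid[log_ratio < np.log(1 / alpha)]
print(f"t={T}: confidence interval ~ [{conf_set.min():.3f}, {conf_set.max():.3f}]")
```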
arXiv Detail & Related papers (2023-11-08T00:10:21Z)
- Online Constraint Tightening in Stochastic Model Predictive Control: A Regression Approach
No analytical solutions exist for chance-constrained optimal control problems.
We propose a data-driven approach for learning the constraint-tightening parameters online during control.
Our approach yields constraint-tightening parameters that tightly satisfy the chance constraints.
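The paper learns the tightening parameters by regression; as a much simpler stand-in, the sketch below tunes a single tightening parameter gamma with a Robbins-Monro update so that the observed violation frequency matches the target risk level (the closed-loop model is a toy).

```python
import numpy as np

rng = np.random.default_rng(2)
eps, gamma, steps = 0.1, 0.0, 20_000   # target violation prob, init, horizon
tail = []

for t in range(1, steps + 1):
    # Toy closed loop: the controller pushes the output to the tightened
    # bound 1 - gamma; the realized output adds N(0, 0.25) noise.
    violated = (1.0 - gamma) + 0.5 * rng.standard_normal() > 1.0
    gamma = max(0.0, gamma + 0.5 / np.sqrt(t) * (float(violated) - eps))
    if t > steps // 2:
        tail.append(gamma)             # Polyak-style tail averaging

print(f"learned tightening: {np.mean(tail):.3f} "
      f"(exact for this toy model: {0.5 * 1.2816:.3f})")
```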
arXiv Detail & Related papers (2023-10-04T16:22:02Z)
- Optimizing Chance-Constrained Submodular Problems with Variable Uncertainties
We study chance-constrained submodular optimization problems, which capture a wide range of optimization problems subject to stochastic constraints.
We present greedy algorithms that can obtain a high-quality solution, i.e., a constant-factor approximation of the optimal solution.
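A minimal sketch of such a greedy algorithm, with the chance constraint on uncertain item weights replaced by a one-sided Chebyshev (Cantelli) surrogate mean + lambda*std <= budget; the coverage objective and the weight distributions are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
n, budget, alpha = 30, 5.0, 0.1
lam = np.sqrt((1 - alpha) / alpha)    # Cantelli: P[w(S) > mean + lam*std] <= alpha
mean_w = rng.uniform(0.2, 1.0, n)     # expected item weights
var_w = rng.uniform(0.0, 0.05, n)     # per-item weight variances
cover = [set(rng.choice(100, size=8, replace=False)) for _ in range(n)]

def f(S):                             # monotone submodular: set coverage
    return len(set().union(*(cover[i] for i in S))) if S else 0

S = []
while True:
    best, best_gain = None, 0.0
    for i in range(n):
        if i in S:
            continue
        T = S + [i]
        # Surrogate chance constraint: independent weights, so variances add.
        if mean_w[T].sum() + lam * np.sqrt(var_w[T].sum()) > budget:
            continue
        gain = f(T) - f(S)
        if gain > best_gain:
            best, best_gain = i, gain
    if best is None:
        break
    S.append(best)

print(f"selected {len(S)} items, coverage value f(S) = {f(S)}")
```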
arXiv Detail & Related papers (2023-09-23T04:48:49Z)
- Optimal Learning via Moderate Deviations Theory
We develop a systematic construction of highly accurate confidence intervals by using a moderate deviation principle-based approach.
It is shown that the proposed confidence intervals are statistically optimal in the sense that they satisfy criteria regarding exponential accuracy, minimality, consistency, mischaracterization probability, and the eventual uniformly most accurate (UMA) property.
arXiv Detail & Related papers (2023-05-23T19:57:57Z)
- Fully Stochastic Trust-Region Sequential Quadratic Programming for Equality-Constrained Optimization Problems
We propose a trust-region stochastic sequential quadratic programming algorithm (TR-StoSQP) to solve nonlinear optimization problems with stochastic objectives and deterministic equality constraints.
The algorithm adaptively selects the trust-region radius and, compared to existing line-search StoSQP schemes, allows the use of indefinite Hessian matrices.
arXiv Detail & Related papers (2022-11-29T05:52:17Z)
- Bayesian sequential design of computer experiments for quantile set inversion
We consider an unknown multivariate function representing a system, such as a complex numerical simulator.
Our objective is to estimate the set of deterministic inputs leading to outputs whose probability of falling in a given critical region is less than a given threshold.
arXiv Detail & Related papers (2022-11-02T10:14:05Z)
- Instance-Dependent Confidence and Early Stopping for Reinforcement Learning
Various algorithms for reinforcement learning (RL) exhibit dramatic variation in their convergence rates as a function of problem structure.
This research provides guarantees that explain ex post the performance differences observed.
A natural next step is to convert these theoretical guarantees into guidelines that are useful in practice.
arXiv Detail & Related papers (2022-01-21T04:25:35Z)
- High Probability Complexity Bounds for Non-Smooth Stochastic Optimization with Heavy-Tailed Noise
It is essential to theoretically guarantee that algorithms provide a small objective residual with high probability.
Existing methods for non-smooth convex optimization have complexity bounds that depend on the confidence level.
We propose novel stepsize rules for two methods with gradient clipping.
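For flavor, a minimal sketch of clipped stochastic subgradient descent under heavy-tailed Student-t noise; the clipping level and the O(1/sqrt(t)) step size are illustrative placeholders, not the stepsize rules proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(4)
d, T, clip_level = 10, 20_000, 1.0
x, tail_avg, count = np.zeros(d), np.zeros(d), 0

def noisy_subgradient(x):
    # Subgradient of f(x) = ||x - 1||_1 corrupted by heavy-tailed noise:
    # Student-t with 2 degrees of freedom has infinite variance.
    return np.sign(x - 1.0) + rng.standard_t(df=2, size=d)

for t in range(1, T + 1):
    g = noisy_subgradient(x)
    norm = np.linalg.norm(g)
    if norm > clip_level:              # clipping bounds each update, which is
        g = g * (clip_level / norm)    # what yields high-probability guarantees
    x = x - g / np.sqrt(t)             # illustrative O(1/sqrt(t)) step size
    if t > T // 2:                     # average the tail of the trajectory
        count += 1
        tail_avg += (x - tail_avg) / count

print(f"distance of averaged iterate from optimum: {np.linalg.norm(tail_avg - 1.0):.3f}")
```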
arXiv Detail & Related papers (2021-06-10T17:54:21Z)
- A sampling criterion for constrained Bayesian optimization with uncertainties
We consider the problem of chance-constrained optimization, where one seeks to optimize a function and satisfy constraints, both of which are affected by uncertainties.
To tackle such problems, we propose a new Bayesian optimization method.
It applies to the situation where the uncertainty comes from some of the inputs, so that it becomes possible to define an acquisition criterion in the joint controlled-uncontrolled input space.
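A crude Monte Carlo stand-in for that joint-space idea (with no Gaussian process surrogate, unlike the paper): for each candidate design x, samples of the uncertain input u give estimates of both the expected objective and the probability of satisfying the constraint, and the best reliably feasible design is kept. All functions below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(5)

def objective(x, u):   return (x - 0.3) ** 2 + 0.5 * u * x
def constraint(x, u):  return x + u - 1.0          # feasible when <= 0

candidates = np.linspace(0.0, 1.0, 101)            # controlled input x
u_samples = 0.4 * rng.standard_normal(2000)        # uncontrolled input u
target_reliability = 0.9

best_x, best_val = None, np.inf
for x in candidates:
    feas_prob = np.mean(constraint(x, u_samples) <= 0.0)
    if feas_prob < target_reliability:
        continue                                   # chance constraint fails
    val = np.mean(objective(x, u_samples))         # expected objective
    if val < best_val:
        best_x, best_val = x, val

print(f"best reliable design x = {best_x:.2f}, expected objective = {best_val:.4f}")
```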
arXiv Detail & Related papers (2021-03-09T20:35:56Z)