A sampling criterion for constrained Bayesian optimization with
uncertainties
- URL: http://arxiv.org/abs/2103.05706v4
- Date: Mon, 11 Dec 2023 17:58:48 GMT
- Title: A sampling criterion for constrained Bayesian optimization with
uncertainties
- Authors: Reda El Amri, Rodolphe Le Riche, Céline Helbert, Christophette
Blanchet-Scalliet, Sébastien Da Veiga
- Abstract summary: We consider the problem of chance constrained optimization where it is sought to optimize a function and satisfy constraints, both of which are affected by uncertainties.
To tackle such problems, we propose a new Bayesian optimization method.
It applies to the situation where the uncertainty comes from some of the inputs, so that it becomes possible to define an acquisition criterion in the joint controlled-uncontrolled input space.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We consider the problem of chance constrained optimization where it is sought
to optimize a function and satisfy constraints, both of which are affected by
uncertainties. Real-world instances of this problem are particularly
challenging because of their inherent computational cost.
To tackle such problems, we propose a new Bayesian optimization method. It
applies to the situation where the uncertainty comes from some of the inputs,
so that it becomes possible to define an acquisition criterion in the joint
controlled-uncontrolled input space. The main contribution of this work is an
acquisition criterion that accounts for both the average improvement in
objective function and the constraint reliability. The criterion is derived
following the Stepwise Uncertainty Reduction logic and its maximization
provides both optimal controlled and uncontrolled parameters. Analytical
expressions are given to efficiently calculate the criterion. Numerical studies
on test functions are presented. It is found through experimental comparisons
with alternative sampling criteria that the match between the sampling
criterion and the problem contributes to the efficiency of the overall
optimization. As a side result, an expression for the variance of the
improvement is given.
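To make the ingredients concrete, here is a minimal Python sketch of a feasibility-weighted expected improvement over the joint controlled-uncontrolled space. It is a simplified proxy for the paper's SUR criterion, not its analytical expression; `post_f`, `post_g` and the toy posteriors are hypothetical stand-ins for fitted Gaussian process surrogates.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical stand-ins for Gaussian process posteriors of the objective
# f(x, u) and the constraint g(x, u) (feasible when g <= 0), fitted over
# the joint controlled (x) / uncontrolled (u) input space.
def post_f(x, u):
    z = x + u
    return np.sin(3.0 * z), 0.10 + 0.05 * np.abs(z)  # posterior mean, std

def post_g(x, u):
    z = x - u
    return z ** 2 - 0.5, 0.10 + 0.05 * np.abs(z)     # posterior mean, std

def acquisition(x, u_samples, f_best, alpha=0.95):
    """Feasibility-weighted expected improvement with the uncontrolled
    input U marginalized by Monte Carlo; a crude proxy for the paper's
    SUR criterion."""
    mu_f, sd_f = post_f(x, u_samples)
    mu_g, sd_g = post_g(x, u_samples)
    m, s = mu_f.mean(), np.sqrt(np.mean(sd_f ** 2))  # objective averaged over U
    gamma = (f_best - m) / s
    ei = s * (gamma * norm.cdf(gamma) + norm.pdf(gamma))
    p_feas = norm.cdf(-mu_g / sd_g).mean()           # constraint reliability
    return ei * p_feas if p_feas >= alpha else 0.0

u_samples = np.random.default_rng(0).normal(size=256)
xs = np.linspace(-2.0, 2.0, 201)
x_next = xs[np.argmax([acquisition(x, u_samples, f_best=-0.8) for x in xs])]
print(x_next)
```

Note that this proxy only optimizes the controlled input x, whereas the paper's criterion is maximized jointly and also returns the next uncontrolled parameter.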
Related papers
- Trust-Region Sequential Quadratic Programming for Stochastic Optimization with Random Models [57.52124921268249]
We propose a Trust-Region Sequential Quadratic Programming method to find both first- and second-order stationary points.
To converge to first-order stationary points, our method computes a gradient step in each iteration, defined by minimizing a quadratic approximation of the objective subject to linearized constraints.
To converge to second-order stationary points, our method additionally computes an eigen step to explore the negative curvature of the reduced Hessian matrix.
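As a loose illustration of the two step types, here is a sketch in Python; the names, the trust-region handling, and the descent-orientation rule are mine, and the constraint linearization and random models of the actual method are omitted.

```python
import numpy as np

def tr_steps(grad, hess, radius):
    """Illustrative gradient step and eigen step inside a trust region.
    `grad`/`hess` stand for the (estimated) gradient and reduced Hessian;
    the stochastic models and constraints of the paper are omitted."""
    # Gradient step: steepest descent, scaled to stay within the radius.
    g_norm = np.linalg.norm(grad)
    d_grad = -(radius / max(g_norm, radius)) * grad
    # Eigen step: direction of most negative curvature, if one exists.
    eigval, eigvec = np.linalg.eigh(hess)
    d_eig = None
    if eigval[0] < 0.0:
        v = eigvec[:, 0]
        d_eig = -radius * np.sign(v @ grad + 1e-12) * v  # descent orientation
    return d_grad, d_eig
```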
arXiv Detail & Related papers (2024-09-24T04:39:47Z) - An Inexact Halpern Iteration with Application to Distributionally Robust
Optimization [9.529117276663431]
We investigate the inexact variants of the scheme in both deterministic and stochastic settings.
We show that by choosing the inexactness appropriately, the inexact schemes admit an $O(k^{-1})$ convergence rate in terms of the (expected) residue norm.
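A minimal sketch of an inexact Halpern iteration for a fixed-point operator T follows; the anchoring weights beta_k = 1/(k+2) are the classical choice, and the inexactness model is an illustrative assumption.

```python
import numpy as np

def inexact_halpern(T, x0, iters=200, noise=lambda k: (k + 1.0) ** -2):
    """Halpern iteration x_{k+1} = beta_k x_0 + (1 - beta_k) T(x_k) with
    beta_k = 1/(k+2); `noise` models an inexact evaluation of T whose
    error shrinks fast enough, the kind of condition under which the
    O(1/k) residue rate survives."""
    x0 = np.asarray(x0, dtype=float)
    x = x0.copy()
    rng = np.random.default_rng(0)
    for k in range(iters):
        beta = 1.0 / (k + 2.0)
        Tx = T(x) + noise(k) * rng.standard_normal(x.shape)  # inexact T(x)
        x = beta * x0 + (1.0 - beta) * Tx
    return x

T = lambda x: 0.5 * x + 1.0             # contraction with fixed point x* = 2
print(inexact_halpern(T, np.zeros(1)))  # approaches [2.]
```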
arXiv Detail & Related papers (2024-02-08T20:12:47Z) - Optimizing Chance-Constrained Submodular Problems with Variable
Uncertainties [12.095075636344536]
We study chance-constrained submodular optimization problems, which capture a wide range of optimization problems with stochastic constraints.
We present greedy algorithms that can obtain a high-quality solution, i.e., a constant approximation ratio with respect to the optimal solution.
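As a rough illustration of this problem class, the sketch below greedily adds items under a chance constraint on stochastic weights, using a normal surrogate for the constraint; the function names and the surrogate are assumptions, not the paper's algorithms.

```python
import numpy as np
from scipy.stats import norm

def greedy_cc(f, n, mu_w, var_w, budget, alpha=0.1):
    """Greedily add the item with the best marginal gain f(S+i) - f(S)
    while keeping Pr(total weight > budget) <= alpha, approximated by
    mean + z * std <= budget (normal surrogate). Illustrative only."""
    z = norm.ppf(1.0 - alpha)
    S, mu_tot, var_tot = [], 0.0, 0.0
    remaining = set(range(n))
    while remaining:
        feas = [i for i in remaining
                if mu_tot + mu_w[i] + z * np.sqrt(var_tot + var_w[i]) <= budget]
        if not feas:
            break
        best = max(feas, key=lambda i: f(S + [i]) - f(S))
        S.append(best)
        mu_tot += mu_w[best]
        var_tot += var_w[best]
        remaining.remove(best)
    return S

# Toy coverage-style submodular function over 5 items.
cover = [{0, 1}, {1, 2}, {2, 3}, {3, 4}, {0, 4}]
f = lambda S: len(set().union(*(cover[i] for i in S))) if S else 0
print(greedy_cc(f, 5, mu_w=[1.0] * 5, var_w=[0.2] * 5, budget=2.5))
```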
arXiv Detail & Related papers (2023-09-23T04:48:49Z) - Margin theory for the scenario-based approach to robust optimization in
high dimension [0.0]
This paper deals with the scenario approach to robust optimization.
This relies on a random sampling of the possibly infinite number of constraints induced by uncertainties in a problem.
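The core mechanism in a minimal sketch: sample N realizations of the uncertain constraint and solve the resulting deterministic program. The toy LP data below is made up for illustration.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
N = 200                                       # number of sampled scenarios
A = 1.0 + 0.2 * rng.standard_normal((N, 2))   # sampled constraint rows a(delta)
# Uncertain constraint a(delta)^T x <= 1 replaced by its N sampled instances:
res = linprog(c=[-1.0, -1.0],                 # maximize x1 + x2
              A_ub=A, b_ub=np.ones(N),
              bounds=[(0.0, None), (0.0, None)])
print(res.x)  # solution feasible for every sampled scenario
```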
arXiv Detail & Related papers (2023-03-07T13:33:46Z) - Fully Stochastic Trust-Region Sequential Quadratic Programming for
Equality-Constrained Optimization Problems [62.83783246648714]
We propose a trust-region stochastic sequential quadratic programming algorithm (TR-StoSQP) to solve nonlinear optimization problems with stochastic objectives and deterministic equality constraints.
The algorithm adaptively selects the trust-region radius and, compared to the existing line-search StoSQP schemes, allows us to utilize indefinite Hessian matrices.
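A rough sketch of a single trust-region SQP step for equality-constrained problems follows; solving the KKT system directly is what tolerates an indefinite Hessian, while the paper's adaptive radius rule is not reproduced here.

```python
import numpy as np

def tr_stosqp_step(grad_est, hess, jac_c, c_val, radius):
    """One step for min f(x) s.t. c(x) = 0 from a stochastic gradient
    estimate: solve the KKT system for the SQP direction, then scale it
    into the trust region. Illustrative, not the paper's exact scheme."""
    n, m = hess.shape[0], jac_c.shape[0]
    kkt = np.block([[hess, jac_c.T], [jac_c, np.zeros((m, m))]])
    rhs = -np.concatenate([grad_est, c_val])
    d = np.linalg.solve(kkt, rhs)[:n]            # primal component
    nd = np.linalg.norm(d)
    return d if nd <= radius else (radius / nd) * d

# Toy problem: min x1^2 + x2^2  s.t.  x1 + x2 - 1 = 0, evaluated at x = (1, 1).
step = tr_stosqp_step(np.array([2.0, 2.0]), 2.0 * np.eye(2),
                      np.array([[1.0, 1.0]]), np.array([1.0]), radius=0.5)
print(step)
```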
arXiv Detail & Related papers (2022-11-29T05:52:17Z) - Learning to Optimize with Stochastic Dominance Constraints [103.26714928625582]
In this paper, we develop a simple yet efficient approach for the problem of comparing uncertain quantities.
We recast the inner optimization in the Lagrangian as a learning problem for surrogate approximation, which bypasses the apparent intractability.
The proposed light-SD demonstrates superior performance on several representative problems ranging from finance to supply chain management.
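For context, second-order stochastic dominance of X over Y requires E[(t - X)_+] <= E[(t - Y)_+] for all thresholds t. The sketch below merely checks this condition empirically by Monte Carlo; it is not the paper's light-SD method.

```python
import numpy as np

def ssd_violation(x, y, grid):
    """Largest empirical violation of second-order stochastic dominance
    X >= Y over the thresholds in `grid` (0.0 means it holds on the grid)."""
    gaps = [np.mean(np.maximum(t - x, 0.0)) - np.mean(np.maximum(t - y, 0.0))
            for t in grid]
    return max(0.0, max(gaps))

rng = np.random.default_rng(0)
x, y = rng.normal(1.0, 1.0, 5000), rng.normal(0.0, 1.0, 5000)
print(ssd_violation(x, y, np.linspace(-3.0, 4.0, 50)))  # ~0: X dominates Y
```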
arXiv Detail & Related papers (2022-11-14T21:54:31Z) - Generalizing Bayesian Optimization with Decision-theoretic Entropies [102.82152945324381]
We consider a generalization of Shannon entropy from work in statistical decision theory.
We first show that special cases of this entropy lead to popular acquisition functions used in BO procedures.
We then show how alternative choices for the loss yield a flexible family of acquisition functions.
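The entropy in question is H_l(p) = inf_a E_p[l(a, theta)], the least achievable expected loss under the posterior. A tiny Monte Carlo sketch with a made-up posterior shows how the loss choice shapes the quantity; with squared loss it recovers the posterior variance.

```python
import numpy as np

def decision_entropy(theta_samples, loss, actions):
    """H_l(p) = min over actions of the expected loss under the posterior,
    estimated from samples. Different losses yield different entropies,
    and hence different acquisition functions."""
    return min(np.mean(loss(a, theta_samples)) for a in actions)

post = np.random.default_rng(0).normal(1.0, 0.5, size=4000)  # toy posterior
sq_loss = lambda a, th: (a - th) ** 2
actions = np.linspace(-1.0, 3.0, 161)
print(decision_entropy(post, sq_loss, actions))  # ~ posterior variance 0.25
```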
arXiv Detail & Related papers (2022-10-04T04:43:58Z) - Integrated Conditional Estimation-Optimization [6.037383467521294]
Many real-world optimization problems involve uncertain parameters whose probability distributions can be estimated using contextual feature information.
In contrast to the standard approach of first estimating the distribution of uncertain parameters and then optimizing, we propose an integrated conditional estimation-optimization (ICEO) approach.
We show that our ICEO approach is theoretically consistent under moderate conditions.
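A toy sketch of the integrated idea on a newsvendor-style problem: the context-to-decision map is trained against the downstream decision cost directly, rather than through an intermediate distribution estimate. All data, the linear policy, and the random search are illustrative stand-ins for the paper's procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
z = rng.uniform(0.0, 1.0, 500)             # contextual features
d = 2.0 * z + rng.normal(0.0, 0.2, 500)    # uncertain demand

def decision_cost(w):
    """Newsvendor cost of the linear ordering policy order = w0*z + w1."""
    order = w[0] * z + w[1]
    return np.mean(3.0 * np.maximum(d - order, 0.0)     # underage cost
                   + 1.0 * np.maximum(order - d, 0.0))  # overage cost

# Crude random search stands in for the paper's training procedure.
cands = rng.uniform(-1.0, 3.0, size=(4000, 2))
w = min(cands, key=decision_cost)
print(w, decision_cost(w))
```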
arXiv Detail & Related papers (2021-10-24T04:49:35Z) - Outlier-Robust Sparse Estimation via Non-Convex Optimization [73.18654719887205]
We explore the connection between outlier-robust high-dimensional statistics and non-convex optimization in the presence of sparsity constraints.
We develop novel and simple optimization formulations for these problems.
As a corollary, we obtain that any first-order method that efficiently converges to stationarity yields an efficient algorithm for these tasks.
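In that spirit, here is a first-order sketch for sparse mean estimation with outliers: gradient steps on a trimmed squared loss followed by hard thresholding. The trimming and thresholding rules are generic choices, not the paper's formulation.

```python
import numpy as np

def robust_sparse_mean(X, k, iters=200, lr=0.2, trim=0.1):
    """Estimate a k-sparse mean from samples X (rows), some of which are
    outliers: trim the samples with the largest residuals, take a gradient
    step on the remaining squared loss, then hard-threshold to k entries."""
    n, d = X.shape
    mu = np.zeros(d)
    keep_n = n - int(trim * n)
    for _ in range(iters):
        resid = mu - X                              # per-sample gradients
        keep = np.argsort(np.linalg.norm(resid, axis=1))[:keep_n]
        mu = mu - lr * resid[keep].mean(axis=0)
        mu[np.argsort(np.abs(mu))[:d - k]] = 0.0    # keep top-k entries
    return mu

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, (300, 20)); X[:, 0] += 5.0  # true mean is 5 * e_1
X[:15] += 50.0                                       # 5% gross outliers
print(robust_sparse_mean(X, k=1)[:3])                # ~[5, 0, 0]
```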
arXiv Detail & Related papers (2021-09-23T17:38:24Z) - Implicit Rate-Constrained Optimization of Non-decomposable Objectives [37.43791617018009]
We consider a family of constrained optimization problems arising in machine learning.
Our key idea is to formulate a rate-constrained optimization problem that expresses the threshold parameter as a function of the model parameters.
We show how the resulting optimization problem can be solved using standard gradient based methods.
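The key device, sketched below: the decision threshold is defined implicitly as a quantile of the negative-class scores, hence as a function of the model parameters, and one then maximizes the true-positive rate at a fixed false-positive rate. The toy data, linear scorer, and zeroth-order search are stand-ins for the paper's gradient-based treatment.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 2))
y = (X @ np.array([1.0, -0.5]) + rng.normal(0.0, 1.0, 400)) > 0.0

def tpr_at_fpr(w, fpr=0.1):
    """TPR of a linear scorer when the threshold is the (1 - fpr) quantile
    of negative-class scores, making it an implicit function of w."""
    s = X @ w
    t = np.quantile(s[~y], 1.0 - fpr)     # threshold implied by w
    return np.mean(s[y] > t)

# Zeroth-order hill climbing (the paper differentiates through t(w) instead).
w = np.array([0.0, 1.0])
for _ in range(300):
    trial = w + 0.05 * rng.normal(size=2)
    if tpr_at_fpr(trial) > tpr_at_fpr(w):
        w = trial
print(w, tpr_at_fpr(w))
```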
arXiv Detail & Related papers (2021-07-23T00:04:39Z) - High Probability Complexity Bounds for Non-Smooth Stochastic Optimization with Heavy-Tailed Noise [51.31435087414348]
It is essential to theoretically guarantee that algorithms provide small objective residual with high probability.
Existing methods for non-smooth stochastic convex optimization have complexity bounds with an undesirable dependence on the confidence level.
We propose novel stepsize rules for two methods with gradient clipping.
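A minimal sketch of the clipping mechanism itself; the constant stepsize below is a placeholder, since the paper's contribution is precisely the stepsize rules, which are not reproduced here.

```python
import numpy as np

def clipped_sgd(grad_fn, x0, steps=2000, lr=0.01, clip=1.0, seed=0):
    """SGD where each stochastic gradient is rescaled to norm <= clip
    before the update; clipping is the device that enables
    high-probability bounds under heavy-tailed noise."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(steps):
        g = grad_fn(x, rng)
        norm_g = np.linalg.norm(g)
        if norm_g > clip:
            g = (clip / norm_g) * g
        x -= lr * g
    return x

# Gradient of 0.5 * ||x||^2 corrupted by heavy-tailed (Cauchy) noise.
grad = lambda x, rng: x + rng.standard_cauchy(x.shape)
print(clipped_sgd(grad, np.ones(3)))  # near the minimizer 0 despite the noise
```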
arXiv Detail & Related papers (2021-06-10T17:54:21Z)