Upper Trust Bound Feasibility Criterion for Mixed Constrained Bayesian
Optimization with Application to Aircraft Design
- URL: http://arxiv.org/abs/2005.05067v2
- Date: Tue, 12 May 2020 08:59:51 GMT
- Title: Upper Trust Bound Feasibility Criterion for Mixed Constrained Bayesian
Optimization with Application to Aircraft Design
- Authors: R\'emy Priem ((1) and (2)), Nathalie Bartoli (1), Youssef Diouane (2),
  Alessandro Sgueglia ((1) and (2)) ((1) ONERA, DTIS, Universit\'e de
  Toulouse, Toulouse, France, (2) ISAE-SUPAERO, Universit\'e de Toulouse,
  Toulouse, 31055 Cedex 4, France)
- Abstract summary: We adapt the so-called super efficient global optimization algorithm to solve mixed constrained problems more accurately.
We show the good potential of the approach on a set of numerical experiments.
- Score: 41.74498230885008
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Bayesian optimization methods have been successfully applied to black box
optimization problems that are expensive to evaluate. In this paper, we adapt
the so-called super efficient global optimization algorithm to solve mixed
constrained problems more accurately. The proposed approach handles
constraints by means of an upper trust bound, which encourages exploration of
the feasible domain by combining the mean prediction and the associated
uncertainty given by the Gaussian processes. On top of that, a
refinement procedure, based on a learning rate criterion, is introduced to
enhance the exploitation and exploration trade-off. We show the good potential
of the approach on a set of numerical experiments. Finally, we present an
application to a conceptual aircraft configuration, on which we show the
superiority of the proposed approach compared to a set of state-of-the-art
black box optimization solvers. Keywords: Global Optimization, Mixed
Constrained Optimization, Black Box Optimization, Bayesian Optimization,
Gaussian Process.
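
As a rough illustration of how a bound-based feasibility test of this kind can be wired up, here is a minimal sketch. It assumes inequality constraints of the form c_i(x) <= 0, scikit-learn GP surrogates, and an illustrative kappa parameter; the exact bound, its sign convention, and the paper's learning-rate refinement procedure are not reproduced here.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern


def fit_constraint_gps(X, C):
    """Fit one GP surrogate per constraint column of C (n_points x n_constraints)."""
    gps = []
    for i in range(C.shape[1]):
        gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
        gp.fit(X, C[:, i])
        gps.append(gp)
    return gps


def utb_feasible(gps, X_cand, kappa=2.0):
    """Boolean mask over candidates: a point stays in the trusted feasible
    set as long as its trust interval [mu - kappa*sigma, mu + kappa*sigma]
    still reaches below zero, i.e. mu(x) - kappa*sigma(x) <= 0. This
    optimistic relaxation keeps uncertain near-boundary points available
    for exploration; kappa controls how aggressive the relaxation is."""
    mask = np.ones(len(X_cand), dtype=bool)
    for gp in gps:
        mu, sigma = gp.predict(X_cand, return_std=True)
        mask &= (mu - kappa * sigma) <= 0.0
    return mask
```

In this reading, the GP mean and standard deviation are combined into a trust bound per constraint, so that points whose surrogate uncertainty still admits feasibility are not discarded prematurely, which matches the abstract's emphasis on encouraging exploration of the feasible domain.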
Related papers
- A Novel Unified Parametric Assumption for Nonconvex Optimization [53.943470475510196]
Nonconvex optimization is central to machine learning, but the general framework of nonconvexity yields convergence guarantees that are too weak and pessimistic compared with practice.
We introduce a novel unified parametric assumption for nonconvex optimization algorithms.
arXiv Detail & Related papers (2025-02-17T21:25:31Z) - Indirect Query Bayesian Optimization with Integrated Feedback [17.66813850517961]
We develop a new class of Bayesian optimization problems where integrated feedback is given via a conditional expectation of the unknown function $f$ to be optimized.
The goal is to find the global optimum of $f$ by adaptively querying and observing in the space transformed by the conditional distribution.
This is motivated by real-world applications where one cannot access direct feedback due to privacy, hardware or computational constraints.
arXiv Detail & Related papers (2024-12-18T07:20:33Z) - Principled Preferential Bayesian Optimization [22.269732173306192]
We study the problem of preferential Bayesian optimization (BO), where we aim to optimize a black-box function with only preference feedback over pairs of candidate solutions.
An optimistic algorithm with an efficient computational method is then developed to solve the problem.
arXiv Detail & Related papers (2024-02-08T02:57:47Z) - Enhancing Gaussian Process Surrogates for Optimization and Posterior Approximation via Random Exploration [2.984929040246293]
We propose novel noise-free Bayesian optimization strategies that rely on a random exploration step to enhance the accuracy of Gaussian process surrogate models.
The new algorithms retain the ease of implementation of the classical GP-UCB, while the additional exploration step facilitates their convergence.
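As a loose illustration of the idea in this entry, the sketch below runs a standard GP-UCB loop (lower confidence bound for minimization) with a periodic uniform-random query standing in for the random exploration step. The schedule, beta parameter, and candidate-set acquisition are assumptions, not the paper's exact algorithm.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)


def lcb(gp, X, beta=2.0):
    # Lower confidence bound for minimization: mu - beta * sigma.
    mu, sigma = gp.predict(X, return_std=True)
    return mu - beta * sigma


def ucb_with_random_exploration(f, bounds, n_iter=30, explore_every=5):
    """Illustrative GP-UCB loop with a periodic uniform-random query
    (the 'random exploration step'). bounds has shape (dim, 2)."""
    dim = bounds.shape[0]
    X = rng.uniform(bounds[:, 0], bounds[:, 1], size=(3, dim))  # initial design
    y = np.array([f(x) for x in X])
    for t in range(n_iter):
        gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
        gp.fit(X, y)
        if (t + 1) % explore_every == 0:
            # Random exploration step: query a uniform point in the box.
            x_next = rng.uniform(bounds[:, 0], bounds[:, 1])
        else:
            # Standard confidence-bound acquisition over a random candidate set.
            cand = rng.uniform(bounds[:, 0], bounds[:, 1], size=(512, dim))
            x_next = cand[np.argmin(lcb(gp, cand))]
        X = np.vstack([X, x_next])
        y = np.append(y, f(x_next))
    return X[np.argmin(y)], y.min()
```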
arXiv Detail & Related papers (2024-01-30T14:16:06Z) - Generalizing Bayesian Optimization with Decision-theoretic Entropies [102.82152945324381]
We consider a generalization of Shannon entropy from work in statistical decision theory.
We first show that special cases of this entropy lead to popular acquisition functions used in BO procedures.
We then show how alternative choices for the loss yield a flexible family of acquisition functions.
arXiv Detail & Related papers (2022-10-04T04:43:58Z) - Optimizer Amalgamation [124.33523126363728]
We are motivated to study a new problem named Optimizer Amalgamation: how can we best combine a pool of "teacher" optimizers into a single "student" optimizer with stronger problem-specific performance?
First, we define three differentiable amalgamation mechanisms to combine a pool of analytical optimizers by gradient descent.
To reduce the variance of the amalgamation process, we also explore methods to stabilize it by perturbing the amalgamation target.
arXiv Detail & Related papers (2022-03-12T16:07:57Z) - Bayesian Optimisation for Constrained Problems [0.0]
We propose a novel variant of the well-known Knowledge Gradient acquisition function that allows it to handle constraints.
We empirically compare the new algorithm with four other state-of-the-art constrained Bayesian optimisation algorithms and demonstrate its superior performance.
arXiv Detail & Related papers (2021-05-27T15:43:09Z) - Global Optimization of Gaussian processes [52.77024349608834]
We propose a reduced-space formulation for globally optimizing Gaussian processes trained on few data points.
The approach also leads to significantly smaller and computationally cheaper subproblems for lower bounding.
In total, the proposed method reduces convergence times by orders of magnitude.
arXiv Detail & Related papers (2020-05-21T20:59:11Z) - Incorporating Expert Prior Knowledge into Experimental Design via
Posterior Sampling [58.56638141701966]
Experimenters can often acquire prior knowledge about the location of the global optimum.
However, it is unclear how to incorporate such expert prior knowledge into Bayesian optimization.
We propose an efficient Bayesian optimization approach based on sampling from the posterior distribution of the global optimum.
arXiv Detail & Related papers (2020-02-26T01:57:36Z) - Scalable Constrained Bayesian Optimization [10.820024633762596]
The global optimization of a high-dimensional black-box function under black-box constraints is a pervasive task in machine learning, control, and the scientific community.
We propose the scalable constrained Bayesian optimization (SCBO) algorithm that overcomes the above challenges and pushes the state of the art.
arXiv Detail & Related papers (2020-02-20T01:48:46Z)