Bayesian multi-objective optimization for stochastic simulators: an
extension of the Pareto Active Learning method
- URL: http://arxiv.org/abs/2207.03842v1
- Date: Fri, 8 Jul 2022 11:51:48 GMT
- Title: Bayesian multi-objective optimization for stochastic simulators: an
extension of the Pareto Active Learning method
- Authors: Bruno Barracosa (L2S, GdR MASCOT-NUM), Julien Bect (L2S, GdR
MASCOT-NUM), Héloïse Dutrieux Baraffe, Juliette Morin, Josselin Fournel,
Emmanuel Vazquez (L2S, GdR MASCOT-NUM)
- Abstract summary: This article focuses on the multi-objective optimization of simulators with high output variance.
We rely on Bayesian optimization algorithms to make predictions about the functions to be optimized.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This article focuses on the multi-objective optimization of stochastic
simulators with high output variance, where the input space is finite and the
objective functions are expensive to evaluate. We rely on Bayesian optimization
algorithms, which use probabilistic models to make predictions about the
functions to be optimized. The proposed approach is an extension of the Pareto
Active Learning (PAL) algorithm for the estimation of Pareto-optimal solutions
that makes it suitable for the stochastic setting. We named it Pareto Active
Learning for Stochastic Simulators (PALS). The performance of PALS is assessed
through numerical experiments over a set of bi-dimensional, bi-objective test
problems. PALS exhibits superior performance when compared to other
scalarization-based and random-search approaches.
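The abstract describes PALS only at a high level. As a rough illustration of the PAL-style classification rule it builds on, the Python sketch below labels each point of a finite design set using GP confidence boxes; the rule is simplified, and the scaling factor beta is an illustrative assumption rather than the authors' tuned schedule.

```python
import numpy as np

def classify_pal(mu, sigma, beta=2.0):
    """Label a finite design set as pareto / dominated / unclassified.

    mu, sigma : (n, 2) arrays of GP posterior means and standard
    deviations for two objectives (both minimized) at n candidates.
    beta scales the confidence boxes (illustrative choice).
    """
    lo = mu - beta * sigma  # optimistic corner of each confidence box
    hi = mu + beta * sigma  # pessimistic corner of each confidence box
    n = len(mu)
    labels = []
    for i in range(n):
        # surely dominated: some j's pessimistic corner is at least as
        # good as i's optimistic corner in both objectives
        dominated = any(j != i and np.all(hi[j] <= lo[i]) for j in range(n))
        # possibly dominated: some j's optimistic corner could still
        # dominate i's pessimistic corner
        maybe = any(j != i and np.all(lo[j] <= hi[i]) for j in range(n))
        labels.append("dominated" if dominated
                      else "unclassified" if maybe
                      else "pareto")
    return labels
```

In the PAL loop, the next evaluation is typically chosen among the unclassified points with the widest confidence box; PALS adapts this loop to noisy observations, for instance by letting the GP account for the simulator's output variance.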
Related papers
- Poisson Process for Bayesian Optimization [126.51200593377739]
We propose a ranking-based surrogate model based on the Poisson process and introduce an efficient BO framework, namely Poisson Process Bayesian Optimization (PoPBO).
Compared to the classic GP-BO method, our PoPBO has lower costs and better robustness to noise, which is verified by abundant experiments.
arXiv Detail & Related papers (2024-02-05T02:54:50Z)
- Simulation Based Bayesian Optimization [0.6526824510982799]
This paper introduces Simulation Based Bayesian Optimization (SBBO) as a novel approach to optimizing acquisition functions.
SBBO allows the use of surrogate models tailored for spaces with discrete variables.
We demonstrate empirically the effectiveness of SBBO method using various choices of surrogate models.
arXiv Detail & Related papers (2024-01-19T16:56:11Z)
- Learning Unnormalized Statistical Models via Compositional Optimization [73.30514599338407]
Noise-contrastive estimation (NCE) has been proposed, formulating the objective as the logistic loss between the real data and the artificial noise.
In this paper, we study a direct approach for optimizing the negative log-likelihood of unnormalized models.
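For context, the NCE objective is a binary logistic loss that discriminates data samples from noise samples. A minimal sketch, assuming vectorized log-densities and one noise sample per data point (all names are illustrative):

```python
import numpy as np

def nce_loss(log_phi, x_data, x_noise, log_pn):
    """Logistic NCE loss for an unnormalized model phi.

    log_phi : callable, log of the unnormalized model density
    log_pn  : callable, log of the (normalized) noise density
    """
    # G(u) = log phi(u) - log pn(u); data should score high, noise low
    g_data = log_phi(x_data) - log_pn(x_data)
    g_noise = log_phi(x_noise) - log_pn(x_noise)
    # stable -log sigmoid(G) and -log(1 - sigmoid(G)) via logaddexp
    return (np.logaddexp(0.0, -g_data).mean()
            + np.logaddexp(0.0, g_noise).mean())

# Example: standard-normal data vs. a wider Gaussian noise distribution
rng = np.random.default_rng(0)
x_d, x_n = rng.normal(0, 1, 1000), rng.normal(0, 2, 1000)
loss = nce_loss(lambda x: -0.5 * x**2,  # unnormalized log-density
                x_d, x_n,
                lambda x: -0.5 * (x / 2)**2 - np.log(2 * np.sqrt(2 * np.pi)))
```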
arXiv Detail & Related papers (2023-06-13T01:18:16Z)
- Sparse high-dimensional linear regression with a partitioned empirical Bayes ECM algorithm [62.997667081978825]
We propose a computationally efficient and powerful Bayesian approach for sparse high-dimensional linear regression.
Minimal prior assumptions on the parameters are required through the use of plug-in empirical Bayes estimates.
The proposed approach is implemented in the R package probe.
arXiv Detail & Related papers (2022-09-16T19:15:50Z)
- Multi-objective hyperparameter optimization with performance uncertainty [62.997667081978825]
This paper presents results on multi-objective hyperparameter optimization with uncertainty on the evaluation of Machine Learning algorithms.
We combine the sampling strategy of Tree-structured Parzen Estimators (TPE) with the metamodel obtained after training a Gaussian Process Regression (GPR) with heterogeneous noise.
Experimental results on three analytical test functions and three ML problems show the improvement over multi-objective TPE and GPR.
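As a hedged illustration of the heterogeneous-noise ingredient (not the paper's code): scikit-learn's GaussianProcessRegressor accepts a per-observation noise variance through its alpha argument, which is one way to model evaluations whose variance differs across configurations. The data and kernel below are placeholders.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(30, 2))          # hyperparameter configurations
noise_var = rng.uniform(0.01, 0.2, size=30)  # per-configuration noise variance
y = np.sin(3 * X[:, 0]) + rng.normal(0, np.sqrt(noise_var))

# alpha adds a per-point variance to the kernel diagonal, i.e.
# heterogeneous observation noise
gpr = GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=noise_var)
gpr.fit(X, y)
mean, std = gpr.predict(X, return_std=True)
```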
arXiv Detail & Related papers (2022-09-09T14:58:43Z)
- Bézier Flow: a Surface-wise Gradient Descent Method for Multi-objective Optimization [12.487037582320804]
We extend the stability of optimization algorithms in the sense of Probably Approximately Correct (PAC) learning.
We show that multi-objective optimization algorithms derived from a gradient descent-based single-objective optimization algorithm are PAC stable.
arXiv Detail & Related papers (2022-05-23T07:47:58Z)
- Momentum Accelerates the Convergence of Stochastic AUPRC Maximization [80.8226518642952]
We study stochastic optimization of the area under the precision-recall curve (AUPRC), which is widely used for imbalanced tasks.
We develop novel momentum methods with a better iteration complexity of $O(1/\epsilon^4)$ for finding an $\epsilon$-stationary solution.
We also design a novel family of adaptive methods with the same complexity of $O(1/\epsilon^4)$, which enjoy faster convergence in practice.
arXiv Detail & Related papers (2021-07-02T16:21:52Z)
- Stochastic Learning Approach to Binary Optimization for Optimal Design of Experiments [0.0]
We present a novel approach to binary optimization for optimal experimental design (OED) for Bayesian inverse problems governed by mathematical models such as partial differential equations.
The OED utility function, namely, the regularized optimality criterion, is cast into an objective function in the form of an expectation over a Bernoulli distribution.
The objective is then solved by using a probabilistic optimization routine to find an optimal observational policy.
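Casting the utility as an expectation over a Bernoulli distribution makes score-function (REINFORCE-type) gradients applicable. The sketch below optimizes a placeholder utility this way; it illustrates the generic technique, not the paper's OED objective or algorithm.

```python
import numpy as np

def utility(z):
    # placeholder utility of a binary design z (e.g. a sensor mask)
    return -np.sum(z) + 3.0 * np.sqrt(np.sum(z * np.arange(len(z))))

def optimize_bernoulli(n=10, steps=500, lr=0.05, batch=64, seed=0):
    rng = np.random.default_rng(seed)
    theta = np.zeros(n)                  # logits of activation probabilities
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-theta))
        z = rng.random((batch, n)) < p   # sample binary designs
        u = np.array([utility(zi) for zi in z])
        u -= u.mean()                    # baseline for variance reduction
        # score-function gradient of E_{z~Bern(p)}[U(z)] w.r.t. logits:
        # grad log Bern(z; sigmoid(theta)) = z - p
        theta += lr * (u[:, None] * (z - p)).mean(axis=0)
    p = 1.0 / (1.0 + np.exp(-theta))
    return (p > 0.5).astype(int), p
```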
arXiv Detail & Related papers (2021-01-15T03:54:12Z)
- Sequential Subspace Search for Functional Bayesian Optimization Incorporating Experimenter Intuition [63.011641517977644]
Our algorithm generates a sequence of finite-dimensional random subspaces of functional space spanned by a set of draws from the experimenter's Gaussian Process.
Standard Bayesian optimisation is applied on each subspace, and the best solution found is used as a starting point (origin) for the next subspace.
We test our algorithm in simulated and real-world experiments, namely blind function matching, finding the optimal precipitation-strengthening function for an aluminium alloy, and learning rate schedule optimisation for deep networks.
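A compressed sketch of the subspace idea under strong simplifications: GP draws on a grid span a random subspace of function space, the mixing coefficients are searched on that subspace (plain random search stands in here for the Bayesian optimisation used in the paper), and the best function found seeds the next subspace.

```python
import numpy as np

def gp_draws(grid, k, rng, ls=0.2):
    # k draws from a zero-mean GP with squared-exponential kernel on a 1-D grid
    K = np.exp(-0.5 * (grid[:, None] - grid[None, :]) ** 2 / ls**2)
    L = np.linalg.cholesky(K + 1e-8 * np.eye(len(grid)))
    return (L @ rng.standard_normal((len(grid), k))).T  # shape (k, n_grid)

def subspace_search(objective, grid, rounds=5, k=4, n_coef=200, seed=0):
    rng = np.random.default_rng(seed)
    origin = np.zeros(len(grid))          # current best function on the grid
    best = objective(origin)
    for _ in range(rounds):
        basis = gp_draws(grid, k, rng)    # spans this round's random subspace
        for _ in range(n_coef):           # random search in coefficient space
            f = origin + rng.standard_normal(k) @ basis
            v = objective(f)
            if v < best:
                best, origin = v, f       # best function seeds next subspace
    return origin, best

# Example: blind function matching against a hidden target on the grid
grid = np.linspace(0, 1, 50)
target = np.sin(6 * grid)
f_star, v_star = subspace_search(lambda f: np.mean((f - target) ** 2), grid)
```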
arXiv Detail & Related papers (2020-09-08T06:54:11Z)
- BOSH: Bayesian Optimization by Sampling Hierarchically [10.10241176664951]
We propose a novel BO routine pairing a hierarchical Gaussian process with an information-theoretic framework to generate a growing pool of realizations.
We demonstrate that BOSH provides more efficient and higher-precision optimization than standard BO across synthetic benchmarks, simulation optimization, reinforcement learning and hyper-parameter tuning tasks.
arXiv Detail & Related papers (2020-07-02T07:35:49Z)
- Reactive Sample Size for Heuristic Search in Simulation-based Optimization [2.9005223064604073]
This paper presents a novel reactive sample size algorithm based on parametric tests and indifference-zone selection.
Tests employ benchmark functions extended with artificial levels of noise and a simulation-based optimization tool for hotel revenue management.
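The interplay between parametric tests and indifference-zone selection can be illustrated by a reactive loop that keeps adding replications until a Welch t-test separates two candidate solutions or their estimated gap falls inside the indifference zone. This is a generic sketch, not the paper's algorithm; all thresholds are illustrative.

```python
import numpy as np
from scipy import stats

def compare_reactive(sim_a, sim_b, delta=0.1, alpha=0.05,
                     n0=10, n_step=10, n_max=500, seed=0):
    """Grow the sample size reactively until two candidates are resolved.

    sim_a, sim_b : callables taking a Generator, returning one noisy sample
    delta        : indifference zone; gaps below it count as a tie
    """
    rng = np.random.default_rng(seed)
    a = [sim_a(rng) for _ in range(n0)]
    b = [sim_b(rng) for _ in range(n0)]
    while len(a) < n_max:
        _, p = stats.ttest_ind(a, b, equal_var=False)  # Welch's t-test
        if p < alpha:                                  # means distinguishable
            return ("a" if np.mean(a) < np.mean(b) else "b"), len(a)
        if abs(np.mean(a) - np.mean(b)) < delta and len(a) >= 5 * n0:
            return "indifferent", len(a)               # within the zone
        a += [sim_a(rng) for _ in range(n_step)]       # add replications
        b += [sim_b(rng) for _ in range(n_step)]
    return "inconclusive", len(a)

# Example: candidate b is slightly worse under equal noise
winner, n_used = compare_reactive(lambda r: r.normal(1.0, 2.0),
                                  lambda r: r.normal(1.3, 2.0))
```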
arXiv Detail & Related papers (2020-05-25T14:38:55Z)
This list is automatically generated from the titles and abstracts of the papers on this site.