Adaptive Replication Strategies in Trust-Region-Based Bayesian Optimization of Stochastic Functions
- URL: http://arxiv.org/abs/2504.20527v1
- Date: Tue, 29 Apr 2025 08:13:16 GMT
- Title: Adaptive Replication Strategies in Trust-Region-Based Bayesian Optimization of Stochastic Functions
- Authors: Mickael Binois, Jeffrey Larson
- Abstract summary: We develop and analyze a method for simulation optimization relying on Gaussian process models. We propose several schemes to encourage replication, from the choice of the acquisition function to the setup of evaluation costs.
- Score: 0.8379286663107846
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We develop and analyze a method for stochastic simulation optimization relying on Gaussian process models within a trust-region framework. We are interested in the case when the variance of the objective function is large. We propose to rely on replication and local modeling to cope with this high-throughput regime, where many evaluations may be needed to obtain accurate results while still maintaining good performance. We propose several schemes to encourage replication, from the choice of the acquisition function to the setup of evaluation costs. Compared with existing methods, our results indicate good scaling in terms of both accuracy (several orders of magnitude better than existing methods) and speed (taking evaluation costs into account).
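The replication idea in the abstract can be illustrated with a minimal sketch: average the replicates at each design point and feed the empirical variance of that average into a Gaussian process as per-point noise. The RBF kernel, lengthscale, and function names below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def rbf_kernel(a, b, lengthscale=0.3):
    """Squared-exponential kernel for 1-D inputs (illustrative choice)."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / lengthscale ** 2)

def gp_mean_with_replicates(x_design, y_replicates, x_test):
    """GP posterior mean on replicate-averaged observations.

    Each design point carries its own noise level: the empirical variance
    of its replicates divided by the replicate count, so points with more
    replicates are trusted more.
    """
    y_bar = np.array([np.mean(y) for y in y_replicates])
    noise = np.array([np.var(y, ddof=1) / len(y) for y in y_replicates])
    K = rbf_kernel(x_design, x_design) + np.diag(noise)
    return rbf_kernel(x_test, x_design) @ np.linalg.solve(K, y_bar)

rng = np.random.default_rng(0)
x_design = np.array([0.0, 0.5, 1.0])
# 20 noisy replicates per design point of a high-variance objective.
replicates = [rng.normal(np.sin(2 * np.pi * xi), 0.5, size=20) for xi in x_design]
mu = gp_mean_with_replicates(x_design, replicates, np.array([0.25]))
```

An acquisition layer that favors re-evaluating existing points while their local noise estimate is still large would sit on top of this model; that layer is omitted here.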
Related papers
- Stochastic Optimization with Optimal Importance Sampling [49.484190237840714]
We propose an iterative algorithm that jointly updates the decision and the IS distribution without requiring time-scale separation between the two. Our method achieves the lowest possible variance and guarantees global convergence under convexity of the objective and mild assumptions on the IS distribution family.
arXiv Detail & Related papers (2025-04-04T16:10:18Z) - Semiparametric Counterfactual Regression [2.356908851188234]
We propose a doubly robust-style estimator for counterfactual regression within a generalizable framework. Our approach uses incremental interventions to enhance adaptability while maintaining compatibility with standard methods. Our analysis shows that the proposed estimators can achieve $\sqrt{n}$-consistency and asymptotic normality for a broad class of problems.
arXiv Detail & Related papers (2025-04-03T15:32:26Z) - Truncating Trajectories in Monte Carlo Policy Evaluation: an Adaptive Approach [51.76826149868971]
Policy evaluation via Monte Carlo simulation is at the core of many MC Reinforcement Learning (RL) algorithms.
We propose as a quality index a surrogate of the mean squared error of a return estimator that uses trajectories of different lengths.
We present an adaptive algorithm called Robust and Iterative Data collection strategy Optimization (RIDO).
arXiv Detail & Related papers (2024-10-17T11:47:56Z) - Differentiating Policies for Non-Myopic Bayesian Optimization [5.793371273485735]
We show how to efficiently estimate rollout functions and their gradients, enabling sampling policies.
arXiv Detail & Related papers (2024-08-14T21:00:58Z) - Optimal simulation-based Bayesian decisions [0.0]
We present a framework for the efficient computation of optimal Bayesian decisions under intractable likelihoods.
We develop active learning schemes to choose where in parameter and action spaces to simulate.
The resulting framework is extremely simulation efficient, typically requiring fewer model calls than the associated posterior inference task alone.
arXiv Detail & Related papers (2023-11-09T20:59:52Z) - Numerically Stable Sparse Gaussian Processes via Minimum Separation using Cover Trees [57.67528738886731]
We study the numerical stability of scalable sparse approximations based on inducing points.
For low-dimensional tasks such as geospatial modeling, we propose an automated method for computing inducing points satisfying these conditions.
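The minimum-separation condition behind this paper's stability result can be illustrated with a hypothetical greedy filter: keep a candidate inducing point only if it is sufficiently far from all points already kept. The threshold and the single greedy pass are illustrative assumptions; the paper's cover-tree construction is more sophisticated.

```python
import numpy as np

def select_inducing_points(points, min_sep):
    """Greedy filter: keep a point only if its distance to every
    previously kept point is at least min_sep (hypothetical sketch)."""
    kept = []
    for p in points:
        if all(np.linalg.norm(p - q) >= min_sep for q in kept):
            kept.append(p)
    return np.array(kept)

rng = np.random.default_rng(1)
X = rng.uniform(size=(200, 2))          # candidate training inputs
Z = select_inducing_points(X, min_sep=0.2)
```

Well-separated inducing points keep the kernel matrix on Z well conditioned, which is the numerical-stability property the paper formalizes.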
arXiv Detail & Related papers (2022-10-14T15:20:17Z) - Bayesian Optimization with Informative Covariance [13.113313427848828]
We propose novel informative covariance functions for optimization, leveraging nonstationarity to encode preferences for certain regions of the search space.
We demonstrate that the proposed functions can increase the sample efficiency of Bayesian optimization in high dimensions, even under weak prior information.
arXiv Detail & Related papers (2022-08-04T15:05:11Z) - Real-Time Optimization Meets Bayesian Optimization and Derivative-Free Optimization: A Tale of Modifier Adaptation [0.0]
This paper investigates a new class of modifier-adaptation schemes to overcome plant-model mismatch in real-time optimization of uncertain processes.
The proposed schemes embed a physical model and rely on trust-region ideas to minimize risk during the exploration.
The benefits of using an acquisition function, knowing the process noise level, or specifying a nominal process model are illustrated.
arXiv Detail & Related papers (2020-09-18T12:57:17Z) - Robust, Accurate Stochastic Optimization for Variational Inference [68.83746081733464]
We show that common optimization methods lead to poor variational approximations if the problem is moderately large.
Motivated by these findings, we develop a more robust and accurate optimization framework by viewing the underlying algorithm as producing a Markov chain.
arXiv Detail & Related papers (2020-09-01T19:12:11Z) - Real-Time Regression with Dividing Local Gaussian Processes [62.01822866877782]
Local Gaussian processes are a novel, computationally efficient modeling approach based on Gaussian process regression.
Due to an iterative, data-driven division of the input space, they achieve a sublinear computational complexity in the total number of training points in practice.
A numerical evaluation on real-world data sets shows their advantages over other state-of-the-art methods in terms of accuracy as well as prediction and update speed.
arXiv Detail & Related papers (2020-06-16T18:43:31Z) - Global Optimization of Gaussian processes [52.77024349608834]
We propose a reduced-space formulation with Gaussian processes trained on few data points.
The approach also leads to significantly smaller and computationally cheaper subproblems for lower bounding.
In total, the proposed method reduces the time to convergence by orders of magnitude.
arXiv Detail & Related papers (2020-05-21T20:59:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.