Reactive Sample Size for Heuristic Search in Simulation-based
Optimization
- URL: http://arxiv.org/abs/2005.12141v1
- Date: Mon, 25 May 2020 14:38:55 GMT
- Title: Reactive Sample Size for Heuristic Search in Simulation-based
Optimization
- Authors: Manuel Dalcastagné, Andrea Mariello, Roberto Battiti
- Abstract summary: This paper presents a novel reactive sample size algorithm based on parametric tests and indifference-zone selection.
Tests employ benchmark functions extended with artificial levels of noise and a simulation-based optimization tool for hotel revenue management.
- Score: 2.9005223064604073
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In simulation-based optimization, the optimal setting of the input parameters
of the objective function can be determined by heuristic optimization
techniques. However, when simulators model the stochasticity of real-world
problems, their output is a random variable and multiple evaluations of the
objective function are necessary to properly compare the expected performance
of different parameter settings. This paper presents a novel reactive sample
size algorithm based on parametric tests and indifference-zone selection, which
can be used for improving the efficiency and robustness of heuristic
optimization methods. The algorithm reactively decides, in an online manner,
the sample size to be used for each comparison during the optimization
according to observed statistical evidence. Tests employ benchmark functions
extended with artificial levels of noise and a simulation-based optimization
tool for hotel revenue management. Experimental results show that the reactive
method can improve the efficiency and robustness of simulation-based
optimization techniques.
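The reactive comparison described in the abstract can be sketched as follows. This is a minimal illustration reconstructed from the abstract alone, not the authors' exact algorithm: the function name, batch size, and the choice of Welch's t-test as the parametric test are all assumptions.

```python
import numpy as np
from scipy import stats

def reactive_compare(sample_a, sample_b, delta=0.1, alpha=0.05,
                     batch=5, max_n=200):
    """Compare two noisy configurations, drawing evaluations in small
    batches until a Welch t-test is significant at level alpha or the
    observed mean difference falls inside the indifference zone delta.

    sample_a, sample_b: callables returning one noisy evaluation each.
    Returns -1 if A is better (lower mean), +1 if B is better,
    0 if the two are practically equivalent (within delta)."""
    a = [sample_a() for _ in range(batch)]
    b = [sample_b() for _ in range(batch)]
    while len(a) < max_n:
        t, p = stats.ttest_ind(a, b, equal_var=False)  # Welch's t-test
        diff = np.mean(a) - np.mean(b)
        if p < alpha:                        # statistically significant
            return -1 if diff < 0 else 1
        if abs(diff) < delta and len(a) >= 2 * batch:
            return 0                         # inside the indifference zone
        a += [sample_a() for _ in range(batch)]   # draw more samples
        b += [sample_b() for _ in range(batch)]
    # budget exhausted: fall back to the observed means
    diff = np.mean(a) - np.mean(b)
    if abs(diff) < delta:
        return 0
    return -1 if diff < 0 else 1
```

The point of reacting online is that clearly separated configurations are resolved with a handful of samples, while the budget is spent only on genuinely close comparisons.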
Related papers
- Quantum-Enhanced Simulation-Based Optimization for Newsvendor Problems [5.500172106704342]
We exploit the enhanced efficiency of Quantum Amplitude Estimation (QAE) compared to classical Monte Carlo simulation.
In this work, we make use of a quantum-enhanced algorithm for simulation-based optimization and apply it to solve a variant of the classical Newsvendor problem known to be NP-hard.
arXiv Detail & Related papers (2024-03-26T05:14:50Z) - End-to-End Learning for Fair Multiobjective Optimization Under
Uncertainty [55.04219793298687]
The Predict-Then-Optimize (PtO) paradigm in machine learning aims to maximize downstream decision quality.
This paper extends the PtO methodology to optimization problems with nondifferentiable Ordered Weighted Averaging (OWA) objectives.
It shows how optimization of OWA functions can be effectively integrated with parametric prediction for fair and robust optimization under uncertainty.
arXiv Detail & Related papers (2024-02-12T16:33:35Z) - Multi-fidelity Constrained Optimization for Stochastic Black Box
Simulators [1.6385815610837167]
We introduce the algorithm Scout-Nd (Stochastic Constrained Optimization for N dimensions) to tackle the issues mentioned earlier.
Scout-Nd efficiently estimates the gradient, reduces the noise of the estimator gradient, and applies multi-fidelity schemes to further reduce computational effort.
We validate our approach on standard benchmarks, demonstrating its effectiveness in optimizing parameters and its better performance compared to existing methods.
arXiv Detail & Related papers (2023-11-25T23:36:38Z) - Federated Conditional Stochastic Optimization [110.513884892319]
Conditional stochastic optimization has found applications in a wide range of machine learning tasks, such as invariant learning, AUPRC maximization, and AML.
This paper proposes algorithms for distributed federated learning.
arXiv Detail & Related papers (2023-10-04T01:47:37Z) - An Empirical Evaluation of Zeroth-Order Optimization Methods on
AI-driven Molecule Optimization [78.36413169647408]
We study the effectiveness of various ZO optimization methods for optimizing molecular objectives.
We show the advantages of ZO sign-based gradient descent (ZO-signGD).
We demonstrate the potential effectiveness of ZO optimization methods on widely used benchmark tasks from the Guacamol suite.
arXiv Detail & Related papers (2022-10-27T01:58:10Z) - Sparse high-dimensional linear regression with a partitioned empirical
Bayes ECM algorithm [62.997667081978825]
We propose a computationally efficient and powerful Bayesian approach for sparse high-dimensional linear regression.
Minimal prior assumptions on the parameters are made through plug-in empirical Bayes estimates.
The proposed approach is implemented in the R package probe.
arXiv Detail & Related papers (2022-09-16T19:15:50Z) - Bayesian multi-objective optimization for stochastic simulators: an
extension of the Pareto Active Learning method [0.0]
This article focuses on the multi-objective optimization of simulators with high output variance.
We rely on Bayesian optimization algorithms to make predictions about the functions to be optimized.
arXiv Detail & Related papers (2022-07-08T11:51:48Z) - Adaptive Sampling Quasi-Newton Methods for Zeroth-Order Stochastic
Optimization [1.7513645771137178]
We consider unconstrained optimization problems with no available gradient information.
We propose an adaptive sampling quasi-Newton method where we estimate the gradients of a simulation function using finite differences within a common random number framework.
We develop modified versions of a norm test and an inner product quasi-Newton test to control the sample sizes used in the approximations and provide global convergence results to the neighborhood of the optimal solution.
arXiv Detail & Related papers (2021-09-24T21:49:25Z) - Bilevel Optimization: Convergence Analysis and Enhanced Design [63.64636047748605]
Bilevel optimization is a tool for many machine learning problems.
We propose a novel sample-efficient stochastic gradient estimator named stocBiO.
arXiv Detail & Related papers (2020-10-15T18:09:48Z) - Sequential Subspace Search for Functional Bayesian Optimization
Incorporating Experimenter Intuition [63.011641517977644]
Our algorithm generates a sequence of finite-dimensional random subspaces of functional space spanned by a set of draws from the experimenter's Gaussian Process.
Standard Bayesian optimisation is applied on each subspace, and the best solution found used as a starting point (origin) for the next subspace.
We test our algorithm in simulated and real-world experiments, namely blind function matching, finding the optimal precipitation-strengthening function for an aluminium alloy, and learning rate schedule optimisation for deep networks.
arXiv Detail & Related papers (2020-09-08T06:54:11Z) - BOSH: Bayesian Optimization by Sampling Hierarchically [10.10241176664951]
We propose a novel BO routine pairing a hierarchical Gaussian process with an information-theoretic framework to generate a growing pool of realizations.
We demonstrate that BOSH provides more efficient and higher-precision optimization than standard BO across synthetic benchmarks, simulation optimization, reinforcement learning and hyperparameter tuning tasks.
arXiv Detail & Related papers (2020-07-02T07:35:49Z)
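The Ordered Weighted Averaging (OWA) objective mentioned in the fair multiobjective entry above aggregates outcomes by their rank rather than their index: the outcome vector is sorted and then dotted with a fixed weight vector. A minimal sketch follows; the function name and the worst-first ordering convention are assumptions (conventions vary across the OWA literature).

```python
import numpy as np

def owa(values, weights):
    """Ordered Weighted Averaging: sort outcomes from worst (smallest)
    to best, then take the weighted sum. Placing larger weights on the
    earlier (worst) positions emphasizes the worst-off objectives,
    which is what makes OWA a fairness-oriented aggregation."""
    v = np.sort(np.asarray(values, dtype=float))   # ascending: worst first
    w = np.asarray(weights, dtype=float)
    return float(v @ w)

# Concentrating all weight on the first sorted position recovers min():
print(owa([3.0, 1.0, 2.0], [1.0, 0.0, 0.0]))  # 1.0, the worst outcome
```

Because sorting is nondifferentiable, optimizing OWA objectives end-to-end requires the specialized integration techniques that the paper above develops.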
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.