Bayesian Optimisation vs. Input Uncertainty Reduction
- URL: http://arxiv.org/abs/2006.00643v1
- Date: Sun, 31 May 2020 23:42:22 GMT
- Title: Bayesian Optimisation vs. Input Uncertainty Reduction
- Authors: Juan Ungredda, Michael Pearce, Juergen Branke
- Abstract summary: We consider the trade-off between simulation and real data collection in order to find the optimal solution of the simulator with the true inputs.
We propose a novel unified simulation optimisation procedure called Bayesian Information Collection and Optimisation (BICO).
- Score: 1.0497128347190048
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Simulators often require calibration inputs estimated from real world data
and the quality of the estimate can significantly affect simulation output.
Particularly when performing simulation optimisation to find an optimal
solution, the uncertainty in the inputs significantly affects the quality of
the found solution. One remedy is to search for the solution that has the best
performance on average over the uncertain range of inputs yielding an optimal
compromise solution. We consider the more general setting where a user may
choose between either running simulations or instead collecting real world
data. A user may choose an input and a solution and observe the simulation
output, or instead query an external data source improving the input estimate
enabling the search for a more focused, less compromised solution. We
explicitly examine the trade-off between simulation and real data collection in
order to find the optimal solution of the simulator with the true inputs. Using
a value of information procedure, we propose a novel unified simulation
optimisation procedure called Bayesian Information Collection and Optimisation
(BICO) that, in each iteration, automatically determines which of the two
actions (running simulations or data collection) is more beneficial. Numerical
experiments demonstrate that the proposed algorithm is able to automatically
determine an appropriate balance between optimisation and data collection.
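The per-iteration choice described in the abstract can be sketched as a toy loop. The `voi_simulation` and `voi_data` proxies below are illustrative stand-ins only (in the paper, the value of information of each action is derived from a Gaussian process posterior over simulation output and from the posterior over the uncertain inputs), not the authors' implementation:

```python
def voi_simulation(model):
    # Hypothetical proxy: expected benefit of one more simulation run,
    # modeled as shrinking with the number of simulations already performed.
    # (BICO computes this from a Gaussian process posterior instead.)
    return 1.0 / (1 + model["n_sims"])

def voi_data(model):
    # Hypothetical proxy: expected benefit of one more real-world
    # observation, modeled as shrinking as the input-data sample grows.
    return 0.8 / (1 + model["n_data"])

def bico_step(model):
    """One BICO-style iteration: take whichever action currently has the
    higher value of information, then update the corresponding counter."""
    if voi_simulation(model) >= voi_data(model):
        model["n_sims"] += 1
        return "simulate"
    model["n_data"] += 1
    return "collect"

def run(budget):
    """Spend a fixed budget, one action per iteration."""
    model = {"n_sims": 0, "n_data": 0}
    return [bico_step(model) for _ in range(budget)]

print(run(5))  # the two actions interleave as each proxy decays
```

With these decaying proxies the loop automatically alternates between simulating and collecting data, which is the qualitative behaviour the paper's numerical experiments demonstrate for the real value-of-information criterion.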
Related papers
- Bayesian Adaptive Calibration and Optimal Design [16.821341360894706]
Current machine learning approaches mostly rely on rerunning simulations over a fixed set of designs available in the observed data.
We propose a data-efficient algorithm to run maximally informative simulations within a batch-sequential process.
We show the benefits of our method when compared to related approaches across synthetic and real-data problems.
arXiv Detail & Related papers (2024-05-23T11:14:35Z)
- Optimize-via-Predict: Realizing out-of-sample optimality in data-driven optimization [0.0]
We examine a formulation for data-driven optimization wherein the decision-maker is not privy to the true distribution.
We define a prescriptive solution as a decision rule mapping such a data set to decisions.
We present an optimization problem that solves for such an out-of-sample optimal solution, and does so efficiently by combining sampling and bisection search algorithms.
arXiv Detail & Related papers (2023-09-20T08:48:50Z)
- Surrogate Neural Networks for Efficient Simulation-based Trajectory Planning Optimization [28.292234483886947]
This paper presents a novel methodology that uses surrogate models in the form of neural networks to reduce the computation time of simulation-based optimization of a reference trajectory.
We find a 74% better-performing reference trajectory compared to the nominal one, and the numerical results clearly show a substantial reduction in computation time for designing future trajectories.
arXiv Detail & Related papers (2023-03-30T15:44:30Z)
- Data-Driven Offline Decision-Making via Invariant Representation Learning [97.49309949598505]
Offline data-driven decision-making involves synthesizing optimized decisions with no active interaction.
A key challenge is distributional shift: when we optimize with respect to the input into a model trained from offline data, it is easy to produce an out-of-distribution (OOD) input that appears erroneously good.
In this paper, we formulate offline data-driven decision-making as domain adaptation, where the goal is to make accurate predictions for the value of optimized decisions.
arXiv Detail & Related papers (2022-11-21T11:01:37Z)
- Data-driven evolutionary algorithm for oil reservoir well-placement and control optimization [3.012067935276772]
A generalized data-driven evolutionary algorithm (GDDE) is proposed to reduce the number of simulation runs on well-placement and control optimization problems.
A probabilistic neural network (PNN) is adopted as the classifier to select informative and promising candidates.
arXiv Detail & Related papers (2022-06-07T09:07:49Z)
- Local policy search with Bayesian optimization [73.0364959221845]
Reinforcement learning aims to find an optimal policy by interaction with an environment.
Policy gradients for local search are often obtained from random perturbations.
We develop an algorithm utilizing a probabilistic model of the objective function and its gradient.
arXiv Detail & Related papers (2021-06-22T16:07:02Z)
- AutoSimulate: (Quickly) Learning Synthetic Data Generation [70.82315853981838]
We propose an efficient alternative for optimal synthetic data generation based on a novel differentiable approximation of the objective.
We demonstrate that the proposed method finds the optimal data distribution faster (up to $50\times$), with significantly reduced training data generation (up to $30\times$) and better accuracy ($+8.7\%$) on real-world test datasets than previous methods.
arXiv Detail & Related papers (2020-08-16T11:36:11Z)
- Continuous Optimization Benchmarks by Simulation [0.0]
Benchmark experiments are required to test, compare, tune, and understand optimization algorithms.
Data from previous evaluations can be used to train surrogate models which are then used for benchmarking.
We show that the spectral simulation method enables simulation for continuous optimization problems.
arXiv Detail & Related papers (2020-08-14T08:50:57Z)
- Global Optimization of Gaussian processes [52.77024349608834]
We propose a reduced-space formulation with Gaussian processes trained on few data points.
The approach also leads to a significantly smaller and computationally cheaper sub-solver for lower bounding.
In total, the proposed method reduces optimization time by orders of magnitude.
arXiv Detail & Related papers (2020-05-21T20:59:11Z)
- Self-Directed Online Machine Learning for Topology Optimization [58.920693413667216]
Self-directed Online Learning Optimization integrates a deep neural network (DNN) with finite element method (FEM) calculations.
Our algorithm was tested by four types of problems including compliance minimization, fluid-structure optimization, heat transfer enhancement and truss optimization.
It reduced computational time by 2 to 5 orders of magnitude compared with direct methods, and outperformed all state-of-the-art algorithms tested in our experiments.
arXiv Detail & Related papers (2020-02-04T20:00:28Z)
- Model Inversion Networks for Model-Based Optimization [110.24531801773392]
We propose model inversion networks (MINs), which learn an inverse mapping from scores to inputs.
MINs can scale to high-dimensional input spaces and leverage offline logged data for both contextual and non-contextual optimization problems.
We evaluate MINs on tasks from the Bayesian optimization literature, high-dimensional model-based optimization problems over images and protein designs, and contextual bandit optimization from logged data.
arXiv Detail & Related papers (2019-12-31T18:06:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.