Black-Box Optimization with Local Generative Surrogates
- URL: http://arxiv.org/abs/2002.04632v2
- Date: Mon, 15 Jun 2020 19:49:24 GMT
- Title: Black-Box Optimization with Local Generative Surrogates
- Authors: Sergey Shirobokov, Vladislav Belavin, Michael Kagan, Andrey Ustyuzhanin, Atılım Güneş Baydin
- Abstract summary: In fields such as physics and engineering, many processes are modeled with non-differentiable simulators with intractable likelihoods.
We introduce the use of deep generative models to approximate the simulator in local neighborhoods of the parameter space.
In cases where the dependence of the simulator on the parameter space is constrained to a low dimensional submanifold, we observe that our method attains minima faster than baseline methods.
- Score: 6.04055755718349
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a novel method for gradient-based optimization of black-box
simulators using differentiable local surrogate models. In fields such as
physics and engineering, many processes are modeled with non-differentiable
simulators with intractable likelihoods. Optimization of these forward models
is particularly challenging, especially when the simulator is stochastic. To
address such cases, we introduce the use of deep generative models to
iteratively approximate the simulator in local neighborhoods of the parameter
space. We demonstrate that these local surrogates can be used to approximate
the gradient of the simulator, and thus enable gradient-based optimization of
simulator parameters. In cases where the dependence of the simulator on the
parameter space is constrained to a low dimensional submanifold, we observe
that our method attains minima faster than baseline methods, including Bayesian
optimization, numerical optimization, and approaches using score function
gradient estimators.
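The loop described in the abstract (sample the simulator around the current parameters, fit a differentiable surrogate locally, then back-propagate the objective through the surrogate) can be sketched as follows. This is a minimal illustration, not the authors' implementation: a small deterministic neural regressor stands in for the paper's deep generative surrogate, and the simulator, objective, trust-region width, and training settings are placeholder assumptions.

    # Minimal sketch (not the authors' code): local-surrogate gradient-based
    # optimization of a stochastic black-box simulator. A small neural regressor
    # stands in for the paper's deep generative surrogate; the simulator, the
    # objective, and all hyperparameters below are illustrative placeholders.
    import torch
    import torch.nn as nn

    def simulator(psi, n=64):
        # Placeholder stochastic simulator: in practice this is non-differentiable.
        noise = torch.randn(n, psi.shape[-1])
        return ((psi + noise).sum(dim=1, keepdim=True)) ** 2

    def objective(x):
        # Placeholder objective computed on the simulator output.
        return x.mean()

    psi = torch.zeros(2, requires_grad=True)   # simulator parameters to optimize
    psi_opt = torch.optim.SGD([psi], lr=1e-2)
    trust_region = 0.5                         # half-width of the local neighborhood

    for step in range(50):
        # 1) Sample parameters in a local neighborhood and run the real simulator.
        with torch.no_grad():
            psis = psi + trust_region * (2 * torch.rand(32, psi.shape[0]) - 1)
            xs = torch.stack([simulator(p).mean(dim=0) for p in psis])

        # 2) Fit a differentiable local surrogate x ~ f_theta(psi) on those samples.
        surrogate = nn.Sequential(nn.Linear(psi.shape[0], 64), nn.ReLU(), nn.Linear(64, 1))
        s_opt = torch.optim.Adam(surrogate.parameters(), lr=1e-2)
        for _ in range(200):
            s_opt.zero_grad()
            ((surrogate(psis) - xs) ** 2).mean().backward()
            s_opt.step()

        # 3) Back-propagate the objective through the surrogate to update psi.
        psi_opt.zero_grad()
        objective(surrogate(psi.unsqueeze(0))).backward()
        psi_opt.step()

In the method itself, the local surrogate is a deep generative model of the simulator's stochastic output rather than a point regressor, so the objective is averaged over generated samples before differentiating with respect to the parameters.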
Related papers
- Accelerate Neural Subspace-Based Reduced-Order Solver of Deformable Simulation by Lipschitz Optimization [9.364019847856714]
Reduced-order simulation is an emerging method for accelerating physical simulations with high DOFs.
We propose a method for finding optimized subspace mappings, enabling further acceleration of neural reduced-order simulations.
We demonstrate the effectiveness of our approach through general cases in both quasi-static and dynamics simulations.
arXiv Detail & Related papers (2024-09-05T12:56:03Z) - Global Optimisation of Black-Box Functions with Generative Models in the Wasserstein Space [0.28675177318965045]
Optimisation of black-box simulators is challenging, particularly for stochastic simulators and in higher dimensions.
We use a deep generative surrogate approach to model the black box response for the entire parameter space.
We then leverage this knowledge to compute the proposed uncertainty estimate, which is based on the Wasserstein distance.
arXiv Detail & Related papers (2024-07-16T17:09:47Z) - Multi-fidelity Constrained Optimization for Stochastic Black Box
Simulators [1.6385815610837167]
We introduce the algorithm Scout-Nd (Stochastic Constrained Optimization for N dimensions) to tackle the issues mentioned earlier.
Scout-Nd efficiently estimates the gradient, reduces the noise of the estimator gradient, and applies multi-fidelity schemes to further reduce computational effort.
We validate our approach on standard benchmarks, demonstrating its effectiveness in optimizing parameters and highlighting better performance compared to existing methods.
arXiv Detail & Related papers (2023-11-25T23:36:38Z) - Stochastic Marginal Likelihood Gradients using Neural Tangent Kernels [78.6096486885658]
We introduce lower bounds to the linearized Laplace approximation of the marginal likelihood.
These bounds are amenable to gradient-based optimization and allow trading off estimation accuracy against computational complexity.
arXiv Detail & Related papers (2023-06-06T19:02:57Z) - Surrogate Neural Networks for Efficient Simulation-based Trajectory
Planning Optimization [28.292234483886947]
This paper presents a novel methodology that uses surrogate models in the form of neural networks to reduce the computation time of simulation-based optimization of a reference trajectory.
We find a 74% better-performing reference trajectory compared to nominal, and the numerical results clearly show a substantial reduction in computation time for designing future trajectories.
arXiv Detail & Related papers (2023-03-30T15:44:30Z) - Optimization using Parallel Gradient Evaluations on Multiple Parameters [51.64614793990665]
We propose a first-order method for convex optimization, where gradients from multiple parameters can be used during each step of gradient descent.
Our method uses gradients from multiple parameters in synergy to update these parameters together towards the optima.
arXiv Detail & Related papers (2023-02-06T23:39:13Z) - Neural Posterior Estimation with Differentiable Simulators [58.720142291102135]
We present a new method to perform Neural Posterior Estimation (NPE) with a differentiable simulator.
We demonstrate how gradient information helps constrain the shape of the posterior and improves sample-efficiency.
arXiv Detail & Related papers (2022-07-12T16:08:04Z) - Differentiable Agent-Based Simulation for Gradient-Guided
Simulation-Based Optimization [0.0]
Gradient estimation methods can be used to steer the optimization towards a local optimum.
In traffic signal timing optimization problems with high input dimension, the gradient-based methods exhibit substantially superior performance.
arXiv Detail & Related papers (2021-03-23T11:58:21Z) - Zeroth-Order Hybrid Gradient Descent: Towards A Principled Black-Box
Optimization Framework [100.36569795440889]
This work is on zeroth-order (ZO) optimization, which does not require first-order information.
We show that with a graceful design in coordinate importance sampling, the proposed ZO optimization method is efficient both in terms of complexity and function query cost.
arXiv Detail & Related papers (2020-12-21T17:29:58Z) - Self-Tuning Stochastic Optimization with Curvature-Aware Gradient
Filtering [53.523517926927894]
We explore the use of exact per-sample Hessian-vector products and gradients to construct self-tuning quadratics.
We prove that our model-based procedure converges in the noisy gradient setting.
This is an interesting step toward constructing self-tuning quadratics.
arXiv Detail & Related papers (2020-11-09T22:07:30Z) - Global Optimization of Gaussian processes [52.77024349608834]
We propose a reduced-space formulation with Gaussian processes trained on few data points.
The approach also leads to significantly smaller and computationally cheaper sub-solvers for lower bounding.
In total, the proposed method reduces convergence time by orders of magnitude.
arXiv Detail & Related papers (2020-05-21T20:59:11Z)