Adaptive Batching for Gaussian Process Surrogates with Application in
Noisy Level Set Estimation
- URL: http://arxiv.org/abs/2003.08579v2
- Date: Tue, 13 Jul 2021 05:56:02 GMT
- Title: Adaptive Batching for Gaussian Process Surrogates with Application in
Noisy Level Set Estimation
- Authors: Xiong Lyu and Mike Ludkovski
- Abstract summary: We develop adaptive replicated designs for Gaussian process metamodels of stochastic experiments.
We develop five novel schemes: Multi-Level Batching (MLB), Ratchet Batching (RB), Adaptive Batched Stepwise Uncertainty Reduction (ABSUR), Adaptive Design with Stepwise Allocation (ADSA) and Deterministic Design with Stepwise Allocation (DDSA).
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We develop adaptive replicated designs for Gaussian process metamodels of
stochastic experiments. Adaptive batching is a natural extension of sequential
design heuristics with the benefit of replication growing as response features
are learned, inputs concentrate, and the metamodeling overhead rises. Motivated
by the problem of learning the level set of the mean simulator response we
develop five novel schemes: Multi-Level Batching (MLB), Ratchet Batching (RB),
Adaptive Batched Stepwise Uncertainty Reduction (ABSUR), Adaptive Design with
Stepwise Allocation (ADSA) and Deterministic Design with Stepwise Allocation
(DDSA). Our algorithms simultaneously (MLB, RB and ABSUR) or sequentially (ADSA
and DDSA) determine the sequential design inputs and the respective number of
replicates. Illustrations using synthetic examples and an application in
quantitative finance (Bermudan option pricing via Regression Monte Carlo) show
that adaptive batching brings significant computational speed-ups with minimal
loss of modeling fidelity.
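The abstract's idea can be illustrated with a minimal sketch: a GP surrogate is fit to averaged replicated observations of a noisy simulator, new inputs are chosen where the sign of (mean - threshold) is most ambiguous, and the replicate count grows as the design matures. This is an illustrative toy in the spirit of the paper, not the actual MLB/RB/ABSUR/ADSA/DDSA allocation rules; the simulator, RBF kernel length-scale, noise level, and linear replicate-growth rule are all assumptions made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
NOISE_SD = 0.3  # assumed observation noise; true mean response is sin(x)

def simulator(x):
    """Noisy toy simulator: mean sin(x), i.i.d. Gaussian noise."""
    return np.sin(x) + NOISE_SD * rng.standard_normal()

def rbf(a, b, ls=0.5):
    """Squared-exponential kernel with unit signal variance (assumed)."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls**2)

def gp_posterior(X, y, noise_var, Xs):
    """GP posterior mean/sd at Xs from averaged replicated observations.

    noise_var holds the per-design-point noise variance sigma^2 / r:
    averaging r replicates shrinks the effective noise by 1/r."""
    K = rbf(X, X) + np.diag(noise_var)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    Ks = rbf(Xs, X)
    mu = Ks @ alpha
    v = np.linalg.solve(L, Ks.T)
    var = 1.0 - np.sum(v * v, axis=0)  # prior variance is 1
    return mu, np.sqrt(np.maximum(var, 1e-12))

threshold = 0.0                     # target level set {x : E f(x) = 0}
grid = np.linspace(-3, 3, 61)       # candidate inputs
X, y, nv = [], [], []               # design points, averaged obs, noise vars
for step in range(10):
    if step < 2:
        x_new = [-2.0, 2.0][step]   # small space-filling initial design
    else:
        mu, sd = gp_posterior(np.array(X), np.array(y), np.array(nv), grid)
        # acquisition: pick where the sign of (mu - threshold) is least certain
        x_new = grid[np.argmin(np.abs(mu - threshold) / sd)]
    r = 1 + step                    # replicate count grows with the round
    reps = [simulator(x_new) for _ in range(r)]
    X.append(x_new)
    y.append(np.mean(reps))
    nv.append(NOISE_SD**2 / r)

mu, sd = gp_posterior(np.array(X), np.array(y), np.array(nv), grid)
ambiguous = grid[np.abs(mu - threshold) < 2 * sd]  # still-uncertain region
```

The key design choice mirrored from the abstract is that replication grows over rounds while the acquisition concentrates inputs near the estimated level set, so later rounds spend their budget reducing noise where it matters rather than exploring.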
Related papers
- Reinforcement learning for anisotropic p-adaptation and error estimation in high-order solvers [0.37109226820205005]
We present a novel approach to automate and optimize anisotropic p-adaptation in high-order h/p solvers using Reinforcement Learning (RL).
We develop an offline training approach, decoupled from the main solver, which incurs minimal extra cost when performing simulations.
We derive an inexpensive RL-based error estimation approach that enables the quantification of local discretization errors.
arXiv Detail & Related papers (2024-07-26T17:55:23Z) - HAAP: Vision-context Hierarchical Attention Autoregressive with Adaptive Permutation for Scene Text Recognition [17.412985505938508]
Internal Language Model (LM)-based methods use permutation language modeling (PLM) to address the error-correction failures caused by the conditional-independence assumption in external LM-based methods.
This paper proposes the Hierarchical Attention autoregressive Model with Adaptive Permutation (HAAP) to enhance the location-context-image interaction capability.
arXiv Detail & Related papers (2024-05-15T06:41:43Z) - Let's reward step by step: Step-Level reward model as the Navigators for
Reasoning [64.27898739929734]
Process-Supervised Reward Model (PRM) furnishes LLMs with step-by-step feedback during the training phase.
We propose a greedy search algorithm that employs the step-level feedback from PRM to optimize the reasoning pathways explored by LLMs.
To explore the versatility of our approach, we develop a novel method to automatically generate step-level reward datasets for coding tasks and observe similar performance improvements in code generation tasks.
arXiv Detail & Related papers (2023-10-16T05:21:50Z) - Federated Conditional Stochastic Optimization [110.513884892319]
Conditional stochastic optimization has found applications in a wide range of machine learning tasks, such as invariant learning, AUPRC maximization, and MAML.
This paper proposes algorithms for conditional stochastic optimization in the distributed federated learning setting.
arXiv Detail & Related papers (2023-10-04T01:47:37Z) - Conditional Denoising Diffusion for Sequential Recommendation [62.127862728308045]
Two prominent generative models, Generative Adversarial Networks (GANs) and Variational AutoEncoders (VAEs), each have drawbacks: GANs suffer from unstable optimization, while VAEs are prone to posterior collapse and over-smoothed generations.
We present a conditional denoising diffusion model, which includes a sequence encoder, a cross-attentive denoising decoder, and a step-wise diffuser.
arXiv Detail & Related papers (2023-04-22T15:32:59Z) - Quantum Natural Gradient with Efficient Backtracking Line Search [0.0]
We present an adaptive implementation of QNGD based on Armijo's rule, which is an efficient backtracking line search.
Our results are yet another confirmation of the importance of differential geometry in variational quantum computations.
arXiv Detail & Related papers (2022-11-01T17:29:32Z) - A Modular Framework for Reinforcement Learning Optimal Execution [68.8204255655161]
We develop a modular framework for the application of Reinforcement Learning to the problem of Optimal Trade Execution.
The framework is designed with flexibility in mind, in order to ease the implementation of different simulation setups.
arXiv Detail & Related papers (2022-08-11T09:40:42Z) - Coarse-to-Fine Embedded PatchMatch and Multi-Scale Dynamic Aggregation
for Reference-based Super-Resolution [48.093500219958834]
We propose an Accelerated Multi-Scale Aggregation network (AMSA) for Reference-based Super-Resolution.
The proposed AMSA achieves superior performance over state-of-the-art approaches on both quantitative and qualitative evaluations.
arXiv Detail & Related papers (2022-01-12T08:40:23Z) - Model Selection for Bayesian Autoencoders [25.619565817793422]
We propose to optimize the distributional sliced-Wasserstein distance between the output of the autoencoder and the empirical data distribution.
We turn our BAE into a generative model by fitting a flexible Dirichlet mixture model in the latent space.
We evaluate our approach qualitatively and quantitatively using a vast experimental campaign on a number of unsupervised learning tasks and show that, in small-data regimes where priors matter, our approach provides state-of-the-art results.
arXiv Detail & Related papers (2021-06-11T08:55:00Z) - Reinforcement Learning for Adaptive Mesh Refinement [63.7867809197671]
We propose a novel formulation of AMR as a Markov decision process and apply deep reinforcement learning to train refinement policies directly from simulation.
The model sizes of these policy architectures are independent of the mesh size and hence scale to arbitrarily large and complex simulations.
arXiv Detail & Related papers (2021-03-01T22:55:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.