Deep Bayesian Active Learning for Accelerating Stochastic Simulation
- URL: http://arxiv.org/abs/2106.02770v7
- Date: Mon, 5 Jun 2023 02:29:34 GMT
- Title: Deep Bayesian Active Learning for Accelerating Stochastic Simulation
- Authors: Dongxia Wu, Ruijia Niu, Matteo Chinazzi, Alessandro Vespignani, Yi-An
Ma, Rose Yu
- Abstract summary: Interactive Neural Process (INP) is a deep Bayesian active learning framework for learning surrogate models that accelerate stochastic simulations.
For active learning, we propose a novel acquisition function, Latent Information Gain (LIG), calculated in the latent space of NP-based models.
The results demonstrate that STNP outperforms the baselines in the offline learning setting and that LIG achieves state-of-the-art performance for Bayesian active learning.
- Score: 74.58219903138301
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Stochastic simulations such as large-scale, spatiotemporal, age-structured
epidemic models are computationally expensive at fine-grained resolution. While
deep surrogate models can speed up the simulations, doing so for stochastic
simulations and with active learning approaches is an underexplored area. We
propose Interactive Neural Process (INP), a deep Bayesian active learning
framework for learning deep surrogate models to accelerate stochastic
simulations. INP consists of two components: a spatiotemporal surrogate model
built upon the Neural Process (NP) family and an acquisition function for active
learning. For surrogate modeling, we develop Spatiotemporal Neural Process
learning. For surrogate modeling, we develop Spatiotemporal Neural Process
(STNP) to mimic the simulator dynamics. For active learning, we propose a novel
acquisition function, Latent Information Gain (LIG), calculated in the latent
space of NP-based models. We perform a theoretical analysis and demonstrate
that LIG reduces sample complexity compared with random sampling in high
dimensions. We also conduct empirical studies on three complex spatiotemporal
simulators for reaction diffusion, heat flow, and infectious disease. The
results demonstrate that STNP outperforms the baselines in the offline learning
setting and that LIG achieves state-of-the-art performance for Bayesian active
learning.
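As a reading aid, the LIG acquisition can be understood as the expected KL divergence between the latent distribution after conditioning on a candidate point and the latent distribution before. The sketch below illustrates that idea under a diagonal-Gaussian assumption; it is not the paper's implementation, and all function names are invented for the example.

```python
import numpy as np

def gaussian_kl(mu_q, var_q, mu_p, var_p):
    # KL( N(mu_q, diag var_q) || N(mu_p, diag var_p) ) for diagonal Gaussians
    return 0.5 * np.sum(np.log(var_p / var_q)
                        + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0)

def latent_information_gain(prior, candidate_posteriors):
    """Estimate LIG for one candidate input.

    prior: (mu, var) of the latent z given the current context set.
    candidate_posteriors: list of (mu, var) latents, one per outcome y
        sampled from the model's predictive at the candidate; averaging
        over them Monte Carlo-approximates the expectation over y.
    """
    mu_p, var_p = prior
    kls = [gaussian_kl(mu_q, var_q, mu_p, var_p)
           for mu_q, var_q in candidate_posteriors]
    return float(np.mean(kls))
```

Active learning would then query the candidate in the pool with the largest estimated gain (an argmax over this score).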
Related papers
- Feasibility Study on Active Learning of Smart Surrogates for Scientific Simulations [4.368891765870579]
We investigate the potential of incorporating active learning into deep neural networks (DNNs) surrogate training.
This allows intelligent and objective selection of training simulations, reducing the need to generate extensive simulation data.
The results set the groundwork for developing the high-performance computing infrastructure for Smart Surrogates.
arXiv Detail & Related papers (2024-07-10T14:00:20Z)
- A Multi-Grained Symmetric Differential Equation Model for Learning Protein-Ligand Binding Dynamics [74.93549765488103]
In drug discovery, molecular dynamics simulation provides a powerful tool for predicting binding affinities, estimating transport properties, and exploring pocket sites.
We propose NeuralMD, the first machine learning surrogate that can facilitate numerical MD and provide accurate simulations in protein-ligand binding.
We show the efficiency and effectiveness of NeuralMD, with a 2000$\times$ speedup over standard numerical MD simulation and outperforming all other ML approaches by up to 80% under the stability metric.
arXiv Detail & Related papers (2024-01-26T09:35:17Z)
- Learning to Simulate: Generative Metamodeling via Quantile Regression [2.2518304637809714]
We propose a new metamodeling concept, called generative metamodeling, which aims to construct a "fast simulator of the simulator".
Once constructed, a generative metamodel can generate a large amount of random outputs as soon as the inputs are specified.
We propose a new algorithm -- quantile-regression-based generative metamodeling (QRGMM) -- and study its convergence and rate of convergence.
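To make the generation step concrete: once the conditional quantile curves are fitted (e.g. by pinball-loss regression), sampling reduces to inverse-transform sampling through the interpolated quantile function. A minimal sketch, assuming pre-fitted quantile functions; the names here are hypothetical, not QRGMM's actual API.

```python
import numpy as np

def qrgmm_sample(quantile_fns, taus, x, n_samples, rng):
    """Generate random outputs from a fitted quantile-regression metamodel.

    quantile_fns: one fitted callable q_tau(x) per level in taus
        (assumed trained beforehand, e.g. with pinball loss).
    Sampling is inverse-transform: draw u ~ U(taus[0], taus[-1]) and
    interpolate the predicted conditional quantile curve at u.
    """
    q_vals = np.array([q(x) for q in quantile_fns])  # conditional quantiles at x
    u = rng.uniform(taus[0], taus[-1], size=n_samples)
    return np.interp(u, taus, q_vals)
```

Restricting u to [taus[0], taus[-1]] avoids extrapolating beyond the fitted tail quantiles.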
arXiv Detail & Related papers (2023-11-29T16:46:24Z)
- Latent Variable Representation for Reinforcement Learning [131.03944557979725]
It remains unclear theoretically and empirically how latent variable models may facilitate learning, planning, and exploration to improve the sample efficiency of model-based reinforcement learning.
We provide a representation view of the latent variable models for state-action value functions, which allows both tractable variational learning algorithm and effective implementation of the optimism/pessimism principle.
In particular, we propose a computationally efficient planning algorithm with UCB exploration by incorporating kernel embeddings of latent variable models.
arXiv Detail & Related papers (2022-12-17T00:26:31Z)
- Neural Posterior Estimation with Differentiable Simulators [58.720142291102135]
We present a new method to perform Neural Posterior Estimation (NPE) with a differentiable simulator.
We demonstrate how gradient information helps constrain the shape of the posterior and improves sample-efficiency.
arXiv Detail & Related papers (2022-07-12T16:08:04Z)
- Multi-fidelity Hierarchical Neural Processes [79.0284780825048]
Multi-fidelity surrogate modeling reduces the computational cost by fusing different simulation outputs.
We propose Multi-fidelity Hierarchical Neural Processes (MF-HNP), a unified neural latent variable model for multi-fidelity surrogate modeling.
We evaluate MF-HNP on epidemiology and climate modeling tasks, achieving competitive performance in terms of accuracy and uncertainty estimation.
arXiv Detail & Related papers (2022-06-10T04:54:13Z)
- Automatic Evolution of Machine-Learning based Quantum Dynamics with Uncertainty Analysis [4.629634111796585]
The long short-term memory recurrent neural network (LSTM-RNN) models are used to simulate the long-time quantum dynamics.
This work builds an effective machine learning approach to simulate the dynamics evolution of open quantum systems.
arXiv Detail & Related papers (2022-05-07T08:53:55Z)
- Gone Fishing: Neural Active Learning with Fisher Embeddings [55.08537975896764]
There is an increasing need for active learning algorithms that are compatible with deep neural networks.
This article introduces BAIT, a practical, tractable, and high-performing active learning algorithm for neural networks.
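A simplified sketch in the spirit of BAIT's Fisher-based objective (not the paper's exact algorithm): greedily pick the points whose gradient embeddings minimize tr(M_S^{-1} M_U), where M_S is the regularized Fisher matrix of the selected batch and M_U that of the full pool. The function below is illustrative only.

```python
import numpy as np

def fisher_greedy_select(grads, k, lam=1e-3):
    """Greedy batch selection over gradient embeddings.

    grads: (n, d) array, one gradient embedding per pool point.
    At each step, add the point that most reduces tr(M_S^{-1} M_U),
    with M_S regularized by lam * I to stay invertible.
    """
    n, d = grads.shape
    M_U = grads.T @ grads / n          # pool Fisher (up to scaling)
    M_S = lam * np.eye(d)              # selected-set Fisher, regularized
    selected = []
    for _ in range(k):
        best, best_val = None, np.inf
        for i in range(n):
            if i in selected:
                continue
            g = grads[i:i + 1].T       # (d, 1) candidate embedding
            val = np.trace(np.linalg.solve(M_S + g @ g.T, M_U))
            if val < best_val:
                best, best_val = i, val
        selected.append(best)
        g = grads[best:best + 1].T
        M_S = M_S + g @ g.T
    return selected
```

Because the objective rewards covering directions the pool Fisher weights heavily, the greedy picks tend to be both informative and diverse rather than redundant.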
arXiv Detail & Related papers (2021-06-17T17:26:31Z)
- Learning Accurate Business Process Simulation Models from Event Logs via Automated Process Discovery and Deep Learning [0.8164433158925593]
Data-Driven Simulation (DDS) methods learn process simulation models from event logs.
Deep Learning (DL) models are able to accurately capture such temporal dynamics.
This paper presents a hybrid approach to learn process simulation models from event logs.
arXiv Detail & Related papers (2021-03-22T15:34:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.