Building high accuracy emulators for scientific simulations with deep
neural architecture search
- URL: http://arxiv.org/abs/2001.08055v2
- Date: Thu, 8 Oct 2020 14:42:35 GMT
- Title: Building high accuracy emulators for scientific simulations with deep
neural architecture search
- Authors: M. F. Kasim, D. Watson-Parris, L. Deaconu, S. Oliver, P. Hatfield, D.
H. Froula, G. Gregori, M. Jarvis, S. Khatiwala, J. Korenaga, J.
Topp-Mugglestone, E. Viezzer, S. M. Vinko
- Abstract summary: A promising route to accelerate simulations by building fast emulators with machine learning requires large training datasets.
Here we present a method based on neural architecture search to build accurate emulators even with a limited number of training data.
The method successfully accelerates simulations by up to 2 billion times in 10 scientific cases including astrophysics, climate science, biogeochemistry, high energy density physics, fusion energy, and seismology.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Computer simulations are invaluable tools for scientific discovery. However,
accurate simulations are often slow to execute, which limits their
applicability to extensive parameter exploration, large-scale data analysis,
and uncertainty quantification. A promising route to accelerate simulations by
building fast emulators with machine learning requires large training datasets,
which can be prohibitively expensive to obtain with slow simulations. Here we
present a method based on neural architecture search to build accurate
emulators even with a limited number of training data. The method successfully
accelerates simulations by up to 2 billion times in 10 scientific cases
including astrophysics, climate science, biogeochemistry, high energy density
physics, fusion energy, and seismology, using the same super-architecture,
algorithm, and hyperparameters. Our approach also inherently provides emulator
uncertainty estimation, adding further confidence in their use. We anticipate
this work will accelerate research involving expensive simulations, allow more
extensive parameter exploration, and enable new, previously unfeasible
computational discovery.
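The abstract's core recipe, searching over model configurations on a small training set and reading uncertainty off an ensemble, can be caricatured in a few lines. The sketch below is purely illustrative: the toy simulator, the polynomial "architectures", and the bootstrap ensemble are stand-ins, not the paper's super-architecture or algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical cheap stand-in for an expensive scientific simulation.
def simulate(x):
    return np.sin(3 * x) + 0.5 * x**2

# Limited training data, as in the paper's setting.
x_train = rng.uniform(-1, 1, 40)
y_train = simulate(x_train)
x_val = rng.uniform(-1, 1, 20)
y_val = simulate(x_val)

# "Architecture search": pick the model configuration (here just the
# polynomial degree) with the lowest validation error.
val_err = {
    d: np.mean((np.polyval(np.polyfit(x_train, y_train, d), x_val) - y_val) ** 2)
    for d in range(1, 8)
}
best = min(val_err, key=val_err.get)

# A bootstrap ensemble of the chosen model gives a crude uncertainty estimate.
preds = []
for _ in range(20):
    idx = rng.integers(0, len(x_train), len(x_train))
    coeffs = np.polyfit(x_train[idx], y_train[idx], best)
    preds.append(np.polyval(coeffs, 0.3))
mean, std = float(np.mean(preds)), float(np.std(preds))
print(f"best degree = {best}, emulator(0.3) = {mean:.3f} +/- {std:.3f}")
```

Once trained, the emulator replaces the slow simulator inside parameter sweeps or uncertainty-quantification loops, with the ensemble spread flagging inputs where its predictions should not be trusted.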
Related papers
- Embed and Emulate: Contrastive representations for simulation-based inference [11.543221890134399]
This paper introduces Embed and Emulate (E&E), a new simulation-based inference (SBI) method based on contrastive learning.
E&E learns a low-dimensional latent embedding of the data and a corresponding fast emulator in the latent space.
We demonstrate superior performance over existing methods in a realistic, non-identifiable parameter estimation task.
arXiv Detail & Related papers (2024-09-27T02:37:01Z)
- Rethinking materials simulations: Blending direct numerical simulations with neural operators [1.6874375111244329]
We develop a new method that blends numerical solvers with neural operators to accelerate such simulations.
We demonstrate the effectiveness of this framework on simulations of microstructure evolution during physical vapor deposition.
arXiv Detail & Related papers (2023-12-08T23:44:54Z)
- Simulating Quantum Computations on Classical Machines: A Survey [0.0]
We study an exhaustive set of 150+ simulators and quantum libraries.
We short-list the simulators that are actively maintained and enable simulation of quantum algorithms for more than 10 qubits.
We provide a taxonomy of the most important simulation methods, namely Schrödinger-based, Feynman path integrals, Heisenberg-based, and hybrid methods.
arXiv Detail & Related papers (2023-11-28T04:48:15Z)
- Waymax: An Accelerated, Data-Driven Simulator for Large-Scale Autonomous Driving Research [76.93956925360638]
Waymax is a new data-driven simulator for autonomous driving in multi-agent scenes.
It runs entirely on hardware accelerators such as TPUs/GPUs and supports in-graph simulation for training.
We benchmark a suite of popular imitation and reinforcement learning algorithms with ablation studies on different design decisions.
arXiv Detail & Related papers (2023-10-12T20:49:15Z)
- Towards Complex Dynamic Physics System Simulation with Graph Neural ODEs [75.7104463046767]
This paper proposes a novel learning-based simulation model that characterizes the varying spatial and temporal dependencies in particle systems.
We empirically evaluate GNSTODE's simulation performance on two real-world particle systems, Gravity and Coulomb.
arXiv Detail & Related papers (2023-05-21T03:51:03Z)
- Continual learning autoencoder training for a particle-in-cell simulation via streaming [52.77024349608834]
The upcoming exascale era will provide a new generation of physics simulations with high resolution.
This high resolution will impact the training of machine learning models, since storing such large amounts of simulation data on disk is nearly impossible.
This work presents an approach that trains a neural network concurrently to a running simulation without data on a disk.
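As a toy sketch of that training pattern (the generator and the one-parameter linear model below are illustrative, not the paper's autoencoder setup), a model can consume simulation samples from a generator as they are produced, so nothing is ever written to disk:

```python
def simulation_stream(n_steps):
    # Stand-in for a running simulation that yields one sample at a time;
    # samples are consumed immediately and never stored on disk.
    for step in range(n_steps):
        x = step / n_steps
        yield x, 2.0 * x          # the "simulation" output we want to learn

# Train incrementally (online SGD) on each sample as it arrives.
w = 0.0
for x, y in simulation_stream(1000):
    w -= 0.5 * (w * x - y) * x    # gradient step on the squared error
print(round(w, 2))                # w converges toward the true slope 2.0
```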
arXiv Detail & Related papers (2022-11-09T09:55:14Z)
- Simulation-Based Parallel Training [55.41644538483948]
We present our ongoing work to design a training framework that alleviates those bottlenecks.
It generates data in parallel with the training process.
We present a strategy to mitigate this bias with a memory buffer.
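The buffer idea can be illustrated with a minimal sketch (the class and its random-eviction policy are hypothetical, not the paper's design): training on the newest samples alone biases the model toward the most recently generated region of parameter space, so batches are drawn from a mix of old and new samples instead.

```python
import random

random.seed(0)

class MemoryBuffer:
    """Fixed-size store of past simulation samples (illustrative sketch)."""
    def __init__(self, capacity=100):
        self.capacity = capacity
        self.data = []

    def add(self, sample):
        self.data.append(sample)
        if len(self.data) > self.capacity:
            # Evict a random element so old samples can survive indefinitely.
            self.data.pop(random.randrange(len(self.data)))

    def batch(self, size):
        # Training batches mix old and new samples, reducing recency bias.
        return random.sample(self.data, min(size, len(self.data)))

buf = MemoryBuffer(capacity=50)
for step in range(200):           # stream of freshly simulated samples
    buf.add(step)
batch = buf.batch(16)
print(len(buf.data), len(batch))  # buffer stays at capacity
```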
arXiv Detail & Related papers (2022-11-08T09:31:25Z)
- Online Planning in POMDPs with Self-Improving Simulators [17.722070992253638]
We learn online an approximate but much faster simulator that improves over time.
To plan reliably and efficiently while the approximate simulator is learning, we develop a method that adaptively decides which simulator to use for every simulation.
Experimental results in two large domains show that when integrated with POMCP, our approach allows planning with improving efficiency over time.
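One way to picture the adaptive choice (a hypothetical sketch, not the paper's method): keep a running error estimate for the learned simulator and fall back to the exact one until that estimate drops below a tolerance.

```python
import random

random.seed(1)

def exact_sim(state, action):
    # Stand-in for the slow but trusted simulator.
    return state + action

class ApproxSim:
    """Fast learned surrogate with a running error estimate (hypothetical)."""
    def __init__(self):
        self.bias = 0.5            # starts off inaccurate
        self.err = float("inf")    # running estimate of its own error

    def step(self, state, action):
        return state + action + self.bias

    def learn(self, state, action, true_next):
        pred = self.step(state, action)
        self.bias -= 0.2 * (pred - true_next)   # shrink toward the exact sim
        e = abs(pred - true_next)
        self.err = e if self.err == float("inf") else 0.9 * self.err + 0.1 * e

def adaptive_step(approx, state, action, tol=0.05):
    # Use the fast surrogate only once its estimated error is within tolerance;
    # otherwise call the exact simulator and learn from it.
    if approx.err < tol:
        return approx.step(state, action), "approx"
    nxt = exact_sim(state, action)
    approx.learn(state, action, nxt)
    return nxt, "exact"

approx = ApproxSim()
used = [adaptive_step(approx, random.random(), random.random())[1]
        for _ in range(50)]
print(used.count("exact"), used.count("approx"))
```

Early simulations go through the exact model while the surrogate trains; later ones run on the cheap surrogate, so planning gets faster over time.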
arXiv Detail & Related papers (2022-01-27T09:41:59Z)
- Robot Learning from Randomized Simulations: A Review [59.992761565399185]
Deep learning has caused a paradigm shift in robotics research, favoring methods that require large amounts of data.
State-of-the-art approaches learn in simulation where data generation is fast as well as inexpensive.
We focus on a technique named 'domain randomization' which is a method for learning from randomized simulations.
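In spirit (the parameter names and ranges below are illustrative, not taken from the review), domain randomization simply means sampling simulator parameters afresh for each training episode instead of fixing them:

```python
import random

random.seed(0)

def sample_sim_params():
    # Draw physics parameters from ranges rather than fixed values, so a
    # policy trained across many draws transfers better to the real system.
    return {
        "mass": random.uniform(0.8, 1.2),         # kg
        "friction": random.uniform(0.1, 0.5),
        "sensor_noise": random.uniform(0.0, 0.02),
    }

# Each training episode runs in a freshly randomized simulator instance.
episodes = [sample_sim_params() for _ in range(100)]
masses = [p["mass"] for p in episodes]
print(min(masses), max(masses))
```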
arXiv Detail & Related papers (2021-11-01T13:55:41Z)
- Deep Bayesian Active Learning for Accelerating Stochastic Simulation [74.58219903138301]
Interactive Neural Process (INP) is a deep active learning framework for stochastic simulations.
For active learning, we propose a novel acquisition function, Latent Information Gain (LIG), calculated in the latent space of NP based models.
The results demonstrate that STNP outperforms the baselines in the learning setting and that LIG achieves state-of-the-art performance for active learning.
arXiv Detail & Related papers (2021-06-05T01:31:51Z)
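A common flavor of such acquisition functions can be sketched generically (this is a plain variance-based heuristic, not LIG itself, which is computed in the latent space of a neural-process model): query the simulator at the candidate input where an ensemble of surrogate models disagrees most.

```python
import numpy as np

rng = np.random.default_rng(0)

def expensive_sim(x):
    # Stand-in for the slow stochastic simulation we want to learn.
    return np.sin(4 * x)

# A few labelled points and a small bootstrap-style ensemble of quadratic fits.
x_obs = np.array([0.0, 0.3, 1.0])
y_obs = expensive_sim(x_obs)
models = [np.polyfit(x_obs, y_obs + rng.normal(0, 0.05, 3), 2)
          for _ in range(10)]

# Acquisition: score candidate inputs by ensemble disagreement and query
# the simulator where the predictive variance is largest.
candidates = np.linspace(0, 1, 101)
preds = np.stack([np.polyval(m, candidates) for m in models])
best_x = candidates[np.argmax(preds.var(axis=0))]
print(best_x)
```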
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content shown here (including all information) and is not responsible for any consequences.