Opportunistic Emulation of Computationally Expensive Simulations via
Deep Learning
- URL: http://arxiv.org/abs/2108.11057v1
- Date: Wed, 25 Aug 2021 05:57:16 GMT
- Title: Opportunistic Emulation of Computationally Expensive Simulations via
Deep Learning
- Authors: Conrad Sanderson, Dan Pagendam, Brendan Power, Frederick Bennett, Ross
Darnell
- Abstract summary: We investigate the use of deep neural networks for opportunistic model emulation of APSIM models.
We focus on emulating four important outputs of the APSIM model: runoff, soil_loss, DINrunoff, Nleached.
- Score: 9.13837510233406
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the underlying aim of increasing efficiency of computational modelling
pertinent for managing and protecting the Great Barrier Reef, we investigate
the use of deep neural networks for opportunistic model emulation of APSIM
models by repurposing an existing large dataset containing the outputs of APSIM
model runs. The dataset has not been specifically tailored for the model
emulation task. We employ two neural network architectures for the emulation
task: a densely connected feed-forward neural network (FFNN), and a gated
recurrent unit feeding into an FFNN (GRU-FFNN), a type of recurrent neural network.
Various configurations of the architectures are trialled. A minimum correlation
statistic is employed to identify clusters of APSIM scenarios that can be
aggregated to form training sets for model emulation. We focus on emulating
four important outputs of the APSIM model: runoff, soil_loss, DINrunoff,
Nleached. The GRU-FFNN architecture with three hidden layers and 128 units per
layer provides good emulation of runoff and DINrunoff. However, soil_loss and
Nleached were emulated relatively poorly under a wide range of the considered
architectures; the emulators failed to capture variability at higher values of
these two outputs. While the opportunistic data available from past modelling
activities provides a large and useful dataset for exploring APSIM emulation,
it may not be sufficiently rich for successful deep learning of more
complex model dynamics. Design of Computer Experiments may be required to
generate more informative data to emulate all output variables of interest. We
also suggest the use of synthetic meteorology settings to allow the model to be
fed a wide range of inputs. These need not all be representative of normal
conditions, but can provide a denser, more informative dataset from which
complex relationships between inputs and outputs can be learned.
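The paper does not include an implementation, but the best-performing configuration reported above (a GRU feeding into an FFNN with three hidden layers of 128 units) is simple enough to sketch. Below is a minimal illustration in Keras; the sequence length, input feature count, and training settings are placeholder assumptions, not values from the paper.

```python
# Minimal sketch of a GRU-FFNN emulator: a GRU summarises a sequence of
# daily driver inputs, and a densely connected head with three hidden
# layers of 128 units maps that summary to the four emulated outputs.
import tensorflow as tf

SEQ_LEN = 365     # assumption: one year of daily inputs per sample
N_FEATURES = 8    # assumption: number of input variables per time step
N_OUTPUTS = 4     # runoff, soil_loss, DINrunoff, Nleached

def build_gru_ffnn() -> tf.keras.Model:
    inputs = tf.keras.Input(shape=(SEQ_LEN, N_FEATURES))
    x = tf.keras.layers.GRU(128)(inputs)            # sequence -> fixed vector
    for _ in range(3):                              # three hidden FFNN layers
        x = tf.keras.layers.Dense(128, activation="relu")(x)
    outputs = tf.keras.layers.Dense(N_OUTPUTS)(x)   # one unit per output
    return tf.keras.Model(inputs, outputs)

model = build_gru_ffnn()
model.compile(optimizer="adam", loss="mse")         # assumed training setup
```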
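The abstract does not spell out how the minimum correlation statistic is applied. One plausible reading, sketched below purely as an assumption (the greedy rule and threshold are not from the paper), is to aggregate scenarios so that every pair within a cluster keeps its output-series correlation above a threshold, i.e. the cluster's minimum pairwise correlation never drops below it.

```python
# Hypothetical sketch of grouping APSIM scenarios by a minimum pairwise
# correlation statistic: a scenario joins a cluster only if its output
# series correlates above `threshold` with every current member.
import numpy as np

def min_correlation_clusters(outputs: np.ndarray, threshold: float = 0.8):
    """outputs: (n_scenarios, n_timesteps) array of one APSIM output series."""
    corr = np.corrcoef(outputs)                 # pairwise Pearson correlations
    clusters: list[list[int]] = []
    for i in range(outputs.shape[0]):
        for cluster in clusters:
            if all(corr[i, j] >= threshold for j in cluster):
                cluster.append(i)
                break
        else:
            clusters.append([i])                # start a new cluster
    return clusters

# Example with random data (placeholder for real APSIM output series):
rng = np.random.default_rng(0)
series = rng.normal(size=(10, 365))
print(min_correlation_clusters(series, threshold=0.2))
```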
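The suggested synthetic meteorology settings could be generated in many ways. As one hedged illustration (not the authors' procedure), a simple two-part stochastic rainfall generator can be swept over occurrence and intensity parameters, including unrealistic extremes, to densify the input space fed to the simulator.

```python
# Illustrative only: synthetic daily rainfall series that deliberately
# extend beyond historically observed conditions, densifying the inputs
# presented to the simulator. All parameter values are placeholders.
import numpy as np

rng = np.random.default_rng(42)

def synthetic_rainfall(n_days: int = 365,
                       wet_day_prob: float = 0.3,
                       mean_depth_mm: float = 12.0) -> np.ndarray:
    """Two-part model: Bernoulli wet/dry occurrence, exponential depths."""
    wet = rng.random(n_days) < wet_day_prob     # which days are wet
    depths = rng.exponential(mean_depth_mm, size=n_days)
    return wet * depths                          # zero rainfall on dry days

# Sweep occurrence and intensity parameters to build a diverse driver set.
drivers = [synthetic_rainfall(wet_day_prob=p, mean_depth_mm=m)
           for p in (0.1, 0.3, 0.6)
           for m in (5.0, 12.0, 40.0)]
```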
Related papers
- Learning from the Giants: A Practical Approach to Underwater Depth and Surface Normals Estimation [3.0516727053033392]
This paper presents a novel deep learning model for Monocular Depth and Surface Normals Estimation (MDSNE).
It is specifically tailored for underwater environments, using a hybrid architecture that integrates CNNs with Transformers.
Our model reduces parameters by 90% and training costs by 80%, allowing real-time 3D perception on resource-constrained devices.
arXiv Detail & Related papers (2024-10-02T22:41:12Z)
- POMONAG: Pareto-Optimal Many-Objective Neural Architecture Generator [4.09225917049674]
Transferable NAS has emerged, generalizing the search process from dataset-dependent to task-dependent.
This paper introduces POMONAG, extending DiffusionNAG via a many-objective diffusion process.
Results were validated on two search spaces -- NAS201 and MobileNetV3 -- and evaluated across 15 image classification datasets.
arXiv Detail & Related papers (2024-09-30T16:05:29Z)
- Accurate deep learning sub-grid scale models for large eddy simulations [0.0]
We present two families of sub-grid scale (SGS) turbulence models developed for large-eddy simulation (LES) purposes.
Their development required the formulation of physics-informed robust and efficient Deep Learning (DL) algorithms.
Explicit filtering of data from direct simulations of canonical channel flow at two friction Reynolds numbers provided accurate data for training and testing.
arXiv Detail & Related papers (2023-07-19T15:30:06Z)
- Neural Attentive Circuits [93.95502541529115]
We introduce a general-purpose, yet modular neural architecture called Neural Attentive Circuits (NACs).
NACs learn the parameterization and a sparse connectivity of neural modules without using domain knowledge.
NACs achieve an 8x speedup at inference time while losing less than 3% performance.
arXiv Detail & Related papers (2022-10-14T18:00:07Z)
- Learning Large-scale Subsurface Simulations with a Hybrid Graph Network Simulator [57.57321628587564]
We introduce Hybrid Graph Network Simulator (HGNS) for learning reservoir simulations of 3D subsurface fluid flows.
HGNS consists of a subsurface graph neural network (SGNN) to model the evolution of fluid flows, and a 3D-U-Net to model the evolution of pressure.
Using an industry-standard subsurface flow dataset (SPE-10) with 1.1 million cells, we demonstrate that HGNS is able to reduce the inference time up to 18 times compared to standard subsurface simulators.
arXiv Detail & Related papers (2022-06-15T17:29:57Z)
- Multi-fidelity Hierarchical Neural Processes [79.0284780825048]
Multi-fidelity surrogate modeling reduces the computational cost by fusing different simulation outputs.
We propose Multi-fidelity Hierarchical Neural Processes (MF-HNP), a unified neural latent variable model for multi-fidelity surrogate modeling.
We evaluate MF-HNP on epidemiology and climate modeling tasks, achieving competitive performance in terms of accuracy and uncertainty estimation.
arXiv Detail & Related papers (2022-06-10T04:54:13Z)
- An advanced spatio-temporal convolutional recurrent neural network for storm surge predictions [73.4962254843935]
We study the capability of artificial neural network models to emulate storm surge based on the storm track/size/intensity history.
This study presents a neural network model that can predict storm surge, informed by a database of synthetic storm simulations.
arXiv Detail & Related papers (2022-04-18T23:42:18Z)
- Investigating the Relationship Between Dropout Regularization and Model Complexity in Neural Networks [0.0]
Dropout Regularization serves to reduce variance in Deep Learning models.
We explore the relationship between the dropout rate and model complexity by training 2,000 neural networks.
We build neural networks that predict the optimal dropout rate given the number of hidden units in each dense layer.
arXiv Detail & Related papers (2021-08-14T23:49:33Z)
- ANNETTE: Accurate Neural Network Execution Time Estimation with Stacked Models [56.21470608621633]
We propose a time estimation framework to decouple the architectural search from the target hardware.
The proposed methodology extracts a set of models from micro-kernel and multi-layer benchmarks and generates a stacked model for mapping and network execution time estimation.
We compare estimation accuracy and fidelity of the generated mixed models, statistical models with the roofline model, and a refined roofline model for evaluation.
arXiv Detail & Related papers (2021-05-07T11:39:05Z)
- Modeling extra-deep electromagnetic logs using a deep neural network [0.415623340386296]
Modern geosteering is heavily dependent on real-time interpretation of deep electromagnetic (EM) measurements.
We present a methodology to construct a deep neural network (DNN) model trained to reproduce a full set of extra-deep EM logs.
The model is trained in a 1D layered environment consisting of up to seven layers with different resistivity values.
arXiv Detail & Related papers (2020-05-18T17:45:46Z)
- Convolutional Tensor-Train LSTM for Spatio-temporal Learning [116.24172387469994]
We propose a higher-order LSTM model that can efficiently learn long-term correlations in the video sequence.
This is accomplished through a novel tensor train module that performs prediction by combining convolutional features across time.
Our results achieve state-of-the-art performance in a wide range of applications and datasets.
arXiv Detail & Related papers (2020-02-21T05:00:01Z)