SciAI4Industry -- Solving PDEs for industry-scale problems with deep
learning
- URL: http://arxiv.org/abs/2211.12709v1
- Date: Wed, 23 Nov 2022 05:15:32 GMT
- Title: SciAI4Industry -- Solving PDEs for industry-scale problems with deep
learning
- Authors: Philipp A. Witte, Russell J. Hewett, Kumar Saurabh, AmirHossein
Sojoodi, Ranveer Chandra
- Abstract summary: We introduce a distributed programming API for simulating training data in parallel on the cloud without requiring users to manage the underlying HPC infrastructure.
We train large-scale neural networks for solving the 3D Navier-Stokes equation and simulating 3D CO2 flow in porous media.
For the CO2 example, we simulate a training dataset based on a commercial carbon capture and storage (CCS) project and train a neural network for CO2 flow simulation on a 3D grid with over 2 million cells; the trained network is 5 orders of magnitude faster than a conventional numerical simulator and 3,200 times cheaper.
- Score: 1.642765885524881
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Solving partial differential equations with deep learning makes it possible
to reduce simulation times by multiple orders of magnitude and unlock
scientific methods that typically rely on large numbers of sequential
simulations, such as optimization and uncertainty quantification. Two of the
largest challenges of adopting scientific AI for industrial problem settings are
that training datasets must be simulated in advance and that neural networks
for solving large-scale PDEs exceed the memory capabilities of current GPUs. We
introduce a distributed programming API in the Julia language for simulating
training data in parallel on the cloud, without requiring users to manage
the underlying HPC infrastructure. In addition, we show that model-parallel
deep learning based on domain decomposition allows us to scale neural networks
for solving PDEs to commercial-scale problem settings and achieve above 90%
parallel efficiency. Combining our cloud API for training data generation and
model-parallel deep learning, we train large-scale neural networks for solving
the 3D Navier-Stokes equation and simulating 3D CO2 flow in porous media. For
the CO2 example, we simulate a training dataset based on a commercial carbon
capture and storage (CCS) project and train a neural network for CO2 flow
simulation on a 3D grid with over 2 million cells; the trained network is 5
orders of magnitude faster than a conventional numerical simulator and 3,200
times cheaper.
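
The first building block described above, simulating training data in parallel on the cloud, can be illustrated with a minimal sketch using Julia's standard Distributed library. This is a stand-in under stated assumptions, not the paper's cloud API; `simulate_sample` and its parameters are hypothetical placeholders for a real numerical PDE solver.

```julia
# Minimal sketch (not the paper's API): generating PDE training data in
# parallel with Julia's standard Distributed library. In the paper's setting
# the workers would be cloud instances managed by the proposed API; here they
# are plain local worker processes.
using Distributed

addprocs(4)  # assumption: four local workers stand in for cloud batch nodes

@everywhere using Random

@everywhere function simulate_sample(params)
    rng = MersenneTwister(params[:seed])
    # A real implementation would time-step a PDE (e.g. two-phase flow in
    # porous media) for the given input parameters; random data of the right
    # shape serves as a stand-in here.
    return randn(rng, 64, 64, 64)
end

# Each worker simulates an independent subset of the training samples.
params_list = [Dict(:seed => i) for i in 1:100]
training_data = pmap(simulate_sample, params_list)
```

Because every sample is an independent forward simulation, the workload is embarrassingly parallel and scales with the number of workers.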
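The second building block, model-parallel deep learning based on domain decomposition, splits the 3D simulation grid across workers so that networks too large for a single GPU can still be trained. The single-process sketch below only illustrates the decomposition idea, assuming a slab split along one axis with one-cell halo layers; the names are illustrative and this is not the paper's implementation.

```julia
# Conceptual sketch of domain decomposition for a 3D grid. The volume is cut
# into slabs along the z-axis; each slab carries one-cell "halo" layers so a
# local stencil operator sees correct neighbouring values, as one
# model-parallel worker would after a halo exchange.

function split_with_halos(u::Array{Float64,3}, nparts::Int)
    nz = size(u, 3)
    bounds = round.(Int, LinRange(0, nz, nparts + 1))
    slabs = Vector{Array{Float64,3}}(undef, nparts)
    for p in 1:nparts
        lo = bounds[p] + 1        # first interior z-index of slab p
        hi = bounds[p + 1]        # last interior z-index of slab p
        lo_h = max(lo - 1, 1)     # extend by one halo cell where possible
        hi_h = min(hi + 1, nz)    # boundary slabs only get a halo on one side
        slabs[p] = u[:, :, lo_h:hi_h]
    end
    return slabs
end

# Example: a 64^3 volume split into 4 slabs, each processed by a local
# operator (here a simple 3-point average along z, standing in for one
# layer of a network distributed across workers).
u = randn(64, 64, 64)
slabs = split_with_halos(u, 4)
local_op(s) = (s[:, :, 1:end-2] .+ s[:, :, 2:end-1] .+ s[:, :, 3:end]) ./ 3
outputs = map(local_op, slabs)
```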
Related papers
- Learning-Based Finite Element Methods Modeling for Complex Mechanical Systems [1.6977525619006286]
Simulating complex mechanical systems is important in many real-world applications.
Recent CNN- or GNN-based simulation models still struggle to effectively represent complex mechanical systems.
In this paper, we propose a novel two-level mesh graph network.
arXiv Detail & Related papers (2024-08-30T15:56:50Z)
- Geometry-Informed Neural Operator for Large-Scale 3D PDEs [76.06115572844882]
We propose the geometry-informed neural operator (GINO) to learn the solution operator of large-scale partial differential equations.
We successfully trained GINO to predict the pressure on car surfaces using only five hundred data points.
arXiv Detail & Related papers (2023-09-01T16:59:21Z)
- Learning Controllable Adaptive Simulation for Multi-resolution Physics [86.8993558124143]
We introduce Learning controllable Adaptive simulation for Multi-resolution Physics (LAMP) as the first fully deep learning-based surrogate model.
LAMP consists of a Graph Neural Network (GNN) for learning the forward evolution, and a GNN-based actor-critic for learning the policy of spatial refinement and coarsening.
We demonstrate that our LAMP outperforms state-of-the-art deep learning surrogate models, and can adaptively trade-off computation to improve long-term prediction error.
arXiv Detail & Related papers (2023-05-01T23:20:27Z)
- NeuralStagger: Accelerating Physics-constrained Neural PDE Solver with Spatial-temporal Decomposition [67.46012350241969]
This paper proposes a general acceleration methodology called NeuralStagger.
It decomposes the original learning task into several coarser-resolution subtasks.
We demonstrate the successful application of NeuralStagger on 2D and 3D fluid dynamics simulations.
arXiv Detail & Related papers (2023-02-20T19:36:52Z)
- Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders-of-magnitude improvements in energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch.
arXiv Detail & Related papers (2022-11-19T15:44:08Z)
- Continual learning autoencoder training for a particle-in-cell simulation via streaming [52.77024349608834]
The upcoming exascale era will provide a new generation of physics simulations at high resolution.
This high resolution will impact the training of machine learning models, since storing large amounts of simulation data on disk is nearly impossible.
This work presents an approach that trains a neural network concurrently with a running simulation, without writing data to disk.
arXiv Detail & Related papers (2022-11-09T09:55:14Z)
- Learning Large-scale Subsurface Simulations with a Hybrid Graph Network Simulator [57.57321628587564]
We introduce Hybrid Graph Network Simulator (HGNS) for learning reservoir simulations of 3D subsurface fluid flows.
HGNS consists of a subsurface graph neural network (SGNN) to model the evolution of fluid flows, and a 3D-U-Net to model the evolution of pressure.
Using an industry-standard subsurface flow dataset (SPE-10) with 1.1 million cells, we demonstrate that HGNS reduces inference time by up to 18 times compared to standard subsurface simulators.
arXiv Detail & Related papers (2022-06-15T17:29:57Z)
- Training Deep Neural Networks with Constrained Learning Parameters [4.917317902787792]
A significant portion of deep learning tasks is expected to run on edge computing systems.
We propose the Combinatorial Neural Network Training Algorithm (CoNNTrA).
CoNNTrA trains deep learning models with ternary learning parameters on the MNIST, Iris and ImageNet data sets.
Our results indicate that CoNNTrA models use 32x less memory and have errors on par with backpropagation-trained models.
arXiv Detail & Related papers (2020-09-01T16:20:11Z)
- The Case for Strong Scaling in Deep Learning: Training Large 3D CNNs with Hybrid Parallelism [3.4377970608678314]
We present scalable hybrid-parallel algorithms for training large-scale 3D convolutional neural networks.
We evaluate our proposed training algorithms with two challenging 3D CNNs, CosmoFlow and 3D U-Net.
arXiv Detail & Related papers (2020-07-25T05:06:06Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.