Deep learning approaches to surrogates for solving the diffusion
equation for mechanistic real-world simulations
- URL: http://arxiv.org/abs/2102.05527v1
- Date: Wed, 10 Feb 2021 16:15:17 GMT
- Title: Deep learning approaches to surrogates for solving the diffusion
equation for mechanistic real-world simulations
- Authors: J. Quetzalcóatl Toledo-Marín, Geoffrey Fox, James P. Sluka, James
A. Glazier
- Abstract summary: In medical, biological, physical and engineered models the numerical solution of partial differential equations (PDEs) can make simulations impractically slow.
Machine learning surrogates, neural networks trained to provide approximate solutions to such complicated numerical problems, can often provide speed-ups of several orders of magnitude compared to direct calculation.
We use a Convolutional Neural Network to approximate the stationary solution to the diffusion equation in the case of two equal-diameter, circular, constant-value sources.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In many mechanistic medical, biological, physical and engineered
spatiotemporal dynamic models the numerical solution of partial differential
equations (PDEs) can make simulations impractically slow. Biological models
require the simultaneous calculation of the spatial variation of concentration
of dozens of diffusing chemical species. Machine learning surrogates, neural
networks trained to provide approximate solutions to such complicated numerical
problems, can often provide speed-ups of several orders of magnitude compared
to direct calculation. PDE surrogates enable use of larger models than are
possible with direct calculation and can make including such simulations in
real-time or near-real time workflows practical. Creating a surrogate requires
running the direct calculation tens of thousands of times to generate training
data and then training the neural network, both of which are computationally
expensive. We use a Convolutional Neural Network to approximate the stationary
solution to the diffusion equation in the case of two equal-diameter, circular,
constant-value sources located at random positions in a two-dimensional square
domain with absorbing boundary conditions. To improve convergence during
training, we apply a training approach that uses roll-back to reject stochastic
changes to the network that increase the loss function. The trained neural
network approximation is about 1e3 times faster than the direct calculation for
individual replicas. Because different applications will have different
criteria for acceptable approximation accuracy, we discuss a variety of loss
functions and accuracy estimators that can help select the best network for a
particular application.
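The direct calculation the surrogate is trained to replace can be illustrated with a minimal NumPy sketch: Jacobi relaxation toward the stationary diffusion (Laplace) solution on a square grid, with two equal-diameter circular sources held at a constant value and absorbing (zero-Dirichlet) boundaries. The grid size, source radius, source positions, and iteration count below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def stationary_diffusion(n=64, radius=4, centers=((20, 20), (44, 44)),
                         source_value=1.0, n_iter=5000):
    """Relax toward the stationary diffusion solution on an n x n grid.

    Two circular regions (assumed positions/radius) are clamped to a
    constant value; the domain boundary is absorbing (held at zero).
    """
    u = np.zeros((n, n))
    yy, xx = np.mgrid[0:n, 0:n]
    # Boolean mask marking the two circular constant-value sources.
    src = np.zeros((n, n), dtype=bool)
    for cy, cx in centers:
        src |= (yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2
    for _ in range(n_iter):
        u[src] = source_value  # re-clamp the sources each sweep
        # Jacobi update: each interior cell becomes the mean of its four
        # neighbours; the outermost rows/columns are never written, so the
        # absorbing (zero) boundary condition is preserved.
        u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1]
                                + u[1:-1, :-2] + u[1:-1, 2:])
    u[src] = source_value
    return u

field = stationary_diffusion()
```

Sampling the source centers at random and running this solver once per sample is what makes training-set generation expensive; the trained CNN amortizes that cost across all subsequent queries.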
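The roll-back training idea, rejecting stochastic updates that increase the loss, can be sketched on a toy least-squares problem standing in for the CNN surrogate. The model, learning rate, minibatch size, and step count are all illustrative assumptions; only the accept/roll-back rule reflects the approach described in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression problem standing in for the surrogate network.
X = rng.normal(size=(128, 8))
true_w = rng.normal(size=8)
y = X @ true_w

def loss(w):
    return float(np.mean((X @ w - y) ** 2))

w = np.zeros(8)
init_loss = loss(w)
best = init_loss
for step in range(2000):
    # Stochastic update: a noisy gradient step on a random minibatch.
    idx = rng.choice(len(X), size=32, replace=False)
    grad = 2.0 * X[idx].T @ (X[idx] @ w - y[idx]) / len(idx)
    trial = w - 0.01 * grad
    trial_loss = loss(trial)
    if trial_loss <= best:
        w, best = trial, trial_loss  # accept the improving move
    # ...otherwise roll back: keep the previous weights unchanged.
```

Because every accepted step is checked against the full loss, the training trajectory is monotonically non-increasing, which is the convergence benefit the roll-back scheme targets.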
Related papers
- Solving partial differential equations with sampled neural networks [1.8590821261905535]
Approximation of solutions to partial differential equations (PDE) is an important problem in computational science and engineering.
We discuss how sampling the hidden weights and biases of the ansatz network from data-agnostic and data-dependent probability distributions allows us to progress on both challenges.
arXiv Detail & Related papers (2024-05-31T14:24:39Z) - Learning-based Multi-continuum Model for Multiscale Flow Problems [24.93423649301792]
We propose a learning-based multi-continuum model to enrich the homogenized equation and improve the accuracy of the single model for multiscale problems.
Our proposed learning-based multi-continuum model can resolve multiple interacted media within each coarse grid block and describe the mass transfer among them.
arXiv Detail & Related papers (2024-03-21T02:30:56Z) - PMNN:Physical Model-driven Neural Network for solving time-fractional
differential equations [17.66402435033991]
An innovative Physical Model-driven Neural Network (PMNN) method is proposed to solve time-fractional differential equations.
It effectively combines deep neural networks (DNNs) with approximation of fractional derivatives.
arXiv Detail & Related papers (2023-10-07T12:43:32Z) - Locally Regularized Neural Differential Equations: Some Black Boxes Were
Meant to Remain Closed! [3.222802562733787]
Implicit layer deep learning techniques, like Neural Differential Equations, have become an important modeling framework.
We develop two sampling strategies to trade off between performance and training time.
Our method reduces the number of function evaluations to 0.556-0.733x of the baseline and accelerates predictions by 1.3-2x.
arXiv Detail & Related papers (2023-03-03T23:31:15Z) - NeuralStagger: Accelerating Physics-constrained Neural PDE Solver with
Spatial-temporal Decomposition [67.46012350241969]
This paper proposes a general acceleration methodology called NeuralStagger.
It decomposes the original learning task into several coarser-resolution subtasks.
We demonstrate the successful application of NeuralStagger on 2D and 3D fluid dynamics simulations.
arXiv Detail & Related papers (2023-02-20T19:36:52Z) - Semi-supervised Learning of Partial Differential Operators and Dynamical
Flows [68.77595310155365]
We present a novel method that combines a hyper-network solver with a Fourier Neural Operator architecture.
We test our method on various time evolution PDEs, including nonlinear fluid flows in one, two, and three spatial dimensions.
The results show that the new method improves the learning accuracy at the supervision time points and can interpolate the solutions to any intermediate time.
arXiv Detail & Related papers (2022-07-28T19:59:14Z) - Training Feedback Spiking Neural Networks by Implicit Differentiation on
the Equilibrium State [66.2457134675891]
Spiking neural networks (SNNs) are brain-inspired models that enable energy-efficient implementation on neuromorphic hardware.
Most existing methods imitate the backpropagation framework and feedforward architectures for artificial neural networks.
We propose a novel training method that does not rely on the exact reverse of the forward computation.
arXiv Detail & Related papers (2021-09-29T07:46:54Z) - Large-scale Neural Solvers for Partial Differential Equations [48.7576911714538]
Solving partial differential equations (PDE) is an indispensable part of many branches of science as many processes can be modelled in terms of PDEs.
Recent numerical solvers require manual discretization of the underlying equation as well as sophisticated, tailored code for distributed computing.
We examine the applicability of continuous, mesh-free neural solvers for partial differential equations, known as physics-informed neural networks (PINNs).
We discuss the accuracy of GatedPINN with respect to analytical solutions -- as well as state-of-the-art numerical solvers, such as spectral solvers.
arXiv Detail & Related papers (2020-09-08T13:26:51Z) - Multipole Graph Neural Operator for Parametric Partial Differential
Equations [57.90284928158383]
One of the main challenges in using deep learning-based methods for simulating physical systems is formulating physics-based data.
We propose a novel multi-level graph neural network framework that captures interaction at all ranges with only linear complexity.
Experiments confirm our multi-graph network learns discretization-invariant solution operators to PDEs and can be evaluated in linear time.
arXiv Detail & Related papers (2020-06-16T21:56:22Z) - Communication-Efficient Distributed Stochastic AUC Maximization with
Deep Neural Networks [50.42141893913188]
We study distributed stochastic AUC maximization at large scale with a deep neural network.
Our method requires far fewer communication rounds in theory.
Experiments on several datasets confirm the theory and show the effectiveness of the method.
arXiv Detail & Related papers (2020-05-05T18:08:23Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.