STENCIL-NET: Data-driven solution-adaptive discretization of partial
differential equations
- URL: http://arxiv.org/abs/2101.06182v2
- Date: Mon, 18 Jan 2021 10:31:17 GMT
- Title: STENCIL-NET: Data-driven solution-adaptive discretization of partial
differential equations
- Authors: Suryanarayana Maddu, Dominik Sturm, Bevan L. Cheeseman, Christian L.
Müller, Ivo F. Sbalzarini
- Abstract summary: We present STENCIL-NET, an artificial neural network architecture for data-driven learning of problem- and resolution-specific local discretizations of nonlinear PDEs.
Knowing the actual PDE is not necessary, as solution data is sufficient to train the network to learn the discrete operators.
A once-trained STENCIL-NET model can be used to predict solutions of the PDE on larger domains and for longer times than it was trained for.
- Score: 2.362412515574206
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Numerical methods for approximately solving partial differential equations
(PDE) are at the core of scientific computing. Often, this requires
high-resolution or adaptive discretization grids to capture relevant
spatio-temporal features in the PDE solution, e.g., in applications like
turbulence, combustion, and shock propagation. Numerical approximation also
requires knowing the PDE in order to construct problem-specific
discretizations. Systematically deriving such solution-adaptive discrete
operators, however, is a current challenge. Here we present STENCIL-NET, an
artificial neural network architecture for data-driven learning of problem- and
resolution-specific local discretizations of nonlinear PDEs. STENCIL-NET
achieves numerically stable discretization of the operators in an unknown
nonlinear PDE by spatially and temporally adaptive parametric pooling on
regular Cartesian grids, and by incorporating knowledge about discrete time
integration. Knowing the actual PDE is not necessary, as solution data is
sufficient to train the network to learn the discrete operators. A once-trained
STENCIL-NET model can be used to predict solutions of the PDE on larger spatial
domains and for longer times than it was trained for, hence addressing the
problem of PDE-constrained extrapolation from data. To support this claim, we
present numerical experiments on long-term forecasting of chaotic PDE solutions
on coarse spatio-temporal grids. We also quantify the speed-up achieved by
substituting base-line numerical methods with equation-free STENCIL-NET
predictions on coarser grids with little compromise on accuracy.
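The core idea of the abstract above — a learned local operator that maps each grid point's neighborhood to a time derivative, advanced with a known discrete time integrator — can be illustrated with a minimal sketch. This is a hypothetical stand-in (a tiny random-weight MLP stencil with forward-Euler stepping), not the authors' architecture, which uses spatially and temporally adaptive parametric pooling:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "neural stencil": a tiny MLP mapping a 5-point local window
# of solution values to the time derivative at the window's center point.
W1 = rng.normal(0, 0.1, (5, 16))   # stencil width 5 -> 16 hidden units
b1 = np.zeros(16)
W2 = rng.normal(0, 0.1, (16, 1))
b2 = np.zeros(1)

def learned_rhs(u):
    """Apply the learned stencil at every grid point (periodic boundary)."""
    # Gather the 5-point neighborhood of every grid point via circular shifts.
    windows = np.stack([np.roll(u, s) for s in (2, 1, 0, -1, -2)], axis=1)
    h = np.tanh(windows @ W1 + b1)
    return (h @ W2 + b2).ravel()   # du/dt estimate at each grid point

def step(u, dt):
    """Forward-Euler step; the paper incorporates knowledge of the discrete
    time integrator into training, so prediction reuses the same scheme."""
    return u + dt * learned_rhs(u)

# Roll the (here: untrained, random-weight) model forward on a coarse grid.
u = np.sin(2 * np.pi * np.linspace(0, 1, 64, endpoint=False))
for _ in range(10):
    u = step(u, dt=1e-3)
print(u.shape)  # (64,)
```

In practice the stencil weights would be trained by matching rolled-out predictions against solution data; here they are random, so the example only demonstrates the data flow of a learned local discretization.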
Related papers
- Adaptation of uncertainty-penalized Bayesian information criterion for parametric partial differential equation discovery [1.1049608786515839]
We introduce an extension of the uncertainty-penalized Bayesian information criterion (UBIC) to solve parametric PDE discovery problems efficiently.
UBIC uses quantified PDE uncertainty over different temporal or spatial points to prevent overfitting in model selection.
We show that our extended UBIC can identify the true number of terms and their varying coefficients accurately, even in the presence of noise.
arXiv Detail & Related papers (2024-08-15T12:10:50Z)
- Solving partial differential equations with sampled neural networks [1.8590821261905535]
Approximation of solutions to partial differential equations (PDE) is an important problem in computational science and engineering.
We discuss how sampling the hidden weights and biases of the ansatz network from data-agnostic and data-dependent probability distributions allows us to progress on both challenges.
arXiv Detail & Related papers (2024-05-31T14:24:39Z)
- Deep Equilibrium Based Neural Operators for Steady-State PDEs [100.88355782126098]
We study the benefits of weight-tied neural network architectures for steady-state PDEs.
We propose FNO-DEQ, a deep equilibrium variant of the FNO architecture that directly solves for the solution of a steady-state PDE.
arXiv Detail & Related papers (2023-11-30T22:34:57Z)
- A Stable and Scalable Method for Solving Initial Value PDEs with Neural Networks [52.5899851000193]
We develop an ODE-based IVP solver that prevents the network from becoming ill-conditioned and runs in time linear in the number of parameters.
We show that current methods based on this approach suffer from two key issues.
First, following the ODE produces an uncontrolled growth in the conditioning of the problem, ultimately leading to unacceptably large numerical errors.
arXiv Detail & Related papers (2023-04-28T17:28:18Z)
- Monte Carlo Neural PDE Solver for Learning PDEs via Probabilistic Representation [59.45669299295436]
We propose a Monte Carlo PDE solver for training unsupervised neural solvers.
We use the PDEs' probabilistic representation, which regards macroscopic phenomena as ensembles of random particles.
Our experiments on convection-diffusion, Allen-Cahn, and Navier-Stokes equations demonstrate significant improvements in accuracy and efficiency.
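The probabilistic representation mentioned in this summary can be made concrete for the heat equation, where the Feynman-Kac formula expresses the solution as an expectation over Brownian particles. The following sketch illustrates that representation alone under stated assumptions (the paper's neural-solver training loop is omitted):

```python
import numpy as np

rng = np.random.default_rng(1)

# Probabilistic representation of the heat equation u_t = nu * u_xx:
# by Feynman-Kac, u(x, t) = E[ u0(x + sqrt(2*nu*t) * Z) ] with Z ~ N(0, 1).
# A Monte Carlo solver estimates this expectation over random particles.
nu, t = 0.1, 1.0
u0 = np.sin                       # initial condition u(x, 0) = sin(x)

def u_mc(x, n_particles=200_000):
    """Monte Carlo estimate of u(x, t) by averaging over random particles."""
    z = rng.standard_normal(n_particles)
    return u0(x + np.sqrt(2 * nu * t) * z).mean()

x = 0.7
exact = np.exp(-nu * t) * np.sin(x)   # closed-form solution for comparison
print(abs(u_mc(x) - exact) < 1e-2)    # True up to Monte Carlo noise
```

The Monte Carlo estimate is mesh-free and embarrassingly parallel, which is what makes it attractive as a supervision signal for unsupervised neural solvers.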
arXiv Detail & Related papers (2023-02-10T08:05:19Z)
- Meta-PDE: Learning to Solve PDEs Quickly Without a Mesh [24.572840023107574]
Partial differential equations (PDEs) are often computationally challenging to solve.
We present a meta-learning based method which learns to rapidly solve problems from a distribution of related PDEs.
arXiv Detail & Related papers (2022-11-03T06:17:52Z)
- High Precision Differentiation Techniques for Data-Driven Solution of Nonlinear PDEs by Physics-Informed Neural Networks [0.0]
This paper considers time-dependent partial differential equations with given initial conditions.
New techniques are proposed for differentiating the unknown solution with respect to the time variable.
arXiv Detail & Related papers (2022-10-02T13:36:01Z)
- Lie Point Symmetry Data Augmentation for Neural PDE Solvers [69.72427135610106]
We present a method that can partially alleviate this problem by improving neural PDE solver sample complexity.
In the context of PDEs, it turns out that we are able to quantitatively derive an exhaustive list of data transformations.
We show how it can easily be deployed to improve neural PDE solver sample complexity by an order of magnitude.
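As an illustration of symmetry-based augmentation (a hedged sketch, not the paper's method): the heat equation u_t = u_xx admits a scaling symmetry, so any known solution u(x, t) yields new valid training samples u(λx, λ²t). A finite-difference check confirms the transformed field still satisfies the PDE:

```python
import numpy as np

def exact_heat(x, t):
    """One particular solution of the heat equation u_t = u_xx."""
    return np.exp(-t) * np.sin(x)

lam = 2.0  # scaling parameter of the Lie point symmetry

def v(x, t):
    """Augmented sample: if u solves u_t = u_xx, so does u(lam*x, lam**2*t)."""
    return exact_heat(lam * x, lam**2 * t)

# Verify numerically that the transformed field still solves the PDE.
x, t, h = 0.3, 0.5, 1e-4
v_t  = (v(x, t + h) - v(x, t - h)) / (2 * h)                  # time derivative
v_xx = (v(x + h, t) - 2 * v(x, t) + v(x - h, t)) / h**2       # space derivative
print(abs(v_t - v_xx) < 1e-5)  # True: the augmented field is a valid solution
```

Each symmetry transformation thus turns one simulated trajectory into a family of training examples at no extra simulation cost.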
arXiv Detail & Related papers (2022-02-15T18:43:17Z)
- PhyCRNet: Physics-informed Convolutional-Recurrent Network for Solving Spatiotemporal PDEs [8.220908558735884]
Partial differential equations (PDEs) play a fundamental role in modeling and simulating problems across a wide range of disciplines.
Recent advances in deep learning have shown the great potential of physics-informed neural networks (NNs) to solve PDEs as a basis for data-driven inverse analysis.
We propose novel physics-informed convolutional-recurrent learning architectures (PhyCRNet and PhyCRNet-s) for solving PDEs without any labeled data.
arXiv Detail & Related papers (2021-06-26T22:22:19Z)
- dNNsolve: an efficient NN-based PDE solver [62.997667081978825]
We introduce dNNsolve, which uses dual neural networks to solve ODEs/PDEs.
We show that dNNsolve is capable of solving a broad range of ODEs/PDEs in 1, 2 and 3 spacetime dimensions.
arXiv Detail & Related papers (2021-03-15T19:14:41Z)
- Large-scale Neural Solvers for Partial Differential Equations [48.7576911714538]
Solving partial differential equations (PDE) is an indispensable part of many branches of science as many processes can be modelled in terms of PDEs.
Recent numerical solvers require manual discretization of the underlying equation as well as sophisticated, tailored code for distributed computing.
We examine the applicability of continuous, mesh-free neural solvers for partial differential equations, namely physics-informed neural networks (PINNs).
We discuss the accuracy of GatedPINN with respect to analytical solutions -- as well as state-of-the-art numerical solvers, such as spectral solvers.
arXiv Detail & Related papers (2020-09-08T13:26:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.