Composing Partial Differential Equations with Physics-Aware Neural
Networks
- URL: http://arxiv.org/abs/2111.11798v1
- Date: Tue, 23 Nov 2021 11:27:13 GMT
- Title: Composing Partial Differential Equations with Physics-Aware Neural
Networks
- Authors: Matthias Karlbauer, Timothy Praditia, Sebastian Otte, Sergey
Oladyshkin, Wolfgang Nowak, and Martin V. Butz
- Abstract summary: We introduce a physics-aware finite volume neural network (FINN) for learning advection-diffusion processes.
With only one tenth of the number of parameters on average, FINN outperforms machine learning and other state-of-the-art physics-aware models.
- Score: 0.831246680772592
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We introduce a compositional physics-aware finite volume neural
network (FINN) for learning spatiotemporal advection-diffusion processes. FINN implements a new way of
combining the learning abilities of artificial neural networks with physical
and structural knowledge from numerical simulation by modeling the constituents
of partial differential equations (PDEs) in a compositional manner. Results on
both one- and two-dimensional PDEs (Burgers', diffusion-sorption,
diffusion-reaction, Allen-Cahn) demonstrate FINN's superior modeling accuracy
and excellent out-of-distribution generalization ability beyond initial and
boundary conditions. With only one tenth of the number of parameters on
average, FINN outperforms pure machine learning and other state-of-the-art
physics-aware models in all cases -- often even by multiple orders of
magnitude. Moreover, FINN outperforms a calibrated physical model when
approximating sparse real-world data in a diffusion-sorption scenario,
confirming its generalization abilities and showing explanatory potential by
revealing the unknown retardation factor of the observed process.
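To picture the compositional idea, here is a minimal NumPy sketch of composing separate diffusive and advective constituents of a 1-D advection-diffusion update. All names, discretizations, and coefficients are illustrative, not the paper's architecture; in FINN, learnable modules would stand in for parts of these hand-coded flux terms.

```python
import numpy as np

def diffusive_flux(u, D, dx):
    # second-order central difference approximating D * u_xx
    # (zero-gradient boundaries via edge padding)
    u_pad = np.pad(u, 1, mode="edge")
    return D * (u_pad[2:] - 2.0 * u_pad[1:-1] + u_pad[:-2]) / dx**2

def advective_flux(u, v, dx):
    # first-order upwind difference approximating -v * u_x (assumes v > 0)
    u_pad = np.pad(u, 1, mode="edge")
    return -v * (u_pad[1:-1] - u_pad[:-2]) / dx

def step(u, D, v, dx, dt):
    # explicit Euler step composed from the two PDE constituents
    return u + dt * (diffusive_flux(u, D, dx) + advective_flux(u, v, dx))

# usage: advect and diffuse a Gaussian bump on a small grid
x = np.linspace(0.0, 1.0, 51)
u = np.exp(-200.0 * (x - 0.3) ** 2)
for _ in range(100):
    u = step(u, D=1e-3, v=0.5, dx=x[1] - x[0], dt=1e-3)
```

Because each constituent is a separate module, structural knowledge (e.g. which terms appear in the PDE) is enforced by construction while unknown pieces can be learned.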
Related papers
- Enhanced Spatiotemporal Prediction Using Physical-guided And Frequency-enhanced Recurrent Neural Networks [17.91230192726962]
This paper proposes a physical-guided neural network to estimate the spatiotemporal dynamics.
We also propose an adaptive second-order Runge-Kutta method with physical constraints to model the physical states more precisely.
Our model outperforms state-of-the-art methods and performs best across datasets, with a much smaller parameter count.
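The second-order Runge-Kutta step mentioned above can be sketched as a Heun update with a physical constraint applied as a projection. This is a toy illustration, not the paper's adaptive, physics-constrained scheme; the ODE, constraint, and step size are made up.

```python
import numpy as np

def heun_step(f, y, t, dt, project=lambda y: y):
    # second-order Runge-Kutta (Heun's method) with an optional
    # projection enforcing a physical constraint after each step
    k1 = f(t, y)
    k2 = f(t + dt, y + dt * k1)
    return project(y + 0.5 * dt * (k1 + k2))

# toy decay ODE dy/dt = -2y, with non-negativity as the "physical" constraint
f = lambda t, y: -2.0 * y
y, t, dt = np.array([1.0]), 0.0, 0.1
for _ in range(10):
    y = heun_step(f, y, t, dt, project=lambda v: np.maximum(v, 0.0))
    t += dt
# y[0] now approximates exp(-2) ~ 0.135
```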
arXiv Detail & Related papers (2024-05-23T12:39:49Z) - Automatic Differentiation is Essential in Training Neural Networks for Solving Differential Equations [7.890817997914349]
Neural network-based approaches have recently shown significant promise in solving partial differential equations (PDEs) in science and engineering.
One advantage of neural network methods for PDEs lies in their use of automatic differentiation (AD).
In this paper, we quantitatively demonstrate the advantage of AD in training neural networks.
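The AD advantage can be illustrated with a minimal forward-mode implementation via dual numbers: AD recovers derivatives to machine precision, whereas finite differences incur truncation error. This is a self-contained sketch; a real PINN would use a framework's AD (e.g. `torch.autograd` or `jax.grad`), and the function `u` here is a stand-in for a network output, not from the paper.

```python
import math

class Dual:
    """Forward-mode dual number: carries a value and its derivative."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val, self.val * o.dot + self.dot * o.val)
    __rmul__ = __mul__

def sin(x):
    # chain rule: d/dx sin(x) = cos(x) * x'
    return Dual(math.sin(x.val), math.cos(x.val) * x.dot)

def deriv(f, x):
    # exact derivative of f at x in a single forward pass
    return f(Dual(x, 1.0)).dot

# toy "network output" u(x) = x * sin(x); exact derivative sin(x) + x*cos(x)
u = lambda x: x * sin(x)
x0 = 1.3
ad = deriv(u, x0)                       # exact up to float rounding
fd = (1.3001 * math.sin(1.3001)
      - 1.2999 * math.sin(1.2999)) / 0.0002   # central finite difference
exact = math.sin(x0) + x0 * math.cos(x0)
```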
arXiv Detail & Related papers (2024-05-23T02:01:05Z) - Numerical analysis of physics-informed neural networks and related
models in physics-informed machine learning [18.1180892910779]
Physics-informed neural networks (PINNs) have been very popular in recent years as algorithms for the numerical simulation of both forward and inverse problems for partial differential equations.
We provide a unified framework in which analysis of the various components of the error incurred by PINNs in approximating PDEs can be effectively carried out.
arXiv Detail & Related papers (2024-01-30T10:43:27Z) - Learning Neural Constitutive Laws From Motion Observations for
Generalizable PDE Dynamics [97.38308257547186]
Many NN approaches learn an end-to-end model that implicitly models both the governing PDE and material models.
We argue that the governing PDEs are often well-known and should be explicitly enforced rather than learned.
We introduce a new framework termed "Neural Constitutive Laws" (NCLaw), which utilizes a network architecture that strictly guarantees standard constitutive priors.
arXiv Detail & Related papers (2023-04-27T17:42:24Z) - Generalized Neural Closure Models with Interpretability [28.269731698116257]
We develop a novel and versatile methodology of unified neural partial delay differential equations.
We augment existing/low-fidelity dynamical models directly in their partial differential equation (PDE) forms with both Markovian and non-Markovian neural network (NN) closure parameterizations.
We demonstrate the new generalized neural closure models (gnCMs) framework using four sets of experiments based on advecting nonlinear waves, shocks, and ocean acidification models.
arXiv Detail & Related papers (2023-01-15T21:57:43Z) - Transfer Learning with Physics-Informed Neural Networks for Efficient
Simulation of Branched Flows [1.1470070927586016]
Physics-Informed Neural Networks (PINNs) offer a promising approach to solving differential equations.
We adopt a recently developed transfer learning approach for PINNs and introduce a multi-head model.
We show that our methods provide significant computational speedups in comparison to standard PINNs trained from scratch.
arXiv Detail & Related papers (2022-11-01T01:50:00Z) - EINNs: Epidemiologically-Informed Neural Networks [75.34199997857341]
We introduce a new class of physics-informed neural networks, EINNs, crafted for epidemic forecasting.
We investigate how to leverage both the theoretical flexibility provided by mechanistic models and the data-driven expressivity afforded by AI models.
arXiv Detail & Related papers (2022-02-21T18:59:03Z) - Influence Estimation and Maximization via Neural Mean-Field Dynamics [60.91291234832546]
We propose a novel learning framework using neural mean-field (NMF) dynamics for inference and estimation problems.
Our framework can simultaneously learn the structure of the diffusion network and the evolution of node infection probabilities.
arXiv Detail & Related papers (2021-06-03T00:02:05Z) - Combining Differentiable PDE Solvers and Graph Neural Networks for Fluid
Flow Prediction [79.81193813215872]
We develop a hybrid (graph) neural network that combines a traditional graph convolutional network with an embedded differentiable fluid dynamics simulator inside the network itself.
We show that we can both generalize well to new situations and benefit from the substantial speedup of neural network CFD predictions.
arXiv Detail & Related papers (2020-07-08T21:23:19Z) - Provably Efficient Neural Estimation of Structural Equation Model: An
Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these neural networks using gradient descent.
For the first time we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
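The min-max formulation can be pictured with a toy gradient descent-ascent loop. Here both players are scalars and the objective is a made-up strongly-convex-concave function, standing in for the paper's NN-parameterized game.

```python
# min over x, max over y of L(x, y) = 0.5*x^2 + x*y - 0.5*y^2,
# whose unique saddle point is (0, 0)
x, y, lr = 1.0, 1.0, 0.1
for _ in range(200):
    gx = x + y   # dL/dx, followed by the descent player
    gy = x - y   # dL/dy, followed by the ascent player
    x, y = x - lr * gx, y + lr * gy
# simultaneous gradient descent-ascent converges to the saddle (0, 0)
```

With NN players, the same loop runs over network parameters and stochastic gradients, which is what makes convergence guarantees nontrivial.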
arXiv Detail & Related papers (2020-07-02T17:55:47Z) - Stochasticity in Neural ODEs: An Empirical Study [68.8204255655161]
Regularization of neural networks (e.g. dropout) is a widespread technique in deep learning that allows for better generalization.
We show that data augmentation during training improves the performance of both deterministic and stochastic versions of the same model.
However, the improvements obtained by data augmentation completely eliminate the empirical regularization gains, making the performance difference between neural ODE and neural SDE negligible.
arXiv Detail & Related papers (2020-02-22T22:12:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.