FiniteNet: A Fully Convolutional LSTM Network Architecture for
Time-Dependent Partial Differential Equations
- URL: http://arxiv.org/abs/2002.03014v1
- Date: Fri, 7 Feb 2020 21:18:46 GMT
- Title: FiniteNet: A Fully Convolutional LSTM Network Architecture for
Time-Dependent Partial Differential Equations
- Authors: Ben Stevens, Tim Colonius
- Abstract summary: We use a fully convolutional LSTM network to exploit the dynamics of PDEs.
We show that our network can reduce error by a factor of 2 to 3 compared to the baseline algorithms.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work, we present a machine learning approach for reducing the error
when numerically solving time-dependent partial differential equations (PDEs).
We use a fully convolutional LSTM network to exploit the spatiotemporal
dynamics of PDEs. The neural network serves to enhance finite-difference and
finite-volume methods (FDM/FVM) that are commonly used to solve PDEs, allowing
us to maintain guarantees on the order of convergence of our method. We train
the network on simulation data, and show that our network can reduce error by a
factor of 2 to 3 compared to the baseline algorithms. We demonstrate our method
on three PDEs that each feature qualitatively different dynamics. We look at
the linear advection equation, which propagates its initial conditions at a
constant speed, the inviscid Burgers' equation, which develops shockwaves, and
the Kuramoto-Sivashinsky (KS) equation, which is chaotic.
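For reference, the three equations named in the abstract have standard one-dimensional forms (c is the constant advection speed; subscripts on u denote partial derivatives):

```latex
% Linear advection: the initial condition translates at constant speed c
\partial_t u + c\,\partial_x u = 0
% Inviscid Burgers: nonlinear advection, develops shocks in finite time
\partial_t u + u\,\partial_x u = 0
% Kuramoto-Sivashinsky: chaotic; the second-derivative term destabilizes,
% the fourth-derivative term stabilizes
\partial_t u + u\,\partial_x u + \partial_x^2 u + \partial_x^4 u = 0
```

A minimal sketch of the architectural idea follows: a 1-D convolutional LSTM sees the current solution field and corrects a consistent finite-difference step. This is an illustration under assumptions, not the authors' exact FiniteNet design; the layer sizes, circular (periodic) padding, additive correction form, and the names `ConvLSTMCell1d`, `FiniteNetSketch`, `init_state`, and `step` are all hypothetical.

```python
import torch
import torch.nn as nn

class ConvLSTMCell1d(nn.Module):
    """1-D convolutional LSTM cell; all four gates from one shared convolution."""
    def __init__(self, channels: int, hidden: int, kernel: int = 3):
        super().__init__()
        self.hidden = hidden
        self.gates = nn.Conv1d(channels + hidden, 4 * hidden, kernel,
                               padding=kernel // 2, padding_mode="circular")

    def forward(self, x, state):
        h, c = state
        i, f, g, o = self.gates(torch.cat([x, h], dim=1)).chunk(4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, (h, c)

class FiniteNetSketch(nn.Module):
    """Convolutional LSTM that corrects a baseline finite-difference update."""
    def __init__(self, hidden: int = 32):
        super().__init__()
        self.cell = ConvLSTMCell1d(1, hidden)
        self.head = nn.Conv1d(hidden, 1, 1)  # pointwise map to a scalar correction

    def init_state(self, u):
        z = u.new_zeros(u.shape[0], self.cell.hidden, u.shape[-1])
        return (z, z.clone())

    def step(self, u, state, dt, dx, c=1.0):
        # Baseline: first-order upwind step for u_t + c u_x = 0 on a periodic grid.
        baseline = u - c * dt / dx * (u - torch.roll(u, 1, dims=-1))
        h, state = self.cell(u, state)
        return baseline + self.head(h), state  # learned correction on top

# Rollout: unroll several steps; training would penalize deviation from a
# fine-grid reference trajectory so the LSTM state exploits temporal structure.
model = FiniteNetSketch()
u = torch.sin(torch.linspace(0, 6.283, 64)).view(1, 1, 64)  # (batch, channel, x)
state = model.init_state(u)
for _ in range(10):
    u, state = model.step(u, state, dt=0.01, dx=0.1)
```

Note that an unconstrained additive correction does not by itself preserve formal order of accuracy; the abstract's convergence guarantee suggests the learned quantities are instead constrained around a consistent FDM/FVM scheme, which the sketch above only approximates.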
Related papers
- Trajectory Flow Matching with Applications to Clinical Time Series Modeling [77.58277281319253]
Trajectory Flow Matching (TFM) trains a Neural SDE in a simulation-free manner, bypassing backpropagation through the dynamics (a generic flow-matching sketch appears after this list).
We demonstrate improved performance on three clinical time series datasets in terms of absolute performance and uncertainty prediction.
arXiv Detail & Related papers (2024-10-28T15:54:50Z)
- Solving partial differential equations with sampled neural networks [1.8590821261905535]
Approximation of solutions to partial differential equations (PDEs) is an important problem in computational science and engineering.
We discuss how sampling the hidden weights and biases of the ansatz network from data-agnostic and data-dependent probability distributions allows us to make progress on both challenges.
arXiv Detail & Related papers (2024-05-31T14:24:39Z)
- Time integration schemes based on neural networks for solving partial differential equations on coarse grids [0.0]
We focus on learning 3-step linear multistep methods to solve partial differential equations (a minimal sketch of such an update appears after this list).
We show that the prediction error of the learned fully-constrained scheme is close to that of the Runge-Kutta and Adams-Bashforth methods.
Compared to the traditional methods, the learned unconstrained and semi-constrained schemes significantly reduce the prediction error on coarse grids.
arXiv Detail & Related papers (2023-10-16T11:43:08Z)
- Learning Subgrid-scale Models with Neural Ordinary Differential Equations [0.39160947065896795]
We propose a new approach to learning the subgrid-scale model when simulating partial differential equations (PDEs).
In this approach neural networks are used to learn the coarse- to fine-grid map, which can be viewed as subgrid-scale parameterization.
Our method inherits the advantages of NODEs and can be used to parameterize subgrid scales, approximate coupling operators, and improve the efficiency of low-order solvers.
arXiv Detail & Related papers (2022-12-20T02:45:09Z)
- Neural Basis Functions for Accelerating Solutions to High Mach Euler Equations [63.8376359764052]
We propose an approach to solving partial differential equations (PDEs) using a set of neural networks.
We regress a set of neural networks onto a reduced order Proper Orthogonal Decomposition (POD) basis.
These networks are then used in combination with a branch network that ingests the parameters of the prescribed PDE to compute a reduced order approximation to the PDE.
arXiv Detail & Related papers (2022-08-02T18:27:13Z)
- Semi-supervised Learning of Partial Differential Operators and Dynamical Flows [68.77595310155365]
We present a novel method that combines a hyper-network solver with a Fourier Neural Operator architecture.
We test our method on various time evolution PDEs, including nonlinear fluid flows in one, two, and three spatial dimensions.
The results show that the new method improves learning accuracy at the supervised time points and is able to interpolate the solutions to any intermediate time.
arXiv Detail & Related papers (2022-07-28T19:59:14Z)
- Actor-Critic Algorithm for High-dimensional Partial Differential Equations [1.5644600570264835]
We develop a deep learning model to solve high-dimensional nonlinear parabolic partial differential equations.
The Markovian property of the backward stochastic differential equation (BSDE) is utilized in designing our neural network architecture.
We demonstrate these improvements by solving several well-known classes of PDEs.
arXiv Detail & Related papers (2020-10-07T20:53:24Z)
- Combining Differentiable PDE Solvers and Graph Neural Networks for Fluid Flow Prediction [79.81193813215872]
We develop a hybrid (graph) neural network that combines a traditional graph convolutional network with an embedded differentiable fluid dynamics simulator inside the network itself.
We show that we can both generalize well to new situations and benefit from the substantial speedup of neural network CFD predictions.
arXiv Detail & Related papers (2020-07-08T21:23:19Z)
- Multipole Graph Neural Operator for Parametric Partial Differential Equations [57.90284928158383]
One of the main challenges in using deep learning-based methods for simulating physical systems is formulating physics-based data in the desired structure for neural networks.
We propose a novel multi-level graph neural network framework that captures interaction at all ranges with only linear complexity.
Experiments confirm our multi-graph network learns discretization-invariant solution operators to PDEs and can be evaluated in linear time.
arXiv Detail & Related papers (2020-06-16T21:56:22Z)
- Liquid Time-constant Networks [117.57116214802504]
We introduce a new class of time-continuous recurrent neural network models.
Instead of declaring a learning system's dynamics by implicit nonlinearities, we construct networks of linear first-order dynamical systems.
These neural networks exhibit stable and bounded behavior and yield superior expressivity within the family of neural ordinary differential equations.
arXiv Detail & Related papers (2020-06-08T09:53:35Z)
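As referenced in the Trajectory Flow Matching entry above, here is a minimal sketch of generic conditional flow matching: regress a velocity field on straight-line interpolants between noise x0 and data x1, with no ODE/SDE solves during training. This illustrates the general simulation-free idea, not TFM's exact clinical time-series formulation; the network shape and the names `net` and `cfm_loss` are assumptions.

```python
import torch
import torch.nn as nn

# v(x, t): a small velocity network over 2-D toy points plus a time input.
net = nn.Sequential(nn.Linear(3, 64), nn.SiLU(), nn.Linear(64, 2))

def cfm_loss(x1):
    x0 = torch.randn_like(x1)          # source (noise) samples
    t = torch.rand(x1.shape[0], 1)     # uniform times in [0, 1]
    xt = (1 - t) * x0 + t * x1         # straight-line interpolant
    target = x1 - x0                   # its constant velocity
    v = net(torch.cat([xt, t], dim=1))
    return ((v - target) ** 2).mean()

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
x1 = torch.randn(256, 2) + torch.tensor([3.0, 0.0])  # toy 2-D "data"
for _ in range(200):
    opt.zero_grad()
    loss = cfm_loss(x1)
    loss.backward()
    opt.step()
```

Because the regression target is available in closed form along the interpolant, no backpropagation through simulated dynamics is needed, which is the point summarized in the entry above.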
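To make the learned-multistep entry above concrete, here is a sketch of an explicit 3-step linear multistep update, u^{n+1} = sum_j a_j u^{n-j} + dt sum_j b_j f(u^{n-j}), together with the two first-order consistency conditions (sum_j a_j = 1 and b_0 + b_1 + b_2 - a_1 - 2 a_2 = 1, both satisfied by Adams-Bashforth 3). The constraint projection below is an illustrative assumption; the paper's fully- and semi-constrained schemes may impose higher-order conditions.

```python
import numpy as np

# Classical 3rd-order Adams-Bashforth coefficients, used as a reference point.
AB3_A = np.array([1.0, 0.0, 0.0])            # weights on u^n, u^{n-1}, u^{n-2}
AB3_B = np.array([23.0, -16.0, 5.0]) / 12.0  # weights on f^n, f^{n-1}, f^{n-2}

def multistep_step(u_hist, f, dt, a=AB3_A, b=AB3_B):
    """One explicit 3-step update; u_hist = [u^n, u^{n-1}, u^{n-2}].

    A learned scheme would replace a, b with trained parameters, optionally
    projected onto the consistency constraints below.
    """
    return (sum(a[j] * u_hist[j] for j in range(3))
            + dt * sum(b[j] * f(u_hist[j]) for j in range(3)))

def enforce_first_order(a, b):
    """Solve the two first-order conditions for a[0] and b[0]:
    sum(a) = 1 and sum(b) - a[1] - 2*a[2] = 1."""
    a, b = a.copy(), b.copy()
    a[0] = 1.0 - a[1] - a[2]
    b[0] = 1.0 + a[1] + 2.0 * a[2] - b[1] - b[2]
    return a, b

# Quick check on du/dt = -u with exact history u(t) = exp(-t), t_n = 0:
f = lambda u: -u
dt = 0.05
u_hist = [1.0, np.exp(dt), np.exp(2 * dt)]  # u^n, u^{n-1}, u^{n-2}
print(multistep_step(u_hist, f, dt), np.exp(-dt))  # should nearly agree
```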
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.