Autoregressive Renaissance in Neural PDE Solvers
- URL: http://arxiv.org/abs/2310.19763v1
- Date: Mon, 30 Oct 2023 17:35:26 GMT
- Title: Autoregressive Renaissance in Neural PDE Solvers
- Authors: Yolanne Yi Ran Lee
- Abstract summary: A paper published in ICLR 2022 revisits autoregressive models and designs a message passing graph neural network.
This blog post delves into the key contributions of this work, exploring the strategies used to address the common problem of instability in autoregressive models.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent developments in the field of neural partial differential equation (PDE) solvers have placed a strong emphasis on neural operators. However, the paper "Message Passing Neural PDE Solver" by Brandstetter et al., published in ICLR 2022, revisits autoregressive models and designs a message passing graph neural network that is comparable with or outperforms both the state-of-the-art Fourier Neural Operator and classical PDE solvers in its generalization capabilities and performance. This blog post delves into the key contributions of this work, exploring the strategies used to address the common problem of instability in autoregressive models and the design choices of the message passing graph neural network architecture.
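As a rough intuition for what "autoregressive" means in this setting, the sketch below (illustrative only, not the architecture from the paper) steps a small learned local-aggregation network forward in time and feeds each prediction back in as the next input; all module and function names are made up for the example.

```python
# A minimal, illustrative sketch of an autoregressive neural PDE solver
# (NOT the Brandstetter et al. architecture): a small network predicts the
# next state from the current one, and long rollouts feed predictions back in.
import torch
import torch.nn as nn

class LocalAggregationStep(nn.Module):
    """One learned time step on a 1D periodic grid.

    Each node exchanges information with its two neighbours; a kernel-size-3
    convolution is the simplest stand-in for that local aggregation.
    """
    def __init__(self, hidden: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, hidden, kernel_size=3, padding=1, padding_mode="circular"),
            nn.GELU(),
            nn.Conv1d(hidden, 1, kernel_size=3, padding=1, padding_mode="circular"),
        )

    def forward(self, u: torch.Tensor) -> torch.Tensor:
        # Predict the residual u_{t+dt} - u_t; residual updates are a common
        # stabilisation choice for autoregressive solvers.
        return u + self.net(u)

def rollout(model: nn.Module, u0: torch.Tensor, n_steps: int) -> torch.Tensor:
    """Autoregressive rollout: each prediction becomes the next input."""
    states, u = [u0], u0
    for _ in range(n_steps):
        u = model(u)
        states.append(u)
    return torch.stack(states, dim=0)

u0 = torch.sin(torch.linspace(0, 6.283, 64)).view(1, 1, -1)  # (batch, channel, grid)
traj = rollout(LocalAggregationStep(), u0, n_steps=10)
print(traj.shape)  # torch.Size([11, 1, 1, 64])
```

Instability arises because small per-step errors are fed back into the model and compound over the rollout, which is the failure mode the training strategies discussed in the paper target.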
Related papers
- First-order PDES for Graph Neural Networks: Advection And Burgers Equation Models [1.4174475093445238]
This paper presents new Graph Neural Network models that incorporate two first-order Partial Differential Equations (PDEs): the advection and Burgers equations.
Our experimental findings highlight the capacity of the new PDE models to achieve results comparable with higher-order PDE models and to address the over-smoothing problem for up to 64 layers.
Results underscore the adaptability and versatility of GNNs, indicating that unconventional approaches can yield outcomes on par with established techniques.
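As a loose illustration of what a first-order, advection-style update on a graph can look like (not the models proposed in the paper; the edge velocities here are fixed rather than learned):

```python
# Illustrative advection-style update on a graph: node features are
# transported along directed edges with nonnegative weights, so the update
# is first-order in time and conserves the total "mass" on the graph.
import numpy as np

def graph_advection_step(x, edges, velocity, dt=0.1):
    """x: (num_nodes,) features; edges: list of (src, dst); velocity: (num_edges,) >= 0."""
    dx = np.zeros_like(x)
    for (src, dst), v in zip(edges, velocity):
        flux = dt * v * x[src]   # amount transported src -> dst in this step
        dx[src] -= flux
        dx[dst] += flux
    return x + dx

x = np.array([1.0, 0.0, 0.0, 0.0])
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]      # a directed cycle
velocity = np.ones(len(edges))
for _ in range(5):
    x = graph_advection_step(x, edges, velocity)
print(x, x.sum())  # features move around the cycle; the sum stays 1.0
```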
arXiv Detail & Related papers (2024-04-03T21:47:02Z)
- Neural Fractional Differential Equations [2.812395851874055]
Fractional Differential Equations (FDEs) are essential tools for modelling complex systems in science and engineering.
We propose the Neural FDE, a novel deep neural network architecture that adjusts an FDE to the dynamics of the data.
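For intuition, one standard way to discretize a Caputo fractional ODE D^alpha u = f(u) with 0 < alpha < 1 is the explicit L1 scheme sketched below; `f_theta` is a placeholder for a learned vector field, and this is an illustration, not the Neural FDE paper's implementation.

```python
# Illustrative sketch of integrating a Caputo-type fractional ODE
#   D^alpha u(t) = f_theta(u(t)),  0 < alpha < 1,
# with the explicit L1 discretisation. f_theta stands in for a learned
# (neural) vector field.
import math
import numpy as np

def l1_fractional_rollout(f_theta, u0, alpha, dt, n_steps):
    us = [np.asarray(u0, dtype=float)]
    # b_j = (j+1)^(1-alpha) - j^(1-alpha): history weights of the L1 scheme
    b = [(j + 1) ** (1 - alpha) - j ** (1 - alpha) for j in range(n_steps)]
    c = math.gamma(2 - alpha) * dt ** alpha
    for n in range(1, n_steps + 1):
        # Unlike an ordinary ODE step, every past increment contributes.
        history = sum(b[j] * (us[n - j] - us[n - j - 1]) for j in range(1, n))
        u_next = us[n - 1] + c * f_theta(us[n - 1]) - history
        us.append(u_next)
    return np.stack(us)

# Toy example: fractional linear decay D^alpha u = -u with scalar state.
traj = l1_fractional_rollout(lambda u: -u, u0=1.0, alpha=0.6, dt=0.05, n_steps=100)
print(traj[-1])
```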
arXiv Detail & Related papers (2024-03-05T07:45:29Z)
- Coupling Graph Neural Networks with Fractional Order Continuous Dynamics: A Robustness Study [24.950680319986486]
We rigorously investigate the robustness of graph neural fractional-order differential equation (FDE) models.
This framework extends beyond traditional graph neural (integer-order) ordinary differential equation (ODE) models by implementing the time-fractional Caputo derivative.
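For reference, the time-fractional Caputo derivative of order alpha in (0, 1) is

```latex
D_t^{\alpha} u(t) = \frac{1}{\Gamma(1-\alpha)} \int_0^{t} \frac{u'(s)}{(t-s)^{\alpha}}\, ds,
\qquad 0 < \alpha < 1,
```

which recovers the ordinary first derivative as alpha -> 1 and otherwise gives the dynamics a memory of the entire past trajectory.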
arXiv Detail & Related papers (2024-01-09T02:56:52Z)
- Deep Equilibrium Based Neural Operators for Steady-State PDEs [100.88355782126098]
We study the benefits of weight-tied neural network architectures for steady-state PDEs.
We propose FNO-DEQ, a deep equilibrium variant of the FNO architecture that directly solves for the solution of a steady-state PDE.
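A minimal sketch of the deep-equilibrium idea, with a toy contraction in place of the paper's Fourier layers: a single weight-tied operator is iterated until its output stops changing, so the steady-state solution is obtained as a fixed point rather than by stacking layers.

```python
# Minimal sketch of the deep-equilibrium (weight-tied) idea: iterate a single
# operator g_theta until convergence, i.e. solve u* = g_theta(u*, a).
# g_theta below is a toy contraction, not an FNO block.
import numpy as np

def solve_equilibrium(g_theta, a, u0, tol=1e-8, max_iter=500):
    u = u0
    for _ in range(max_iter):
        u_next = g_theta(u, a)
        if np.linalg.norm(u_next - u) < tol:
            return u_next
        u = u_next
    return u

# Toy weight-tied layer: a contraction whose fixed point depends on the input a.
W = 0.5 * np.eye(3)
g_theta = lambda u, a: W @ u + a
u_star = solve_equilibrium(g_theta, a=np.array([1.0, 2.0, 3.0]), u0=np.zeros(3))
print(u_star)  # ~ (I - W)^{-1} a = [2, 4, 6]
```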
arXiv Detail & Related papers (2023-11-30T22:34:57Z)
- GNRK: Graph Neural Runge-Kutta method for solving partial differential equations [0.0]
This study introduces a novel approach called the Graph Neural Runge-Kutta (GNRK) method.
GNRK integrates graph neural network modules with a recurrent structure inspired by classical solvers.
It demonstrates the capability to address general PDEs, irrespective of initial conditions or PDE coefficients.
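For intuition, the sketch below shows the classical RK4 skeleton with a placeholder vector field standing in for the graph neural network modules; it illustrates the recurrent Runge-Kutta structure, not the GNRK implementation itself.

```python
# Classical RK4 step whose vector field f_theta would, in the paper, be a
# graph neural network acting on node states; here it is a fixed placeholder.
import numpy as np

def rk4_step(f_theta, u, dt):
    k1 = f_theta(u)
    k2 = f_theta(u + 0.5 * dt * k1)
    k3 = f_theta(u + 0.5 * dt * k2)
    k4 = f_theta(u + dt * k3)
    return u + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

# Toy vector field: discrete Laplacian on a periodic 1D grid (heat equation).
def f_theta(u):
    return np.roll(u, 1) - 2 * u + np.roll(u, -1)

u = np.sin(np.linspace(0, 2 * np.pi, 64, endpoint=False))
for _ in range(100):
    u = rk4_step(f_theta, u, dt=0.2)
print(np.abs(u).max())  # the amplitude decays, as expected for diffusion
```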
arXiv Detail & Related papers (2023-10-01T08:52:46Z)
- A Stable and Scalable Method for Solving Initial Value PDEs with Neural Networks [52.5899851000193]
We show that current methods based on this approach suffer from two key issues.
First, following the ODE produces uncontrolled growth in the conditioning of the problem, ultimately leading to unacceptably large numerical errors.
We develop an ODE-based IVP solver that prevents the network from becoming ill-conditioned and runs in time linear in the number of parameters.
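A heavily simplified sketch of the parameter-evolution family this entry refers to, under the assumption that the solver evolves the network parameters so that the network output tracks the PDE solution in time; the toy model and every name below are illustrative, not the paper's algorithm.

```python
# Toy parameter-ODE sketch: u(x; theta) represents the spatial solution, and
# theta is stepped so that d/dt u(x; theta(t)) matches the PDE right-hand
# side F(u) in least squares.
import numpy as np

def parameter_velocity(jacobian, rhs):
    """Solve min_eta || J eta - rhs ||^2 for the parameter velocity eta."""
    eta, *_ = np.linalg.lstsq(jacobian, rhs, rcond=None)
    return eta

# Toy model: u(x; theta) = theta_0 * sin(x) + theta_1 * cos(x), and the PDE
# is just u_t = -u, so the exact coefficients decay exponentially.
xs = np.linspace(0, 2 * np.pi, 32)
basis = np.stack([np.sin(xs), np.cos(xs)], axis=1)   # du/dtheta, shape (32, 2)
theta = np.array([1.0, 0.5])

dt = 0.01
for _ in range(200):
    u = basis @ theta
    rhs = -u                                         # F(u) for u_t = -u
    theta = theta + dt * parameter_velocity(basis, rhs)
print(theta)  # both coefficients decay toward zero, tracking u_t = -u
```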
arXiv Detail & Related papers (2023-04-28T17:28:18Z)
- On the Robustness of Graph Neural Diffusion to Topology Perturbations [30.284359808863588]
We show that graph neural PDEs are intrinsically more robust against topology perturbations than other GNNs.
We propose a general graph neural PDE framework based on which a new class of robust GNNs can be defined.
arXiv Detail & Related papers (2022-09-16T07:19:35Z)
- Neural Basis Functions for Accelerating Solutions to High Mach Euler Equations [63.8376359764052]
We propose an approach to solving partial differential equations (PDEs) using a set of neural networks.
We regress a set of neural networks onto a reduced order Proper Orthogonal Decomposition (POD) basis.
These networks are then used in combination with a branch network that ingests the parameters of the prescribed PDE to compute a reduced order approximation to the PDE.
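A rough sketch of the reduced-order ingredients mentioned here, under simplifying assumptions: a POD basis is extracted from solution snapshots by SVD, and a new solution is approximated by a few coefficients in that basis. In the paper those coefficients come from a branch network ingesting the PDE parameters; the toy below simply projects.

```python
# Reduced-order sketch: (1) build a POD basis from solution snapshots via SVD,
# (2) approximate a new solution as a short linear combination of that basis.
import numpy as np

rng = np.random.default_rng(0)
xs = np.linspace(0, 1, 100)

# Fake "snapshots": solutions for different parameter values mu.
snapshots = np.stack([np.sin(np.pi * mu * xs) for mu in rng.uniform(1, 3, 50)], axis=1)

# POD basis = leading left singular vectors of the snapshot matrix (100 x 50).
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
r = 5
pod_basis = U[:, :r]                      # (100, r) reduced-order basis

# Reduced-order approximation of a new solution: here the coefficients are
# obtained by projection; a neural network would predict them from mu instead.
u_new = np.sin(np.pi * 2.2 * xs)
coeffs = pod_basis.T @ u_new              # (r,)
u_rom = pod_basis @ coeffs
print(np.linalg.norm(u_new - u_rom) / np.linalg.norm(u_new))  # relative projection error
```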
arXiv Detail & Related papers (2022-08-02T18:27:13Z)
- Neural Operator with Regularity Structure for Modeling Dynamics Driven by SPDEs [70.51212431290611]
Stochastic partial differential equations (SPDEs) are significant tools for modeling dynamics in many areas including atmospheric sciences and physics.
We propose the Neural Operator with Regularity Structure (NORS), which incorporates the feature vectors provided by regularity structures for modeling dynamics driven by SPDEs.
We conduct experiments on a variety of SPDEs, including the dynamic Phi^4_1 model and the 2d Navier-Stokes equation.
arXiv Detail & Related papers (2022-04-13T08:53:41Z)
- Learning Physics-Informed Neural Networks without Stacked Back-propagation [82.26566759276105]
We develop a novel approach that can significantly accelerate the training of Physics-Informed Neural Networks.
In particular, we parameterize the PDE solution with a Gaussian-smoothed model and show that, via Stein's identity, the second-order derivatives can be calculated efficiently without back-propagation.
Experimental results show that our proposed method can achieve competitive error compared to standard PINN training but is two orders of magnitude faster.
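The core trick can be illustrated outside the PINN setting: for a Gaussian-smoothed function f_sigma(x) = E[f(x + sigma * eps)], Stein's identity turns first- and second-order derivatives into plain Monte Carlo averages of f itself, so no nested back-propagation is needed. The snippet below is a toy check of the identity, not the paper's training code.

```python
# Stein's identity for a Gaussian-smoothed function f_sigma(x) = E[f(x + sigma*eps)]:
#   f_sigma'(x)  = E[ eps          * f(x + sigma*eps) ] / sigma
#   f_sigma''(x) = E[ (eps**2 - 1) * f(x + sigma*eps) ] / sigma**2
# so derivatives are plain Monte Carlo averages of f, with no nested autodiff.
import numpy as np

def smoothed_derivatives(f, x, sigma=0.1, n_samples=2_000_000, seed=0):
    eps = np.random.default_rng(seed).standard_normal(n_samples)
    fx = f(x + sigma * eps)
    d1 = np.mean(eps * fx) / sigma
    d2 = np.mean((eps ** 2 - 1.0) * fx) / sigma ** 2
    return d1, d2

# Toy 1D check with f(x) = x^3 at x = 1.
d1, d2 = smoothed_derivatives(lambda x: x ** 3, x=1.0, sigma=0.1)
print(d1, d2)  # close to the exact smoothed values 3 + 3*sigma^2 = 3.03 and 6x = 6.0
```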
arXiv Detail & Related papers (2022-02-18T18:07:54Z)
- Stochasticity in Neural ODEs: An Empirical Study [68.8204255655161]
Regularization of neural networks (e.g. dropout) is a widespread technique in deep learning that allows for better generalization.
We show that data augmentation during training improves the performance of both deterministic and stochastic versions of the same model.
However, the improvements obtained by the data augmentation completely eliminate the empirical regularization gains, making the performance gap between the neural ODE and the neural SDE negligible.
arXiv Detail & Related papers (2020-02-22T22:12:56Z)