Multipole Graph Neural Operator for Parametric Partial Differential
Equations
- URL: http://arxiv.org/abs/2006.09535v2
- Date: Mon, 19 Oct 2020 20:28:04 GMT
- Title: Multipole Graph Neural Operator for Parametric Partial Differential
Equations
- Authors: Zongyi Li, Nikola Kovachki, Kamyar Azizzadenesheli, Burigede Liu,
Kaushik Bhattacharya, Andrew Stuart, Anima Anandkumar
- Abstract summary: One of the main challenges in using deep learning-based methods for simulating physical systems is formulating physics-based data in the structure required by neural networks.
We propose a novel multi-level graph neural network framework that captures interactions at all ranges with only linear complexity.
Experiments confirm our multi-graph network learns discretization-invariant solution operators to PDEs and can be evaluated in linear time.
- Score: 57.90284928158383
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: One of the main challenges in using deep learning-based methods for
simulating physical systems and solving partial differential equations (PDEs)
is formulating physics-based data in the desired structure for neural networks.
Graph neural networks (GNNs) have gained popularity in this area since graphs
offer a natural way of modeling particle interactions and provide a clear way
of discretizing the continuum models. However, the graphs constructed for
approximating such tasks usually ignore long-range interactions due to
unfavorable scaling of the computational complexity with respect to the number
of nodes. The errors due to these approximations scale with the discretization
of the system, preventing generalization under mesh refinement.
Inspired by the classical multipole methods, we propose a novel multi-level
graph neural network framework that captures interactions at all ranges with
only linear complexity. Our multi-level formulation is equivalent to
recursively adding inducing points to the kernel matrix, unifying GNNs with
multi-resolution matrix factorization of the kernel. Experiments confirm our
multi-graph network learns discretization-invariant solution operators to PDEs
and can be evaluated in linear time.
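For illustration only: the multi-level decomposition described above can be read as K ≈ K_near + T K_coarse T^T, applied recursively. The NumPy sketch below is our own schematic rendering, not the authors' implementation; the function and parameter names are invented, and the fixed Gaussian kernels stand in for MGKN's learned per-level kernels. It shows the induced matrix-vector product, where short-range pairs stay on the fine graph and long-range interaction is delegated to successively coarser sets of inducing points.

```python
import numpy as np

rng = np.random.default_rng(0)

def kernel(x, y, ls=0.25):
    """Dense Gaussian kernel between point clouds x (N, d) and y (M, d)."""
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * ls**2))

def multilevel_apply(x, v, radius=0.15, m=64, levels=3):
    """Schematic multi-level kernel action (K @ v), as in multipole methods.

    Short-range pairs are kept on the fine graph; long-range interaction is
    delegated to a coarser graph of m inducing points, recursively.  A real
    implementation would build sparse neighbour lists (e.g. with a KD-tree)
    instead of the dense mask used here for brevity, keeping the cost linear.
    """
    d2 = ((x[:, None, :] - x[None, :, :]) ** 2).sum(-1)
    K_near = np.where(d2 < radius**2, kernel(x, x), 0.0)    # local edges only
    out = K_near @ v / len(x)
    if levels > 1 and len(x) > m:
        z = x[rng.choice(len(x), size=m, replace=False)]    # coarse-level nodes
        T = kernel(x, z)                                    # fine/coarse transition
        # restrict v to the coarse graph, recurse for the far field, prolong back
        out += T @ multilevel_apply(z, T.T @ v / len(x), 2 * radius, m // 2, levels - 1)
    return out

x = rng.random((512, 2))
v = rng.random(512)
print(multilevel_apply(x, v).shape)   # (512,): one V-cycle sweep over all levels
```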
Related papers
- Learning-based Multi-continuum Model for Multiscale Flow Problems [24.93423649301792]
We propose a learning-based multi-continuum model to enrich the homogenized equation and improve the accuracy of the single-continuum model for multiscale problems.
Our proposed learning-based multi-continuum model can resolve multiple interacting media within each coarse grid block and describe the mass transfer among them.
arXiv Detail & Related papers (2024-03-21T02:30:56Z)
- Solving the Discretised Multiphase Flow Equations with Interface Capturing on Structured Grids Using Machine Learning Libraries [0.6299766708197884]
This paper solves the discretised multiphase flow equations using tools and methods from machine-learning libraries.
For the first time, finite element discretisations of multiphase flows can be solved using an approach based on (untrained) convolutional neural networks; a toy sketch of this stencil-as-convolution idea follows this entry.
arXiv Detail & Related papers (2024-01-12T18:42:42Z)
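As a minimal illustration of the stencil-as-convolution idea referenced above: the PyTorch snippet below is a finite-difference toy of our own, not the paper's finite-element multiphase discretisation, and all names in it are illustrative. It solves a 2D Poisson problem with Jacobi iterations expressed as a fixed, untrained convolution.

```python
import torch
import torch.nn.functional as F

# 5-point stencil as a fixed convolution: Jacobi iteration for -Laplace(u) = f
# on a uniform grid with zero Dirichlet boundary (enforced by zero padding).
stencil = torch.tensor([[0., 1., 0.],
                        [1., 0., 1.],
                        [0., 1., 0.]]).reshape(1, 1, 3, 3)

def jacobi_solve(f, h, iters=5000):
    u = torch.zeros_like(f)
    for _ in range(iters):
        nbr = F.conv2d(F.pad(u, (1, 1, 1, 1)), stencil)  # sum of 4 neighbours
        u = (nbr + h**2 * f) / 4.0                       # Jacobi update
    return u

n = 64
h = 1.0 / (n + 1)
f = torch.ones(1, 1, n, n)          # constant source term
u = jacobi_solve(f, h)
print(float(u.max()))               # ~0.0737 for -Laplace(u) = 1 on the unit square
```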
- Accelerated Solutions of Coupled Phase-Field Problems using Generative Adversarial Networks [0.0]
We develop a new neural network based framework that uses encoder-decoder based conditional GeneLSTM layers to solve a system of Cahn-Hilliard microstructural equations.
We show that the trained models are mesh- and scale-independent, thereby warranting application as effective neural operators.
arXiv Detail & Related papers (2022-11-22T08:32:22Z)
- Tunable Complexity Benchmarks for Evaluating Physics-Informed Neural Networks on Coupled Ordinary Differential Equations [64.78260098263489]
In this work, we assess the ability of physics-informed neural networks (PINNs) to solve increasingly complex coupled ordinary differential equations (ODEs).
We show that PINNs eventually fail to produce correct solutions to these benchmarks as their complexity increases.
We identify several reasons why this may be the case, including insufficient network capacity, poor conditioning of the ODEs, and high local curvature, as measured by the Laplacian of the PINN loss; a minimal single-ODE PINN sketch follows this entry.
arXiv Detail & Related papers (2022-10-14T15:01:32Z)
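For context, here is a minimal single-equation PINN in PyTorch: a toy for u'(t) = -u(t), u(0) = 1, not one of the paper's coupled benchmarks, with an architecture and hyperparameters chosen purely for illustration. The ODE residual is obtained by automatic differentiation and minimised jointly with the initial-condition error.

```python
import torch

# Minimal PINN for the ODE u'(t) = -u(t), u(0) = 1 (exact solution: exp(-t)).
net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

t = torch.linspace(0.0, 3.0, 100).reshape(-1, 1)
for step in range(3000):
    opt.zero_grad()
    tc = t.clone().requires_grad_(True)
    u = net(tc)
    du = torch.autograd.grad(u, tc, torch.ones_like(u), create_graph=True)[0]
    residual = du + u                           # enforce u' + u = 0
    ic = net(torch.zeros(1, 1)) - 1.0           # enforce u(0) = 1
    loss = (residual**2).mean() + (ic**2).mean()
    loss.backward()
    opt.step()

# predicted u(1) vs. the exact value exp(-1) ~ 0.3679
print(net(torch.tensor([[1.0]])).item(), torch.exp(torch.tensor(-1.0)).item())
```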
- NeuralEF: Deconstructing Kernels by Deep Neural Networks [47.54733625351363]
Traditional nonparametric solutions based on the Nyström formula suffer from scalability issues.
Recent work has resorted to a parametric approach, i.e., training neural networks to approximate the eigenfunctions.
We show that these problems can be fixed by using a new series of objective functions that generalize across supervised and unsupervised learning problems; a generic sketch of the parametric eigenfunction approach follows this entry.
arXiv Detail & Related papers (2022-04-30T05:31:07Z)
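As a generic sketch of this parametric approach (a plain Rayleigh-quotient objective, not NeuralEF's actual loss; all names and settings here are illustrative), one can train a network to represent the leading eigenfunction of a kernel:

```python
import torch

# Toy parametric eigenfunction: train a network u to maximise the Rayleigh
# quotient (u^T K u) / (u^T u) of a fixed Gaussian kernel Gram matrix, i.e.
# the top eigenfunction of the kernel operator on the sampled points.
net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

x = torch.linspace(-2.0, 2.0, 256).reshape(-1, 1)
K = torch.exp(-(x - x.T) ** 2 / 0.5)            # fixed kernel Gram matrix

for step in range(2000):
    u = net(x)
    rayleigh = (u.T @ K @ u).squeeze() / (u.T @ u).squeeze()
    opt.zero_grad()
    (-rayleigh).backward()                      # ascend the Rayleigh quotient
    opt.step()

# the learned u(x) now approximates the leading eigenfunction up to scale
print(float(rayleigh))
```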
- The Separation Capacity of Random Neural Networks [78.25060223808936]
We show that a sufficiently large two-layer ReLU-network with standard Gaussian weights and uniformly distributed biases can solve this separation problem with high probability.
We quantify the relevant structure of the data in terms of a novel notion of mutual complexity.
arXiv Detail & Related papers (2021-07-31T10:25:26Z)
- Large-scale Neural Solvers for Partial Differential Equations [48.7576911714538]
Solving partial differential equations (PDE) is an indispensable part of many branches of science as many processes can be modelled in terms of PDEs.
Recent numerical solvers require manual discretization of the underlying equation as well as sophisticated, tailored code for distributed computing.
We examine the applicability of continuous, mesh-free neural solvers for partial differential equations, namely physics-informed neural networks (PINNs).
We discuss the accuracy of GatedPINN with respect to analytical solutions as well as state-of-the-art numerical solvers, such as spectral solvers.
arXiv Detail & Related papers (2020-09-08T13:26:51Z)
- Provably Efficient Neural Estimation of Structural Equation Model: An Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these neural networks using gradient descent; a toy sketch of this min-max game follows this entry.
For the first time, we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
arXiv Detail & Related papers (2020-07-02T17:55:47Z)
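As a toy sketch of such a min-max formulation (a generic conditional-moment game with simultaneous gradient steps, not the paper's provably convergent procedure; the data-generating process and network choices below are invented for illustration):

```python
import torch

# Toy min-max estimation of f in y = f(x) + noise given an instrument z,
# via the conditional-moment game  min_f max_g  E[g(z)(y - f(x)) - g(z)^2/2].
f = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))
g = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))
opt_f = torch.optim.Adam(f.parameters(), lr=1e-3)
opt_g = torch.optim.Adam(g.parameters(), lr=1e-3)

z = torch.randn(1024, 1)                    # instrument
x = z + 0.1 * torch.randn_like(z)           # regressor driven by the instrument
y = 2.0 * x + 0.1 * torch.randn_like(x)     # ground truth f(x) = 2x

for step in range(5000):
    game = (g(z) * (y - f(x)) - 0.5 * g(z) ** 2).mean()
    opt_g.zero_grad()
    (-game).backward()                      # critic g ascends the game value
    opt_g.step()

    game = (g(z) * (y - f(x)) - 0.5 * g(z) ** 2).mean()
    opt_f.zero_grad()
    game.backward()                         # estimator f descends it
    opt_f.step()

print(f(torch.tensor([[1.0]])).item())      # should move toward 2.0
```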