DOSnet as a Non-Black-Box PDE Solver: When Deep Learning Meets Operator
Splitting
- URL: http://arxiv.org/abs/2212.05571v1
- Date: Sun, 11 Dec 2022 18:23:56 GMT
- Title: DOSnet as a Non-Black-Box PDE Solver: When Deep Learning Meets Operator
Splitting
- Authors: Yuan Lan, Zhen Li, Jie Sun, Yang Xiang
- Abstract summary: We develop a learning-based PDE solver, which we name Deep Operator-Splitting Network (DOSnet)
DOSnet is constructed from the physical rules and operators governing the underlying dynamics and contains learnable parameters.
We train and validate it on several types of operator-decomposable differential equations.
- Score: 12.655884541938656
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks (DNNs) recently emerged as a promising tool for
analyzing and solving complex differential equations arising in science and
engineering applications. As an alternative to traditional numerical schemes,
learning-based solvers utilize the representation power of DNNs to approximate
the input-output relations in an automated manner. However, the lack of
physics-in-the-loop often makes it difficult to construct a neural network
solver that simultaneously achieves high accuracy, low computational burden,
and interpretability. In this work, focusing on a class of evolutionary PDEs
characterized by having decomposable operators, we show that the classical
``operator splitting'' numerical scheme of solving these equations can be
exploited to design neural network architectures. This gives rise to a
learning-based PDE solver, which we name Deep Operator-Splitting Network
(DOSnet). Such a non-black-box network design is constructed from the physical
rules and operators governing the underlying dynamics, contains learnable
parameters, and is thus more flexible than the standard operator splitting
scheme. Once trained, it enables fast solution of the same type of PDEs. To
validate the special structure inside DOSnet, we take linear PDEs as the
benchmark and give a mathematical explanation for the weight behavior.
Furthermore, to demonstrate the advantages of our new AI-enhanced PDE solver,
we train and validate it on several types of operator-decomposable differential
equations. We also apply DOSnet to nonlinear Schrödinger equations (NLSE),
which have important applications in signal processing for modern optical
fiber transmission systems, and experimental results show that our model
achieves better accuracy and lower computational complexity than numerical
schemes and the baseline DNNs.
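The classical "operator splitting" scheme that DOSnet builds on can be made concrete with a minimal split-step Fourier sketch for the focusing 1D NLSE, i u_t + (1/2) u_xx + |u|^2 u = 0. This is the standard non-learned Strang splitting, not the DOSnet architecture itself; the grid size, time step, and soliton initial condition are illustrative choices, not from the paper.

```python
import numpy as np

# Strang (split-step Fourier) scheme for the focusing 1D NLSE
#   i u_t + (1/2) u_xx + |u|^2 u = 0
# Each step composes: half a linear step (exact in Fourier space),
# a full nonlinear step (exact pointwise phase rotation), and another
# half linear step.

def strang_step(u, dt, k):
    # Linear sub-flow: u_t = (i/2) u_xx, i.e. multiply by exp(-i k^2 dt/2)
    # in Fourier space; a half-step uses dt/2, hence the factor dt/4.
    half_linear = np.exp(-1j * k**2 * dt / 4)
    u = np.fft.ifft(half_linear * np.fft.fft(u))
    # Nonlinear sub-flow: u_t = i |u|^2 u; |u| is constant along it,
    # so the pointwise phase rotation below solves it exactly.
    u = u * np.exp(1j * np.abs(u)**2 * dt)
    u = np.fft.ifft(half_linear * np.fft.fft(u))
    return u

# Bright soliton u(x, 0) = sech(x): its modulus is preserved by the
# exact flow, so it is a convenient correctness check.
n, L = 256, 40.0
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
u = 1.0 / np.cosh(x)
mass0 = np.sum(np.abs(u)**2)  # discrete L2 mass, conserved by both sub-flows
for _ in range(100):
    u = strang_step(u, dt=0.01, k=k)
```

Strang composition is second-order accurate in the step size; per the abstract, DOSnet's idea is to replace the fixed sub-flows in such a composition with learnable operators while keeping the splitting structure.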
Related papers
- DimOL: Dimensional Awareness as A New 'Dimension' in Operator Learning [63.5925701087252]
We introduce DimOL (Dimension-aware Operator Learning), drawing insights from dimensional analysis.
To implement DimOL, we propose the ProdLayer, which can be seamlessly integrated into FNO-based and Transformer-based PDE solvers.
Empirically, DimOL models achieve up to 48% performance gain within the PDE datasets.
arXiv Detail & Related papers (2024-10-08T10:48:50Z)
- GIT-Net: Generalized Integral Transform for Operator Learning [58.13313857603536]
This article introduces GIT-Net, a deep neural network architecture for approximating Partial Differential Equation (PDE) operators.
GIT-Net harnesses the fact that differential operators commonly used for defining PDEs can often be represented parsimoniously when expressed in specialized functional bases.
Numerical experiments demonstrate that GIT-Net is a competitive neural network operator, exhibiting small test errors and low evaluation costs across a range of PDE problems.
arXiv Detail & Related papers (2023-12-05T03:03:54Z)
- Hypernetwork-based Meta-Learning for Low-Rank Physics-Informed Neural Networks [24.14254861023394]
In this study, we suggest a path that potentially opens the possibility for physics-informed neural networks (PINNs) to be considered as one such solver.
PINNs have pioneered a proper integration of deep-learning and scientific computing, but they require repetitive time-consuming training of neural networks.
We propose lightweight low-rank PINNs containing only hundreds of model parameters and an associated hypernetwork-based meta-learning algorithm.
arXiv Detail & Related papers (2023-10-14T08:13:43Z)
- Mixed formulation of physics-informed neural networks for thermo-mechanically coupled systems and heterogeneous domains [0.0]
Physics-informed neural networks (PINNs) are a new tool for solving boundary value problems.
Recent investigations have shown that when designing loss functions for many engineering problems, using first-order derivatives and combining equations from both strong and weak forms can lead to much better accuracy.
In this work, we propose applying the mixed formulation to solve multi-physical problems, specifically a stationary thermo-mechanically coupled system of equations.
arXiv Detail & Related papers (2023-02-09T21:56:59Z)
- Convolutional Neural Operators for robust and accurate learning of PDEs [11.562748612983956]
We present novel adaptations for convolutional neural networks to process functions as inputs and outputs.
The resulting architecture is termed convolutional neural operators (CNOs).
We prove a universality theorem to show that CNOs can approximate operators arising in PDEs to desired accuracy.
arXiv Detail & Related papers (2023-02-02T15:54:45Z)
- Solving High-Dimensional PDEs with Latent Spectral Models [74.1011309005488]
We present Latent Spectral Models (LSM) toward an efficient and precise solver for high-dimensional PDEs.
Inspired by classical spectral methods in numerical analysis, we design a neural spectral block to solve PDEs in the latent space.
LSM consistently achieves state-of-the-art results and yields a relative gain of 11.5% averaged over seven benchmarks.
arXiv Detail & Related papers (2023-01-30T04:58:40Z)
- Tunable Complexity Benchmarks for Evaluating Physics-Informed Neural Networks on Coupled Ordinary Differential Equations [64.78260098263489]
In this work, we assess the ability of physics-informed neural networks (PINNs) to solve increasingly complex coupled ordinary differential equations (ODEs).
We show that PINNs eventually fail to produce correct solutions to these benchmarks as their complexity increases.
We identify several reasons why this may be the case, including insufficient network capacity, poor conditioning of the ODEs, and high local curvature, as measured by the Laplacian of the PINN loss.
arXiv Detail & Related papers (2022-10-14T15:01:32Z)
- PhyGNNet: Solving spatiotemporal PDEs with Physics-informed Graph Neural Network [12.385926494640932]
We propose PhyGNNet for solving partial differential equations on the basis of a graph neural network.
In particular, we divide the computing domain into regular grids, define partial differential operators on the grids, and then construct a PDE loss for the network to optimize, building the PhyGNNet model.
arXiv Detail & Related papers (2022-08-07T13:33:34Z)
- LordNet: An Efficient Neural Network for Learning to Solve Parametric Partial Differential Equations without Simulated Data [47.49194807524502]
We propose LordNet, a tunable and efficient neural network for modeling long-range entanglements.
Experiments on solving Poisson's equation and the (2D and 3D) Navier-Stokes equations demonstrate that the long-range entanglements can be well modeled by LordNet.
arXiv Detail & Related papers (2022-06-19T14:41:08Z)
- Multipole Graph Neural Operator for Parametric Partial Differential Equations [57.90284928158383]
One of the main challenges in using deep learning-based methods for simulating physical systems is formulating physics-based data.
We propose a novel multi-level graph neural network framework that captures interaction at all ranges with only linear complexity.
Experiments confirm our multi-graph network learns discretization-invariant solution operators to PDEs and can be evaluated in linear time.
arXiv Detail & Related papers (2020-06-16T21:56:22Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.