TSONN: Time-stepping-oriented neural network for solving partial
differential equations
- URL: http://arxiv.org/abs/2310.16491v1
- Date: Wed, 25 Oct 2023 09:19:40 GMT
- Title: TSONN: Time-stepping-oriented neural network for solving partial
differential equations
- Authors: Wenbo Cao, Weiwei Zhang
- Abstract summary: This work integrates time-stepping method with deep learning to solve PDE problems.
The convergence of model training is significantly improved by following the trajectory of the pseudo time-stepping process.
Our results show that the proposed method achieves stable training and correct results in many problems that standard PINNs fail to solve.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks (DNNs), especially physics-informed neural networks
(PINNs), have recently become a popular method for solving forward and
inverse problems governed by partial differential equations (PDEs). However,
these methods still struggle to achieve stable training and obtain
correct results in many problems, since minimizing PDE residuals as PDE-based
soft constraints makes the optimization problem ill-conditioned. Unlike all existing
methods that directly minimize PDE residuals, this work integrates a
time-stepping method with deep learning and transforms the original
ill-conditioned optimization problem into a series of well-conditioned
sub-problems over given pseudo time intervals. The convergence of model
training is significantly improved by following the trajectory of the pseudo
time-stepping process, yielding a robust optimization-based PDE solver. Our
results show that the proposed method achieves stable training and correct
results in many problems that standard PINNs fail to solve, requiring only a
simple modification to the loss function. In addition, we demonstrate several
novel properties and advantages of time-stepping methods within the framework
of neural-network-based optimization, in comparison to traditional
grid-based numerical methods. Specifically, the explicit scheme allows significantly
larger time steps, while the implicit scheme can be implemented as
straightforwardly as the explicit one.
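The loss modification the abstract describes can be sketched as follows. This is a minimal illustration under assumptions not stated in the abstract: a 1D Poisson problem, a trainable grid vector standing in for the network's outputs at collocation points, and plain gradient descent standing in for the usual optimizer. Instead of minimizing the residual loss ||D2 u - f||^2 directly, each pseudo time step minimizes the implicit-scheme sub-problem loss ||(I - dtau*D2) u - (u_k - dtau*f)||^2, which is far better conditioned for moderate dtau:

```python
# Sketch of the pseudo time-stepping loss idea (hedged: the network is
# replaced by a trainable grid vector u, and gradient descent stands in
# for training on the sub-problem loss).
# Problem: 1D Poisson u'' = f on (0, 1), u(0) = u(1) = 0,
# with f = -pi^2 sin(pi x), exact solution u = sin(pi x).
import numpy as np

N = 31                                  # interior grid points
h = 1.0 / (N + 1)
x = np.linspace(h, 1.0 - h, N)
f = -np.pi**2 * np.sin(np.pi * x)
u_exact = np.sin(np.pi * x)

# Second-difference operator D2 (homogeneous Dirichlet BCs baked in).
D2 = (np.diag(np.full(N - 1, 1.0), -1)
      - 2.0 * np.eye(N)
      + np.diag(np.full(N - 1, 1.0), 1)) / h**2

dtau = 0.002                            # pseudo time step
A = np.eye(N) - dtau * D2               # implicit scheme: A u_{k+1} = u_k - dtau*f

u = np.zeros(N)                         # initial guess
for _ in range(400):                    # outer pseudo time steps
    b = u - dtau * f
    # Inner loop: minimize the well-conditioned sub-problem loss
    # L(v) = ||A v - b||^2 by plain gradient descent, mimicking how a
    # network would be trained on the same loss.
    v = u.copy()
    for _ in range(100):
        v -= 0.008 * 2.0 * A.T @ (A @ v - b)
    u = v

print(float(np.max(np.abs(u - u_exact))))  # small error vs exact solution
```

Note how the implicit scheme requires nothing beyond evaluating the loss at the new iterate, which is the abstract's point that an implicit scheme is as straightforward to implement as an explicit one in this optimization framework.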
Related papers
- FEM-based Neural Networks for Solving Incompressible Fluid Flows and Related Inverse Problems [41.94295877935867]
Numerical simulation and optimization of technical systems described by partial differential equations are expensive.
A comparatively new approach in this context is to combine the good approximation properties of neural networks with the classical finite element method.
In this paper, we extend this approach to saddle-point and non-linear fluid dynamics problems, respectively.
arXiv Detail & Related papers (2024-09-06T07:17:01Z)
- Constrained or Unconstrained? Neural-Network-Based Equation Discovery from Data [0.0]
We represent the PDE as a neural network and use an intermediate state representation similar to a Physics-Informed Neural Network (PINN).
We present a penalty method and a widely used trust-region barrier method to solve this constrained optimization problem.
Our results on the Burgers' and the Korteweg-de Vries equations demonstrate that the latter constrained method outperforms the penalty method.
arXiv Detail & Related papers (2024-05-30T01:55:44Z)
- RoPINN: Region Optimized Physics-Informed Neural Networks [66.38369833561039]
Physics-informed neural networks (PINNs) have been widely applied to solve partial differential equations (PDEs).
This paper proposes and theoretically studies a new training paradigm as region optimization.
A practical training algorithm, Region Optimized PINN (RoPINN), is seamlessly derived from this new paradigm.
arXiv Detail & Related papers (2024-05-23T09:45:57Z)
- Exact Enforcement of Temporal Continuity in Sequential Physics-Informed Neural Networks [0.0]
We introduce a method to enforce continuity between successive time segments via a solution ansatz.
The method is tested for a number of benchmark problems involving both linear and non-linear PDEs.
The numerical experiments conducted with the proposed method demonstrate superior convergence and accuracy over both traditional PINNs and their soft-constrained counterparts.
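The idea of enforcing continuity exactly via a solution ansatz can be sketched in one line. This is a hedged illustration, not the paper's exact construction: the segment's solution is written as the previous segment's interface value plus a factor of (t - t0) times the network output, so the interface condition holds by construction, regardless of the network weights:

```python
# Sketch of a hard-constraint continuity ansatz for sequential PINNs
# (assumption: the exact ansatz below is a common generic form, not
# necessarily the one used in the cited paper).
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(16, 1))            # weights of a tiny random MLP
b = rng.normal(size=16)                 # standing in for the segment's
a = rng.normal(size=16)                 # trained network

def net(t):
    return np.tanh(np.atleast_1d(t)[:, None] * W.T + b) @ a

t0, u0 = 0.5, 1.234   # interface time and value from the previous segment

def u_segment(t):
    # ansatz: u(t) = u0 + (t - t0) * net(t)  => u(t0) = u0 by construction
    return u0 + (np.atleast_1d(t) - t0) * net(t)

print(u_segment(t0))  # equals u0 exactly, whatever the network weights are
```

Because the interface value is built into the ansatz rather than penalized in the loss, no soft-constraint weight needs tuning at the segment boundary.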
arXiv Detail & Related papers (2024-02-15T17:41:02Z)
- Meta-learning of Physics-informed Neural Networks for Efficiently Solving Newly Given PDEs [33.072056425485115]
We propose a neural network-based meta-learning method to efficiently solve partial differential equation (PDE) problems.
The proposed method is designed to meta-learn how to solve a wide variety of PDE problems, and uses the knowledge for solving newly given PDE problems.
We demonstrate that our proposed method outperforms existing methods in predicting solutions of PDE problems.
arXiv Detail & Related papers (2023-10-20T04:35:59Z)
- A Stable and Scalable Method for Solving Initial Value PDEs with Neural Networks [52.5899851000193]
We show that current methods based on this approach suffer from two key issues.
First, following the ODE produces an uncontrolled growth in the conditioning of the problem, ultimately leading to unacceptably large numerical errors.
We develop an ODE-based IVP solver that prevents the network from becoming ill-conditioned and runs in time linear in the number of parameters.
arXiv Detail & Related papers (2023-04-28T17:28:18Z)
- Learning Physics-Informed Neural Networks without Stacked Back-propagation [82.26566759276105]
We develop a novel approach that can significantly accelerate the training of Physics-Informed Neural Networks.
In particular, we parameterize the PDE solution by the Gaussian smoothed model and show that, derived from Stein's Identity, the second-order derivatives can be efficiently calculated without back-propagation.
Experimental results show that our proposed method can achieve competitive error compared to standard PINN training but is two orders of magnitude faster.
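The derivative trick this entry mentions can be illustrated in one dimension. This is a hedged sketch, not the paper's multivariate implementation: for a Gaussian-smoothed function f_sigma(x) = E[f(x + sigma*eps)], Stein's identity gives the second derivative as E[(eps^2 - 1) * f(x + sigma*eps)] / sigma^2, i.e. from function evaluations alone, with no back-propagation. For f(x) = x^2 the smoothed function is x^2 + sigma^2, so the true second derivative is exactly 2:

```python
# Monte Carlo second derivative via Stein's identity (1D illustration;
# the sampled form and the f(x) control variate are generic choices,
# not necessarily those of the cited paper).
import numpy as np

f = lambda x: x**2   # toy function; smoothed second derivative is 2

def hessian_stein(f, x, sigma=0.5, n=200_000, seed=0):
    # f_sigma''(x) = E[(eps^2 - 1) * f(x + sigma*eps)] / sigma^2.
    # Subtracting f(x) leaves the expectation unchanged (E[eps^2 - 1] = 0)
    # but reduces Monte Carlo variance.
    eps = np.random.default_rng(seed).standard_normal(n)
    return np.mean((eps**2 - 1.0) * (f(x + sigma * eps) - f(x))) / sigma**2

print(hessian_stein(f, 1.0))  # close to 2.0
```

The estimator touches f only through forward evaluations, which is what makes the approach attractive for PINN losses that otherwise need second-order autodiff.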
arXiv Detail & Related papers (2022-02-18T18:07:54Z)
- Message Passing Neural PDE Solvers [60.77761603258397]
We build a neural message passing solver, replacing all heuristically designed components in the graph with backprop-optimized neural function approximators.
We show that neural message passing solvers representationally contain some classical methods, such as finite differences, finite volumes, and WENO schemes.
We validate our method on various fluid-like flow problems, demonstrating fast, stable, and accurate performance across different domain topologies, equation parameters, discretizations, etc., in 1D and 2D.
arXiv Detail & Related papers (2022-02-07T17:47:46Z)
- Neural Stochastic Dual Dynamic Programming [99.80617899593526]
We introduce a trainable neural model that learns to map problem instances to a piece-wise linear value function.
$\nu$-SDDP can significantly reduce problem solving cost without sacrificing solution quality.
arXiv Detail & Related papers (2021-12-01T22:55:23Z)
- dNNsolve: an efficient NN-based PDE solver [62.997667081978825]
We introduce dNNsolve, which makes use of dual neural networks to solve ODEs/PDEs.
We show that dNNsolve is capable of solving a broad range of ODEs/PDEs in 1, 2 and 3 spacetime dimensions.
arXiv Detail & Related papers (2021-03-15T19:14:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.