Multi-Grade Deep Learning for Partial Differential Equations with
Applications to the Burgers Equation
- URL: http://arxiv.org/abs/2309.07401v1
- Date: Thu, 14 Sep 2023 03:09:58 GMT
- Title: Multi-Grade Deep Learning for Partial Differential Equations with
Applications to the Burgers Equation
- Authors: Yuesheng Xu and Taishan Zeng
- Abstract summary: We develop in this paper a multi-grade deep learning method
for solving nonlinear partial differential equations (PDEs). Deep neural
networks (DNNs) have achieved outstanding performance in solving PDEs. The
implementation in this paper focuses only on the 1D, 2D, and 3D Burgers
equations.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We develop in this paper a multi-grade deep learning method for solving
nonlinear partial differential equations (PDEs). Deep neural networks (DNNs)
have achieved outstanding performance in solving PDEs, in addition to their
outstanding success in areas such as natural language processing, computer
vision, and robotics. However, training a very deep network is often a
challenging task. As the number of layers of a DNN increases, solving a
large-scale non-convex optimization problem that results in the DNN solution of
PDEs becomes more and more difficult, which may lead to a decrease rather than
an increase in predictive accuracy. To overcome this challenge, we propose a
two-stage multi-grade deep learning (TS-MGDL) method that breaks down the task
of learning a DNN into several neural networks stacked on top of each other in
a staircase-like manner. This approach allows us to mitigate the complexity of
solving the non-convex optimization problem with a large number of parameters
and
learn residual components left over from previous grades efficiently. We prove
that each grade/stage of the proposed TS-MGDL method can reduce the value of
the loss function and further validate this fact through numerical experiments.
Although the proposed method is applicable to general PDEs, implementation in
this paper focuses only on the 1D, 2D, and 3D viscous Burgers equations.
Experimental results show that the proposed two-stage multi-grade deep learning
method enables efficient learning of solutions of the equations and outperforms
existing single-grade deep learning methods in predictive accuracy.
Specifically, the predictive errors of single-grade deep learning are 26 to
60, 4 to 31, and 3 to 12 times larger than those of the TS-MGDL method for the
1D, 2D, and 3D equations, respectively.
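The staircase idea behind TS-MGDL can be illustrated in miniature: each grade fits only the residual left by the grades before it, so the training loss cannot increase from grade to grade. The sketch below is a hypothetical stand-in using small random-feature models fitted by least squares on a toy 1D target, not the paper's actual networks or the Burgers solution.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1D target standing in for a PDE solution (illustrative only).
x = np.linspace(-1.0, 1.0, 200)[:, None]
y = np.sin(3 * np.pi * x[:, 0]) + 0.3 * np.sin(9 * np.pi * x[:, 0])

def fit_grade(x, residual, width, rng):
    """One 'grade': a small random-feature model fitted by least squares."""
    W = rng.normal(scale=5.0, size=(1, width))
    b = rng.normal(scale=1.0, size=width)
    H = np.tanh(x @ W + b)                      # fixed random hidden features
    coef, *_ = np.linalg.lstsq(H, residual, rcond=None)
    return lambda xq: np.tanh(xq @ W + b) @ coef

prediction = np.zeros_like(y)
losses = []
for grade in range(3):                          # grades stacked like a staircase
    residual = y - prediction                   # what earlier grades left over
    model = fit_grade(x, residual, width=10 * (grade + 1), rng=rng)
    prediction = prediction + model(x)          # earlier grades stay frozen
    losses.append(np.mean((y - prediction) ** 2))

print(losses)  # the mean-squared loss should be non-increasing across grades
```

Because each grade solves a least-squares problem on the current residual, setting its coefficients to zero would reproduce the previous loss, so the loss is non-increasing by construction, which mirrors the paper's grade-wise loss-reduction guarantee.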
Related papers
- A forward differential deep learning-based algorithm for solving high-dimensional nonlinear backward stochastic differential equations [0.6040014326756179]
We present a novel forward differential deep learning-based algorithm for solving high-dimensional nonlinear backward stochastic differential equations (BSDEs).
Motivated by the fact that differential deep learning can efficiently approximate the labels and their derivatives with respect to inputs, we transform the BSDE problem into a differential deep learning problem.
The main idea of our algorithm is to discretize the integrals using the Euler-Maruyama method and approximate the unknown discrete solution triple using three deep neural networks.
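The Euler-Maruyama step mentioned above can be sketched on a scalar SDE. This is a generic illustration of the discretisation only, with assumed toy coefficients; the paper applies it inside a BSDE solver where three deep networks approximate the unknown solution triple.

```python
import numpy as np

rng = np.random.default_rng(1)

# Euler-Maruyama discretisation of dX = mu*X dt + sigma*X dW
# (geometric Brownian motion, chosen because E[X_T] is known exactly).
mu, sigma, x0, T = 0.05, 0.2, 1.0, 1.0
n_steps, n_paths = 100, 20_000
dt = T / n_steps

X = np.full(n_paths, x0)
for _ in range(n_steps):
    dW = rng.normal(scale=np.sqrt(dt), size=n_paths)  # Brownian increments
    X = X + mu * X * dt + sigma * X * dW              # one EM step

em_mean = X.mean()
exact_mean = x0 * np.exp(mu * T)   # exact E[X_T] for GBM
print(em_mean, exact_mean)
```

With 20,000 paths the Monte Carlo mean should land close to the analytic value, which is the usual sanity check before swapping in learned components.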
arXiv Detail & Related papers (2024-08-10T19:34:03Z)
- A backward differential deep learning-based algorithm for solving high-dimensional nonlinear backward stochastic differential equations [0.6040014326756179]
We propose a novel backward differential deep learning-based algorithm for solving high-dimensional nonlinear backward stochastic differential equations.
The deep neural network (DNN) models are trained not only on the inputs and labels but also on the differentials of the corresponding labels.
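Training on labels together with their differentials can be shown in miniature by stacking both conditions into one least-squares system. This is a toy sketch with an assumed polynomial model, not the paper's DNN training; the principle of jointly matching values and derivatives is the same.

```python
import numpy as np

# Fit y = x^2 using BOTH the labels y and their derivatives dy/dx = 2x.
x = np.linspace(0.0, 1.0, 50)
y = x ** 2          # labels
dy = 2.0 * x        # differentials of the labels w.r.t. the input

# Model y ~ a + b*x + c*x^2, so its derivative is b + 2*c*x.
A_val = np.stack([np.ones_like(x), x, x ** 2], axis=1)   # value equations
A_der = np.stack([np.zeros_like(x), np.ones_like(x), 2 * x], axis=1)  # derivative equations
A = np.vstack([A_val, A_der])          # one joint system
rhs = np.concatenate([y, dy])

coef, *_ = np.linalg.lstsq(A, rhs, rcond=None)
print(coef)  # recovers (a, b, c) = (0, 0, 1)
```

The derivative rows act as extra supervision, which is the mechanism that lets differential deep learning approximate labels and their input-derivatives at once.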
arXiv Detail & Related papers (2024-04-12T13:05:35Z)
- Accelerated primal-dual methods with enlarged step sizes and operator learning for nonsmooth optimal control problems [3.1006429989273063]
We focus on the application of a primal-dual method, with which different types of variables can be treated individually.
For the accelerated primal-dual method with larger step sizes, its convergence can be proved rigorously while it numerically accelerates the original primal-dual method.
For the operator learning acceleration, we construct deep neural network surrogate models for the involved PDEs.
arXiv Detail & Related papers (2023-07-01T10:39:07Z)
- iPINNs: Incremental learning for Physics-informed neural networks [66.4795381419701]
Physics-informed neural networks (PINNs) have recently become a powerful tool for solving partial differential equations (PDEs).
We propose incremental PINNs that can learn multiple tasks sequentially without additional parameters for new tasks and improve performance for every equation in the sequence.
Our approach learns multiple PDEs starting from the simplest one by creating its own subnetwork for each PDE and allowing each subnetwork to overlap with previously learned subnetworks.
arXiv Detail & Related papers (2023-04-10T20:19:20Z)
- Implicit Stochastic Gradient Descent for Training Physics-informed Neural Networks [51.92362217307946]
Physics-informed neural networks (PINNs) have effectively been demonstrated in solving forward and inverse differential equation problems.
PINNs can be trapped in training failures when the target functions to be approximated exhibit high-frequency or multi-scale features.
In this paper, we propose to employ the implicit stochastic gradient descent (ISGD) method to train PINNs in order to improve the stability of the training process.
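Why an implicit step stabilises training can be seen on a one-dimensional quadratic loss, where the implicit update has a closed form. This is a generic illustration of implicit vs. explicit gradient steps with assumed toy numbers, not the paper's PINN training loop.

```python
# Loss L(w) = 0.5 * a * (w - w_star)^2, gradient a * (w - w_star).
# Explicit step:  w_new = w - lr * grad(w)        -> diverges when lr > 2/a.
# Implicit step:  w_new = w - lr * grad(w_new)    -> stable for ANY lr;
# for this quadratic it solves to (w + lr*a*w_star) / (1 + lr*a).
a, w_star, lr = 100.0, 3.0, 0.05   # lr * a = 5, so the explicit step diverges

w_exp = w_imp = 0.0
for _ in range(50):
    w_exp = w_exp - lr * a * (w_exp - w_star)          # explicit update
    w_imp = (w_imp + lr * a * w_star) / (1 + lr * a)   # implicit update

print(abs(w_exp - w_star), abs(w_imp - w_star))
```

The explicit error is multiplied by |1 - lr*a| = 4 each step and blows up, while the implicit error shrinks by a factor 1/(1 + lr*a) = 1/6, which is the stability property ISGD exploits for stiff, multi-scale losses.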
arXiv Detail & Related papers (2023-03-03T08:17:47Z)
- NeuralStagger: Accelerating Physics-constrained Neural PDE Solver with Spatial-temporal Decomposition [67.46012350241969]
This paper proposes a general acceleration methodology called NeuralStagger.
It decomposes the original learning task into several coarser-resolution subtasks.
We demonstrate the successful application of NeuralStagger on 2D and 3D fluid dynamics simulations.
arXiv Detail & Related papers (2023-02-20T19:36:52Z)
- Learning Physics-Informed Neural Networks without Stacked Back-propagation [82.26566759276105]
We develop a novel approach that can significantly accelerate the training of Physics-Informed Neural Networks.
In particular, we parameterize the PDE solution by the Gaussian smoothed model and show that, derived from Stein's Identity, the second-order derivatives can be efficiently calculated without back-propagation.
Experimental results show that our proposed method can achieve competitive error compared to standard PINN training but is two orders of magnitude faster.
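The Stein's-identity trick mentioned above can be sketched for a scalar function: for the Gaussian-smoothed model f_sigma(x) = E[f(x + sigma*z)] with z ~ N(0,1), the second derivative equals E[f(x + sigma*z)(z^2 - 1)] / sigma^2, i.e. it needs only function values, no back-propagation. The snippet below is a Monte Carlo sketch of the identity on sin(x); the antithetic pairing is an added variance-reduction assumption, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

def second_derivative_stein(f, x, sigma=0.1, n=200_000, rng=rng):
    """Estimate f''(x) from function VALUES only, via Stein's identity
    applied to the Gaussian-smoothed function (antithetic sampling)."""
    z = rng.normal(size=n)
    # f(x+sz) + f(x-sz) - 2 f(x) pairs +z with -z to cut the variance.
    vals = f(x + sigma * z) + f(x - sigma * z) - 2.0 * f(x)
    return np.mean(vals * (z ** 2 - 1.0)) / (2.0 * sigma ** 2)

x0 = 1.0
est = second_derivative_stein(np.sin, x0)
print(est, -np.sin(x0))  # estimate vs. exact second derivative of sin at x0
```

Because only forward evaluations of f appear, the same estimator works for a neural network, which is what lets PINN-style residuals be trained without stacked back-propagation through second derivatives.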
arXiv Detail & Related papers (2022-02-18T18:07:54Z)
- Solving Partial Differential Equations with Point Source Based on Physics-Informed Neural Networks [33.18757454787517]
In recent years, deep learning technology has been used to solve partial differential equations (PDEs).
We propose a universal solution to tackle this problem with three novel techniques.
We evaluate the proposed method with three representative PDEs, and the experimental results show that our method outperforms existing deep learning-based methods with respect to the accuracy, the efficiency and the versatility.
arXiv Detail & Related papers (2021-11-02T06:39:54Z)
- Characterizing possible failure modes in physics-informed neural networks [55.83255669840384]
Recent work in scientific machine learning has developed so-called physics-informed neural network (PINN) models.
We demonstrate that, while existing PINN methodologies can learn good models for relatively trivial problems, they can easily fail to learn relevant physical phenomena even for simple PDEs.
We show that these possible failure modes are not due to the lack of expressivity in the NN architecture, but that the PINN's setup makes the loss landscape very hard to optimize.
arXiv Detail & Related papers (2021-09-02T16:06:45Z)
- Adversarial Multi-task Learning Enhanced Physics-informed Neural Networks for Solving Partial Differential Equations [9.823102211212582]
We introduce the novel approach of employing multi-task learning techniques, namely uncertainty weighting of the loss and gradient surgery, in the context of learning PDE solutions.
In the experiments, our proposed methods are found to be effective and reduce the error on the unseen data points as compared to the previous approaches.
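The uncertainty-weighting idea can be sketched in closed form: each task loss L_i is scaled by a learned precision exp(-s_i), with a +s_i penalty that stops the weights collapsing to zero. This is a minimal sketch of the general technique with assumed toy loss values, not the paper's exact PDE training objective.

```python
import numpy as np

def weighted_total(losses, s):
    """Uncertainty-weighted multi-task loss: sum_i exp(-s_i)*L_i + s_i."""
    return np.sum(np.exp(-s) * losses + s)

# Hypothetical per-task losses, e.g. PDE residual, boundary and initial terms.
task_losses = np.array([4.0, 1.0, 0.25])

# For fixed losses the total is minimised at s_i = log(L_i), so larger
# (harder) losses automatically receive smaller weights exp(-s_i) = 1/L_i.
s_opt = np.log(task_losses)

# At the optimum every weighted term exp(-s_i)*L_i equals exactly 1.
print(np.exp(-s_opt) * task_losses, weighted_total(task_losses, s_opt))
```

In actual training the s_i are optimised jointly with the network parameters, so the balance between the PDE residual and the data terms adapts as the losses evolve.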
arXiv Detail & Related papers (2021-04-29T13:17:46Z)
- dNNsolve: an efficient NN-based PDE solver [62.997667081978825]
We introduce dNNsolve, which makes use of dual neural networks to solve ODEs/PDEs.
We show that dNNsolve is capable of solving a broad range of ODEs/PDEs in 1, 2 and 3 spacetime dimensions.
arXiv Detail & Related papers (2021-03-15T19:14:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.