Adversarial Multi-task Learning Enhanced Physics-informed Neural Networks for Solving Partial Differential Equations
- URL: http://arxiv.org/abs/2104.14320v1
- Date: Thu, 29 Apr 2021 13:17:46 GMT
- Title: Adversarial Multi-task Learning Enhanced Physics-informed Neural Networks for Solving Partial Differential Equations
- Authors: Pongpisit Thanasutives, Ken-ichi Fukui, Masayuki Numao
- Abstract summary: We introduce the novel approach of employing multi-task learning techniques, the uncertainty-weighting loss and gradient surgery, in the context of learning PDE solutions.
In the experiments, our proposed methods are found to be effective, reducing the error on unseen data points compared to previous approaches.
- Score: 9.823102211212582
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recently, researchers have utilized neural networks to accurately
solve partial differential equations (PDEs), enabling the mesh-free method for
scientific computation. Unfortunately, the network performance drops when
encountering high-nonlinearity regions of the domain. To improve
generalizability, we introduce the novel approach of employing multi-task
learning techniques, the uncertainty-weighting loss and gradient surgery, in
the context of learning PDE solutions. The multi-task scheme exploits the
benefits of learning shared representations, controlled by cross-stitch
modules, between multiple related PDEs, which are obtainable by varying the
PDE parameterization coefficients, to generalize better on the original PDE.
To encourage the network to pay closer attention to the high-nonlinearity
regions that are more challenging to learn, we also propose adversarial
training for generating supplementary high-loss samples, distributed similarly
to the original training distribution. In the experiments, our proposed
methods are found to be effective, reducing the error on unseen data points
compared to previous approaches in various PDE examples, including
high-dimensional stochastic PDEs.
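The uncertainty-weighting loss mentioned in the abstract is commonly implemented as homoscedastic uncertainty weighting (Kendall et al., 2018), with one learnable log-variance per task. The following is a minimal PyTorch sketch of that generic scheme, not the authors' exact code; class and variable names are illustrative.

```python
import torch
import torch.nn as nn

class UncertaintyWeightedLoss(nn.Module):
    """Homoscedastic uncertainty weighting (Kendall et al., 2018).

    Combines task losses L_i as sum_i exp(-s_i) * L_i + s_i, where
    s_i = log(sigma_i^2) is a learnable log-variance for task i.
    """

    def __init__(self, num_tasks: int):
        super().__init__()
        # One log-variance per task, initialised to 0 (i.e. sigma_i = 1).
        self.log_vars = nn.Parameter(torch.zeros(num_tasks))

    def forward(self, task_losses):
        total = 0.0
        for s, loss in zip(self.log_vars, task_losses):
            total = total + torch.exp(-s) * loss + s
        return total

# In a PINN setting the "tasks" would typically be the PDE-residual loss
# and the boundary/initial-condition losses of each related PDE.
weighter = UncertaintyWeightedLoss(num_tasks=2)
print(weighter([torch.tensor(0.3), torch.tensor(0.05)]))
```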
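Gradient surgery here most likely refers to PCGrad (Yu et al., 2020): when two task gradients point in conflicting directions, the component of one along the other is projected away before the update. Below is a flattened-vector sketch under that assumption; the published algorithm also shuffles the task order each step, which this sketch omits.

```python
import torch

def pcgrad(per_task_grads):
    """PCGrad-style gradient surgery on flattened, detached gradients.

    Each element of `per_task_grads` is a 1-D tensor holding one task's
    gradient over all parameters. If two gradients conflict (negative
    inner product), the projection of one onto the other is removed.
    """
    projected = [g.clone() for g in per_task_grads]
    for i, g_i in enumerate(projected):
        for j, g_j in enumerate(per_task_grads):
            if i == j:
                continue
            dot = torch.dot(g_i, g_j)
            if dot < 0:  # conflicting directions
                g_i -= (dot / g_j.norm() ** 2) * g_j
    return torch.stack(projected).sum(dim=0)

# Toy demo with two conflicting 2-D "gradients".
print(pcgrad([torch.tensor([1.0, 1.0]), torch.tensor([-1.0, 0.5])]))
```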
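The adversarial sampling idea can be sketched as gradient ascent on the PDE-residual loss with respect to the collocation coordinates, FGSM-style. This is a generic reconstruction, not the paper's exact procedure; `pde_residual`, the step sizes, and the optional `bounds` clamp (which keeps generated points inside the original domain) are assumptions.

```python
import torch

def adversarial_collocation(model, pde_residual, x, step=1e-2, n_steps=5,
                            bounds=None):
    """Generate supplementary high-loss collocation points by ascending
    the squared PDE residual w.r.t. the input coordinates x.

    `pde_residual(model, x)` must return the pointwise PDE residual and
    be differentiable in x (e.g. built with autograd, as in PINNs).
    """
    x_adv = x.clone().detach().requires_grad_(True)
    for _ in range(n_steps):
        loss = pde_residual(model, x_adv).pow(2).mean()
        (grad,) = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv += step * grad.sign()      # FGSM-style ascent step
            if bounds is not None:           # stay inside the domain
                x_adv.clamp_(*bounds)
    return x_adv.detach()
```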
Related papers
- Constrained or Unconstrained? Neural-Network-Based Equation Discovery from Data [0.0]
We represent the PDE as a neural network and use an intermediate state representation similar to a Physics-Informed Neural Network (PINN).
We present a penalty method and a widely used trust-region barrier method to solve this constrained optimization problem.
Our results on the Burgers' and Korteweg-de Vries equations demonstrate that the latter constrained method outperforms the penalty method.
arXiv Detail & Related papers (2024-05-30T01:55:44Z)
- Automatic Differentiation is Essential in Training Neural Networks for Solving Differential Equations [7.890817997914349]
Neural network-based approaches have recently shown significant promise in solving partial differential equations (PDEs) in science and engineering.
One advantage of neural network methods for PDEs lies in their automatic differentiation (AD).
In this paper, we quantitatively demonstrate the advantage of AD in training neural networks; a minimal residual-via-AD sketch follows this entry.
arXiv Detail & Related papers (2024-05-23T02:01:05Z)
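As a concrete illustration of the AD advantage, the sketch below evaluates the residual of Burgers' equation, u_t + u*u_x - nu*u_xx, exactly at arbitrary points via autograd rather than finite-difference stencils. `u_net` is any network mapping (x, t) to u; names and the value of nu are illustrative.

```python
import math
import torch

def burgers_residual(u_net, x, t, nu=0.01 / math.pi):
    """Pointwise residual of u_t + u*u_x - nu*u_xx via autograd."""
    x = x.clone().requires_grad_(True)
    t = t.clone().requires_grad_(True)
    u = u_net(torch.stack([x, t], dim=-1)).squeeze(-1)
    (u_x,) = torch.autograd.grad(u.sum(), x, create_graph=True)
    (u_t,) = torch.autograd.grad(u.sum(), t, create_graph=True)
    (u_xx,) = torch.autograd.grad(u_x.sum(), x, create_graph=True)
    return u_t + u * u_x - nu * u_xx
```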
- Pretraining Codomain Attention Neural Operators for Solving Multiphysics PDEs [85.40198664108624]
We propose Codomain Attention Neural Operator (CoDA-NO) to solve multiphysics problems with PDEs.
CoDA-NO tokenizes functions along the codomain or channel space, enabling self-supervised learning or pretraining of multiple PDE systems.
We find CoDA-NO to outperform existing methods by over 36% on complex downstream tasks with limited data.
arXiv Detail & Related papers (2024-03-19T08:56:20Z)
- Solutions to Elliptic and Parabolic Problems via Finite Difference Based Unsupervised Small Linear Convolutional Neural Networks [1.124958340749622]
We propose a fully unsupervised approach, requiring no training data, to estimate finite difference solutions for PDEs directly via small linear convolutional neural networks.
Our proposed approach uses substantially fewer parameters than similar finite difference-based approaches while also demonstrating comparable accuracy to the true solution for several selected elliptic and parabolic problems; a toy sketch of the residual-minimization idea follows this entry.
arXiv Detail & Related papers (2023-11-01T03:15:10Z)
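A toy rendering of the unsupervised finite-difference idea: minimize the discrete residual of the Poisson problem -Laplace(u) = f, computed by a fixed 5-point convolution stencil, with zero Dirichlet boundaries supplied by padding. The paper's model is a small linear CNN rather than the free grid optimized here, so this sketch shows only the residual-minimization principle.

```python
import torch
import torch.nn.functional as F

n, h = 64, 1.0 / 65                      # interior grid size and spacing
f = torch.ones(1, 1, n, n)               # right-hand side
u = torch.zeros(1, 1, n, n, requires_grad=True)
stencil = torch.tensor([[[[0., -1., 0.],
                          [-1.,  4., -1.],
                          [0., -1., 0.]]]]) / h**2  # 5-point -Laplacian

opt = torch.optim.Adam([u], lr=1e-3)
for _ in range(2000):
    opt.zero_grad()
    lap = F.conv2d(F.pad(u, (1, 1, 1, 1)), stencil)  # zero pad = zero BCs
    (lap - f).pow(2).mean().backward()
    opt.step()
```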
- PDE+: Enhancing Generalization via PDE with Adaptive Distributional Diffusion [66.95761172711073]
The generalization of neural networks is a central challenge in machine learning.
We propose to enhance it directly through the underlying function of neural networks, rather than focusing on adjusting input data.
We put this theoretical framework into practice as $\textbf{PDE}+$ ($\textbf{PDE}$ with $\textbf{A}$daptive $\textbf{D}$istributional $\textbf{D}$iffusion).
arXiv Detail & Related papers (2023-05-25T08:23:26Z)
- A Stable and Scalable Method for Solving Initial Value PDEs with Neural Networks [52.5899851000193]
We show that current methods based on this approach suffer from two key issues.
First, following the ODE produces an uncontrolled growth in the conditioning of the problem, ultimately leading to unacceptably large numerical errors.
We develop an ODE-based IVP solver which prevents the network from getting ill-conditioned and runs in time linear in the number of parameters.
arXiv Detail & Related papers (2023-04-28T17:28:18Z)
- Neural Partial Differential Equations with Functional Convolution [30.35306295442881]
We present a lightweight neural PDE representation to discover the hidden structure and predict the solution of different nonlinear PDEs.
We leverage the prior of "translational similarity" of numerical PDE differential operators to drastically reduce the scale of the learning model and training data.
arXiv Detail & Related papers (2023-03-10T04:25:38Z)
- Implicit Stochastic Gradient Descent for Training Physics-informed Neural Networks [51.92362217307946]
Physics-informed neural networks (PINNs) have effectively been demonstrated in solving forward and inverse differential equation problems.
PINNs can be trapped in training failures when the target functions to be approximated exhibit high-frequency or multi-scale features.
In this paper, we propose to employ the implicit stochastic gradient descent (ISGD) method to train PINNs to improve the stability of the training process; a generic proximal-style ISGD step is sketched after this entry.
arXiv Detail & Related papers (2023-03-03T08:17:47Z)
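Implicit (stochastic) gradient descent replaces the explicit step with the proximal update theta_{k+1} = argmin_theta L(theta) + ||theta - theta_k||^2 / (2*eta). The sketch below approximates that subproblem with a short inner loop of explicit steps; it is a generic ISGD illustration under that assumption, not the paper's exact scheme.

```python
import torch

def implicit_sgd_step(params, loss_fn, eta=0.1, inner_steps=20, inner_lr=0.02):
    """Approximate one implicit SGD step by minimizing the proximal
    objective loss(theta) + ||theta - theta_k||^2 / (2*eta) with a few
    explicit gradient steps. `loss_fn` closes over `params` (e.g. a
    PINN loss on the current minibatch).
    """
    anchor = [p.detach().clone() for p in params]
    for _ in range(inner_steps):
        prox = sum(((p - a) ** 2).sum() for p, a in zip(params, anchor))
        grads = torch.autograd.grad(loss_fn() + prox / (2 * eta), params)
        with torch.no_grad():
            for p, g in zip(params, grads):
                p -= inner_lr * g
```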
- TransNet: Transferable Neural Networks for Partial Differential Equations [14.15250342406011]
Existing transfer learning approaches require much information about the target PDE, such as its formulation and/or data of its solution, for pre-training.
We propose to construct transferable neural feature spaces from purely function approximation perspectives without using PDE information.
arXiv Detail & Related papers (2023-01-27T13:26:25Z)
- Learning differentiable solvers for systems with hard constraints [48.54197776363251]
We introduce a practical method to enforce partial differential equation (PDE) constraints for functions defined by neural networks (NNs).
We develop a differentiable PDE-constrained layer that can be incorporated into any NN architecture.
Our results show that incorporating hard constraints directly into the NN architecture achieves much lower test error when compared to training on an unconstrained objective; a sketch of the related hard boundary-condition trick follows this entry.
arXiv Detail & Related papers (2022-07-18T15:11:43Z)
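The simplest instance of baking a constraint into the architecture is hard Dirichlet boundary enforcement: compose the network with a function that satisfies the boundary values for any weights. This classic PINN trick is a simpler cousin of the paper's differentiable PDE-constrained layer, shown here as a hedged 1-D sketch.

```python
import torch
import torch.nn as nn

class HardDirichletNet(nn.Module):
    """u(x) = a*(1 - x) + b*x + x*(1 - x)*net(x) on [0, 1], so u(0) = a
    and u(1) = b hold exactly for any network weights."""

    def __init__(self, a=0.0, b=0.0, width=32):
        super().__init__()
        self.a, self.b = a, b
        self.net = nn.Sequential(
            nn.Linear(1, width), nn.Tanh(),
            nn.Linear(width, width), nn.Tanh(),
            nn.Linear(width, 1),
        )

    def forward(self, x):
        return self.a * (1 - x) + self.b * x + x * (1 - x) * self.net(x)
```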
- Lie Point Symmetry Data Augmentation for Neural PDE Solvers [69.72427135610106]
We present a method that can partially alleviate the data demands of neural PDE solvers by improving their sample complexity.
In the context of PDEs, it turns out that we are able to quantitatively derive an exhaustive list of data transformations.
We show how it can easily be deployed to improve neural PDE solver sample complexity by an order of magnitude; two such symmetry transformations for Burgers' equation are sketched after this entry.
arXiv Detail & Related papers (2022-02-15T18:43:17Z)
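For Burgers' equation two Lie point symmetries are easy to state: space translation (x, t, u) -> (x + c, t, u) and the Galilean boost (x, t, u) -> (x + eps*t, t, u + eps), both of which map exact solutions to exact solutions. Below is a sketch assuming pointwise (x, t, u) samples; the paper derives the exhaustive list systematically, and for trajectory datasets the maps are applied to whole input/output pairs.

```python
import torch

def lie_augment_burgers(x, t, u, n_aug=4, scale=0.1):
    """Augment (x, t, u) samples of a Burgers solution with random space
    translations and Galilean boosts (both exact symmetries of
    u_t + u*u_x = nu*u_xx)."""
    xs, ts, us = [x], [t], [u]
    for _ in range(n_aug):
        c, eps = scale * torch.randn(2)
        xs += [x + c, x + eps * t]        # translation, boost
        ts += [t, t]
        us += [u, u + eps]                # boost also shifts u by eps
    return torch.cat(xs), torch.cat(ts), torch.cat(us)
```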
This list is automatically generated from the titles and abstracts of the papers in this site.