Auxiliary-Tasks Learning for Physics-Informed Neural Network-Based
Partial Differential Equations Solving
- URL: http://arxiv.org/abs/2307.06167v1
- Date: Wed, 12 Jul 2023 13:46:40 GMT
- Title: Auxiliary-Tasks Learning for Physics-Informed Neural Network-Based
Partial Differential Equations Solving
- Authors: Junjun Yan, Xinhai Chen, Zhichao Wang, Enqiang Zhou and Jie Liu
- Abstract summary: Physics-informed neural networks (PINNs) have emerged as promising surrogate models for solving partial differential equations (PDEs)
We propose auxiliary-task learning-based ATL-PINNs, which provide four different auxiliary-task learning modes.
Our findings show that the proposed auxiliary-task learning modes can significantly improve solution accuracy, achieving a maximum performance boost of 96.62%.
- Score: 13.196871939441273
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Physics-informed neural networks (PINNs) have emerged as promising surrogate
models for solving partial differential equations (PDEs). Their effectiveness
lies in the ability to capture solution-related features through neural
networks. However, original PINNs often suffer from bottlenecks, such as low
accuracy and non-convergence, limiting their applicability in complex physical
contexts. To alleviate these issues, we propose auxiliary-task learning-based
physics-informed neural networks (ATL-PINNs), which provide four different
auxiliary-task learning modes and investigate their performance compared with
original PINNs. We also employ the gradient cosine similarity algorithm to
integrate auxiliary problem loss with the primary problem loss in ATL-PINNs,
which aims to enhance the effectiveness of the auxiliary-task learning modes.
To the best of our knowledge, this is the first study to introduce
auxiliary-task learning modes in the context of physics-informed learning. We
conduct experiments on three PDE problems across different fields and
scenarios. Our findings demonstrate that the proposed auxiliary-task learning
modes can significantly improve solution accuracy, achieving a maximum
performance boost of 96.62% (averaging 28.23%) compared to the original
single-task PINNs. The code and dataset are open source at
https://github.com/junjun-yan/ATL-PINN.
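The gradient cosine similarity rule described in the abstract can be sketched as follows (a minimal NumPy illustration, not the authors' implementation; the vectors stand in for flattened network gradients): an auxiliary gradient is applied only when it points in a direction that also decreases the primary loss.

```python
import numpy as np

def combine_gradients(g_main, g_aux):
    """Gradient cosine similarity rule: keep the auxiliary-task gradient
    only when its direction is positively aligned with the primary-task
    gradient, i.e. when following it would also reduce the primary loss."""
    cos = np.dot(g_main, g_aux) / (
        np.linalg.norm(g_main) * np.linalg.norm(g_aux) + 1e-12
    )
    if cos > 0.0:        # auxiliary task helps the primary task
        return g_main + g_aux
    return g_main        # conflicting direction: drop the auxiliary term

# toy example: an aligned vs. a conflicting auxiliary gradient
g_main = np.array([1.0, 0.0])
print(combine_gradients(g_main, np.array([0.5, 0.5])))   # summed: [1.5 0.5]
print(combine_gradients(g_main, np.array([-1.0, 0.0])))  # dropped: [1. 0.]
```

In a real PINN, `g_main` and `g_aux` would be the gradients of the primary PDE residual loss and the auxiliary-problem loss with respect to the shared parameters.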
Related papers
- Aux-NAS: Exploiting Auxiliary Labels with Negligibly Extra Inference Cost [73.28626942658022]
We aim at exploiting additional auxiliary labels from an independent (auxiliary) task to boost the primary task performance.
Our method is architecture-based with a flexible asymmetric structure for the primary and auxiliary tasks.
Experiments with six tasks on NYU v2, CityScapes, and Taskonomy datasets using VGG, ResNet, and ViT backbones validate the promising performance.
arXiv Detail & Related papers (2024-05-09T11:50:19Z) - Operator Learning Enhanced Physics-informed Neural Networks for Solving
Partial Differential Equations Characterized by Sharp Solutions [10.999971808508437]
We propose a novel framework termed Operator Learning Enhanced Physics-informed Neural Networks (OL-PINN)
The proposed method requires only a small number of residual points to achieve a strong generalization capability.
It substantially enhances accuracy, while also ensuring a robust training process.
arXiv Detail & Related papers (2023-10-30T14:47:55Z) - Training Physics-Informed Neural Networks via Multi-Task Optimization
for Traffic Density Prediction [3.3823703740215865]
Physics-informed neural networks (PINNs) are a newly emerging research frontier in machine learning.
We propose a new PINN training framework based on the multi-task optimization (MTO) paradigm.
We implement the proposed framework and apply it to train the PINN for addressing the traffic density prediction problem.
arXiv Detail & Related papers (2023-07-08T07:11:52Z) - Ensemble learning for Physics Informed Neural Networks: a Gradient Boosting approach [10.250994619846416]
We present a new training paradigm referred to as "gradient boosting" (GB)
Instead of learning the solution of a given PDE using a single neural network directly, our algorithm employs a sequence of neural networks to achieve a superior outcome.
This work also unlocks the door to employing ensemble learning techniques in PINNs.
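The sequence-of-networks idea can be illustrated with a toy stagewise fit (a hedged sketch: cheap polynomial least-squares "learners" stand in for the neural networks, and a smooth target function stands in for the PDE solution):

```python
import numpy as np

x = np.linspace(-1.0, 1.0, 200)
target = np.sin(np.pi * x)           # stand-in for the PDE solution

prediction = np.zeros_like(x)
for deg in (1, 3, 5, 7):             # each stage is a richer "learner"
    residual = target - prediction   # what earlier stages failed to capture
    coeffs = np.polyfit(x, residual, deg)
    prediction += np.polyval(coeffs, x)  # additive, boosting-style update

# the combined prediction improves as stages accumulate
print(np.max(np.abs(target - prediction)))
```

Each stage trains only on the residual left by the sum of all previous stages, which is the gradient-boosting pattern the abstract refers to.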
arXiv Detail & Related papers (2023-02-25T19:11:44Z) - NeuralStagger: Accelerating Physics-constrained Neural PDE Solver with
Spatial-temporal Decomposition [67.46012350241969]
This paper proposes a general acceleration methodology called NeuralStagger.
It decomposes the original learning task into several coarser-resolution subtasks.
We demonstrate the successful application of NeuralStagger on 2D and 3D fluid dynamics simulations.
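The spatial decomposition it relies on can be illustrated with a toy staggered split (a minimal sketch of the idea, not the paper's solver): a fine grid is partitioned into interleaved coarse grids that can be processed independently and recombined without loss.

```python
import numpy as np

fine = np.arange(16, dtype=float)            # a fine 1D field
coarse_a, coarse_b = fine[0::2], fine[1::2]  # two staggered coarse subtasks

# each coarse field could be handled by its own (cheaper) solver in
# parallel; interleaving them restores the fine-resolution field exactly
recombined = np.empty_like(fine)
recombined[0::2], recombined[1::2] = coarse_a, coarse_b
print(np.array_equal(recombined, fine))      # prints True
```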
arXiv Detail & Related papers (2023-02-20T19:36:52Z) - Auto-PINN: Understanding and Optimizing Physics-Informed Neural
Architecture [77.59766598165551]
Physics-informed neural networks (PINNs) are revolutionizing science and engineering practice by bringing the power of deep learning to bear on scientific computation.
Here, we propose Auto-PINN, which employs Neural Architecture Search (NAS) techniques to PINN design.
A comprehensive set of pre-experiments using standard PDE benchmarks allows us to probe the structure-performance relationship in PINNs.
arXiv Detail & Related papers (2022-05-27T03:24:31Z) - Multigoal-oriented dual-weighted-residual error estimation using deep
neural networks [0.0]
Deep learning is considered a powerful tool with high flexibility for approximating functions.
Our approach is based on a posteriori error estimation in which the adjoint problem is solved for the error localization.
An efficient, easy-to-implement algorithm is developed to obtain a posteriori error estimates for multiple goal functionals.
arXiv Detail & Related papers (2021-12-21T16:59:44Z) - Characterizing possible failure modes in physics-informed neural
networks [55.83255669840384]
Recent work in scientific machine learning has developed so-called physics-informed neural network (PINN) models.
We demonstrate that, while existing PINN methodologies can learn good models for relatively trivial problems, they can easily fail to learn relevant physical phenomena even for simple PDEs.
We show that these possible failure modes are not due to the lack of expressivity in the NN architecture, but that the PINN's setup makes the loss landscape very hard to optimize.
arXiv Detail & Related papers (2021-09-02T16:06:45Z) - Training multi-objective/multi-task collocation physics-informed neural
network with student/teachers transfer learnings [0.0]
This paper presents a PINN training framework that employs pre-training steps and a net-to-net knowledge transfer algorithm.
A multi-objective optimization algorithm may improve the performance of a physics-informed neural network with competing constraints.
arXiv Detail & Related papers (2021-07-24T00:43:17Z) - Rectified Linear Postsynaptic Potential Function for Backpropagation in
Deep Spiking Neural Networks [55.0627904986664]
Spiking Neural Networks (SNNs) use temporal spike patterns to represent and transmit information, which is not only biologically realistic but also suitable for ultra-low-power event-driven neuromorphic implementation.
This paper investigates the contribution of spike timing dynamics to information encoding, synaptic plasticity and decision making, providing a new perspective on the design of future DeepSNNs and neuromorphic hardware systems.
arXiv Detail & Related papers (2020-03-26T11:13:07Z) - DEPARA: Deep Attribution Graph for Deep Knowledge Transferability [91.06106524522237]
We propose the DEeP Attribution gRAph (DEPARA) to investigate the transferability of knowledge learned from PR-DNNs.
In DEPARA, nodes correspond to the inputs and are represented by their vectorized attribution maps with regards to the outputs of the PR-DNN.
The knowledge transferability of two PR-DNNs is measured by the similarity of their corresponding DEPARAs.
arXiv Detail & Related papers (2020-03-17T02:07:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences.