A practical PINN framework for multi-scale problems with multi-magnitude loss terms
- URL: http://arxiv.org/abs/2308.06672v2
- Date: Sun, 29 Oct 2023 16:22:20 GMT
- Title: A practical PINN framework for multi-scale problems with multi-magnitude loss terms
- Authors: Yong Wang and Yanzhong Yao and Jiawei Guo and Zhiming Gao
- Abstract summary: We propose a practical deep learning framework for multi-scale problems based on PINNs.
The new PINN methods differ from the conventional PINN method in two main aspects.
The proposed methods significantly outperform the conventional PINN method in both computational efficiency and accuracy.
- Score: 3.8645424244172135
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: For multi-scale problems, conventional physics-informed neural networks
(PINNs) struggle to produce usable predictions. In this paper, building on PINNs,
we propose a practical deep learning framework for multi-scale problems that
reconstructs the loss function and pairs it with special neural network
architectures. The new PINN methods derived from this improved framework differ
from the conventional PINN method in two main aspects. First, the new methods use
a novel loss function obtained by modifying the standard loss function through a
(grouping) regularization strategy. The strategy applies a different power
operation to each loss term so that all terms composing the loss function have
approximately the same order of magnitude and are therefore optimized
synchronously. Second, for multi-frequency or high-frequency problems, the new
methods combine the modified loss function with an upgraded network architecture,
replacing the common fully-connected network with special architectures such as
the Fourier feature architecture or the integrated architecture we develop. The
combination of these two techniques yields a significant improvement in the
computational accuracy of multi-scale problems, and several challenging numerical
examples demonstrate the effectiveness of the proposed methods. The proposed
methods not only significantly outperform the conventional PINN method in both
computational efficiency and accuracy, but also compare favorably with
state-of-the-art methods from the recent literature. The improved PINN framework
facilitates better application of PINNs to multi-scale problems.
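To make the two ingredients concrete, the following PyTorch sketch (our
illustration, not the authors' code) shows a random Fourier feature network and
a power-operation rebalancing that maps every positive loss term into roughly
the same order of magnitude. The exponent rule p = min(1, 1/|log10 L|) and all
hyperparameters (sigma, width, depth) are assumptions made for illustration;
the paper's exact (grouping) regularization rule and its integrated
architecture are defined in the full text.

    import torch
    import torch.nn as nn

    class FourierFeatureNet(nn.Module):
        """Fully-connected net behind a random Fourier feature embedding
        (a standard construction; the paper's integrated architecture is
        its own design and is not reproduced here)."""
        def __init__(self, in_dim=1, width=64, depth=3, n_feat=64, sigma=10.0):
            super().__init__()
            # Fixed random frequencies; sigma sets the frequency scale.
            self.register_buffer("B", sigma * torch.randn(in_dim, n_feat))
            layers, d = [], 2 * n_feat
            for _ in range(depth):
                layers += [nn.Linear(d, width), nn.Tanh()]
                d = width
            self.net = nn.Sequential(*layers, nn.Linear(d, 1))

        def forward(self, x):
            z = x @ self.B
            return self.net(torch.cat([torch.cos(z), torch.sin(z)], dim=-1))

    def balanced_loss(loss_terms, eps=1e-12):
        """Sum the loss terms after a power operation that brings each
        positive term to within about one order of magnitude of the rest:
        with p = min(1, 1/|log10 L|), L ** p lies in roughly [0.1, 10]."""
        total = 0.0
        for L in loss_terms:
            mag = torch.log10(L.detach().clamp_min(eps)).abs()
            p = torch.clamp(1.0 / mag.clamp_min(eps), max=1.0)
            total = total + L ** p
        return total

    # Usage: loss = balanced_loss([pde_residual_loss, bc_loss, ic_loss])

Because each exponent is computed on a detached value and stays positive, the
power operation equalizes the magnitudes of the terms without changing their
individual minimizers.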
Related papers
- A Primal-dual algorithm for image reconstruction with ICNNs [3.4797100095791706]
We address the optimization problem in a data-driven variational framework in which the regularizer is parameterized by an input-convex neural network (ICNN).
While gradient-based methods are commonly used to solve such problems, they struggle to handle nonsmoothness effectively.
We show that the proposed approach outperforms subgradient methods in both speed and stability.
arXiv Detail & Related papers (2024-10-16T10:36:29Z)
- Component-based Sketching for Deep ReLU Nets [55.404661149594375]
We develop a sketching scheme based on deep net components for various tasks.
We transform deep net training into a linear empirical risk minimization problem.
We show that the proposed component-based sketching provides almost optimal rates in approximating saturated functions.
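As a generic illustration of this linear-ERM reduction (not the paper's actual
component construction), one can build and freeze the hidden features once and
fit only the output layer, which then reduces to a least-squares problem; the
data, sizes, and names below are our own:

    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    X, y = torch.randn(512, 4), torch.randn(512, 1)   # synthetic data

    # Hidden "components" are constructed once and never trained.
    features = nn.Sequential(nn.Linear(4, 128), nn.ReLU(),
                             nn.Linear(128, 128), nn.ReLU())
    for p in features.parameters():
        p.requires_grad_(False)

    with torch.no_grad():
        Phi = features(X)                        # (512, 128) design matrix
        # Fitting the output layer is now a linear empirical risk
        # minimization problem with a closed-form least-squares solution.
        w = torch.linalg.lstsq(Phi, y).solution  # (128, 1)
        mse = ((Phi @ w - y) ** 2).mean()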
arXiv Detail & Related papers (2024-09-21T15:30:43Z)
- Physics-Informed Neural Networks with Trust-Region Sequential Quadratic Programming [4.557963624437784]
Recent research has noted that Physics-Informed Neural Networks (PINNs) may fail to learn relatively complex Partial Differential Equations (PDEs).
This paper addresses these failure modes by introducing a novel, hard-constrained deep learning method: trust-region Sequential Quadratic Programming (trSQP-PINN).
In contrast to directly training the penalized soft-constrained loss as in PINNs, the method performs a linear-quadratic approximation of the hard-constrained loss while leveraging the soft-constrained loss to adaptively adjust the trust-region radius.
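For context, the textbook trust-region radius update that such methods build on
looks as follows; this is a generic sketch in plain Python, not trSQP-PINN's
specific rule (which is driven by the soft-constrained loss), and every
threshold below is a conventional default:

    def update_trust_region(radius, actual_red, pred_red,
                            eta=0.1, shrink=0.5, grow=2.0, max_radius=10.0):
        # rho measures how well the local (linear-quadratic) model
        # predicted the true decrease of the objective.
        rho = actual_red / max(pred_red, 1e-12)
        if rho < 0.25:                 # poor model: shrink the region
            radius *= shrink
        elif rho > 0.75:               # good model: allow larger steps
            radius = min(grow * radius, max_radius)
        accept = rho > eta             # accept the step only on real progress
        return radius, accept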
arXiv Detail & Related papers (2024-09-16T23:22:12Z)
- ODE Discovery for Longitudinal Heterogeneous Treatment Effects Inference [69.24516189971929]
In this paper, we introduce a new type of solution in the longitudinal setting: a closed-form ordinary differential equation (ODE).
While we still rely on continuous optimization to learn the ODE, the resulting inference machine is no longer a neural network.
arXiv Detail & Related papers (2024-03-16T02:07:45Z)
- PINNsFormer: A Transformer-Based Framework For Physics-Informed Neural Networks [22.39904196850583]
Physics-Informed Neural Networks (PINNs) have emerged as a promising deep learning framework for approximating numerical solutions to partial differential equations (PDEs).
We introduce a novel Transformer-based framework, termed PINNsFormer, designed to address limitations of conventional PINNs.
PINNsFormer achieves superior generalization ability and accuracy across various scenarios, including PINN failure modes and high-dimensional PDEs.
arXiv Detail & Related papers (2023-07-21T18:06:27Z)
- Revisiting PINNs: Generative Adversarial Physics-informed Neural Networks and Point-weighting Method [70.19159220248805]
Physics-informed neural networks (PINNs) provide a deep learning framework for numerically solving partial differential equations (PDEs).
We propose the generative adversarial physics-informed neural network (GA-PINN), which integrates the generative adversarial (GA) mechanism with the structure of PINNs.
Inspired by the weighting strategy of the AdaBoost method, we then introduce a point-weighting (PW) method to improve the training efficiency of PINNs.
arXiv Detail & Related papers (2022-05-18T06:50:44Z)
- Characterizing possible failure modes in physics-informed neural networks [55.83255669840384]
Recent work in scientific machine learning has developed so-called physics-informed neural network (PINN) models.
We demonstrate that, while existing PINN methodologies can learn good models for relatively trivial problems, they can easily fail to learn relevant physical phenomena even for simple PDEs.
We show that these failure modes are due not to a lack of expressivity in the NN architecture, but to the PINN setup making the loss landscape very hard to optimize.
arXiv Detail & Related papers (2021-09-02T16:06:45Z)
- Finite Basis Physics-Informed Neural Networks (FBPINNs): a scalable domain decomposition approach for solving differential equations [20.277873724720987]
We propose a new, scalable approach, called Finite Basis PINNs (FBPINNs), for solving large problems related to differential equations.
FBPINNs are inspired by classical finite element methods, where the solution of the differential equation is expressed as the sum of a finite set of basis functions with compact support.
In FBPINNs, neural networks are used to learn these basis functions, which are defined over small, overlapping subdomains.
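A minimal 1-D sketch of this construction follows; the window shape, overlap,
and subnet sizes are our illustrative choices rather than the paper's
configuration:

    import torch
    import torch.nn as nn

    class FBPINN1D(nn.Module):
        """Sum of compactly supported windows times small subdomain nets:
        u(x) = sum_j w_j(x) * net_j(x - c_j)."""
        def __init__(self, centers, width=0.6):
            super().__init__()
            self.register_buffer("centers", torch.tensor(centers))
            self.width = width
            self.subnets = nn.ModuleList(
                nn.Sequential(nn.Linear(1, 16), nn.Tanh(), nn.Linear(16, 1))
                for _ in centers)

        def forward(self, x):
            u = torch.zeros_like(x)
            for c, net in zip(self.centers, self.subnets):
                # Smooth bump supported only on [c - width, c + width].
                d = (x - c) / self.width
                w = torch.clamp(1 - d ** 2, min=0.0) ** 2
                u = u + w * net(x - c)   # subnet sees locally centered input
            return u

    model = FBPINN1D(centers=[0.0, 0.5, 1.0])
    u = model(torch.linspace(0.0, 1.0, 101).unsqueeze(-1))   # (101, 1)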
arXiv Detail & Related papers (2021-07-16T13:03:47Z)
- Efficiently Solving High-Order and Nonlinear ODEs with Rational Fraction Polynomial: the Ratio Net [3.155317790896023]
This study takes a different approach, introducing a neural network architecture for constructing trial functions, known as the ratio net.
Empirical trials demonstrate that the proposed method is more efficient than existing approaches.
The ratio net holds promise for advancing the efficiency and effectiveness of solving differential equations.
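To convey only the rational-fraction idea (the paper's ratio net is its own
architecture), a trial function can be parameterized as a ratio of learnable
polynomials with a denominator bounded away from zero; the specific form below
is our assumption:

    import torch
    import torch.nn as nn

    class RatioTrial1D(nn.Module):
        """Trial function u(x) = P(x) / (1 + Q(x)^2) with learnable
        polynomial coefficients; squaring Q keeps the denominator
        strictly positive, so u is well defined everywhere."""
        def __init__(self, degree=4):
            super().__init__()
            self.a = nn.Parameter(0.1 * torch.randn(degree + 1))  # numerator
            self.b = nn.Parameter(0.1 * torch.randn(degree + 1))  # denominator

        def forward(self, x):
            powers = torch.stack([x ** k for k in range(self.a.numel())], dim=-1)
            num = (powers * self.a).sum(-1)
            den = 1.0 + (powers * self.b).sum(-1) ** 2
            return num / den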
arXiv Detail & Related papers (2021-05-18T16:59:52Z)
- Deep Multi-Task Learning for Cooperative NOMA: System Design and Principles [52.79089414630366]
We develop a novel deep cooperative NOMA scheme, drawing upon recent advances in deep learning (DL).
We design a hybrid-cascaded deep neural network (DNN) architecture such that the entire system can be optimized in a holistic manner.
arXiv Detail & Related papers (2020-07-27T12:38:37Z) - Iterative Algorithm Induced Deep-Unfolding Neural Networks: Precoding
Design for Multiuser MIMO Systems [59.804810122136345]
We propose a deep-unfolding framework in which a general form of iterative-algorithm-induced deep-unfolding neural network (IAIDNN) is developed.
An efficient IAIDNN is then built on the structure of the classic weighted minimum mean-square error (WMMSE) iterative algorithm.
We show that the proposed IAIDNN efficiently achieves the performance of the iterative WMMSE algorithm with reduced computational complexity.
arXiv Detail & Related papers (2020-06-15T02:57:57Z)
This list is automatically generated from the titles and abstracts of the papers on this site.