Initialization-enhanced Physics-Informed Neural Network with Domain Decomposition (IDPINN)
- URL: http://arxiv.org/abs/2406.03172v1
- Date: Wed, 5 Jun 2024 12:03:45 GMT
- Title: Initialization-enhanced Physics-Informed Neural Network with Domain Decomposition (IDPINN)
- Authors: Chenhao Si, Ming Yan
- Abstract summary: We propose a new physics-informed neural network framework, IDPINN, to improve prediction accuracy.
We numerically evaluate it on several forward problems and demonstrate the benefits of IDPINN in terms of accuracy.
- Score: 14.65008276932511
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose a new physics-informed neural network framework, IDPINN, based on enhanced initialization and domain decomposition to improve prediction accuracy. We first train a PINN on a small dataset to obtain an initial network structure, including the weight matrices and biases, which is then used to initialize the PINN for each subdomain. Moreover, we leverage a smoothness condition on the subdomain interfaces to enhance prediction performance. We numerically evaluate IDPINN on several forward problems and demonstrate its benefits in terms of accuracy.
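To make the workflow concrete, below is a minimal PyTorch sketch of the two-stage idea described in the abstract: pre-train a single small PINN, reuse its weights and biases to initialize one network per subdomain, and penalize value and first-derivative mismatch at the interface. The toy 1D Poisson problem, network sizes, sample counts, and unit loss weights are illustrative assumptions, not the configuration used in the paper.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)


class MLP(nn.Module):
    """Small fully-connected PINN backbone."""

    def __init__(self, width=32, depth=3):
        super().__init__()
        layers, d_in = [], 1
        for _ in range(depth):
            layers += [nn.Linear(d_in, width), nn.Tanh()]
            d_in = width
        layers.append(nn.Linear(d_in, 1))
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)


def residual(model, x):
    """PDE residual of the assumed toy problem u''(x) = -pi^2 sin(pi x)."""
    x = x.clone().requires_grad_(True)
    u = model(x)
    du = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    d2u = torch.autograd.grad(du, x, torch.ones_like(du), create_graph=True)[0]
    return d2u + torch.pi ** 2 * torch.sin(torch.pi * x)


# Stage 1: train one PINN on a small dataset to obtain an initial network
# structure (weight matrices and biases).
pre = MLP()
x_small = torch.rand(64, 1)                  # small collocation set on [0, 1]
u_small = torch.sin(torch.pi * x_small)      # assumed few labeled samples
opt = torch.optim.Adam(pre.parameters(), lr=1e-3)
for _ in range(500):
    opt.zero_grad()
    loss = ((pre(x_small) - u_small) ** 2).mean() + (residual(pre, x_small) ** 2).mean()
    loss.backward()
    opt.step()

# Stage 2: initialize one PINN per subdomain from the pre-trained parameters.
sub1, sub2 = MLP(), MLP()
sub1.load_state_dict(pre.state_dict())
sub2.load_state_dict(pre.state_dict())

# Stage 3: joint training with PDE, boundary, and interface-smoothness losses.
opt = torch.optim.Adam(list(sub1.parameters()) + list(sub2.parameters()), lr=1e-3)
x1 = 0.5 * torch.rand(128, 1)                # subdomain [0, 0.5]
x2 = 0.5 + 0.5 * torch.rand(128, 1)          # subdomain [0.5, 1]
x_if = torch.full((1, 1), 0.5, requires_grad=True)  # interface point
for _ in range(1000):
    opt.zero_grad()
    pde = (residual(sub1, x1) ** 2).mean() + (residual(sub2, x2) ** 2).mean()
    bc = (sub1(torch.zeros(1, 1)) ** 2 + sub2(torch.ones(1, 1)) ** 2).mean()
    # Smoothness condition: match values and first derivatives at the interface.
    u1, u2 = sub1(x_if), sub2(x_if)
    du1 = torch.autograd.grad(u1, x_if, torch.ones_like(u1), create_graph=True)[0]
    du2 = torch.autograd.grad(u2, x_if, torch.ones_like(u2), create_graph=True)[0]
    interface = ((u1 - u2) ** 2 + (du1 - du2) ** 2).mean()
    (pde + bc + interface).backward()
    opt.step()
```

How many derivative orders to match at the interface and how the loss terms are weighted are design choices; the abstract only states that a smoothness condition on the interface is used, so the first-derivative matching above is one plausible instantiation.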
Related papers
- Parallel-in-Time Solutions with Random Projection Neural Networks [0.07282584715927627]
This paper considers one of the fundamental parallel-in-time methods for the solution of ordinary differential equations, Parareal, and extends it by adopting a neural network as a coarse propagator.
We provide a theoretical analysis of the convergence properties of the proposed algorithm and show its effectiveness for several examples, including Lorenz and Burgers' equations.
arXiv Detail & Related papers (2024-08-19T07:32:41Z)
- Accelerating Full Waveform Inversion By Transfer Learning [1.0881446298284452]
Full waveform inversion (FWI) is a powerful tool for reconstructing material fields based on sparsely measured data obtained by wave propagation.
For specific problems, discretizing the material field with a neural network (NN) improves the robustness and reconstruction quality of the corresponding optimization problem.
In this paper, we introduce a novel transfer learning approach to further improve NN-based FWI.
arXiv Detail & Related papers (2024-08-01T16:39:06Z)
- Improved physics-informed neural network in mitigating gradient related failures [11.356695216531328]
Physics-informed neural networks (PINNs) integrate fundamental physical principles with advanced data-driven techniques.
PINNs face persistent challenges with stiffness in gradient flow, which limits their predictive capabilities.
This paper presents an improved PINN to mitigate gradient-related failures.
arXiv Detail & Related papers (2024-07-28T07:58:10Z) - Deep Neural Networks Tend To Extrapolate Predictably [51.303814412294514]
Neural network predictions tend to be unpredictable and overconfident when faced with out-of-distribution (OOD) inputs.
We observe that neural network predictions often tend towards a constant value as input data becomes increasingly OOD.
We show how one can leverage our insights in practice to enable risk-sensitive decision-making in the presence of OOD inputs.
arXiv Detail & Related papers (2023-10-02T03:25:32Z) - Towards Theoretically Inspired Neural Initialization Optimization [66.04735385415427]
We propose a differentiable quantity, named GradCosine, with theoretical insights to evaluate the initial state of a neural network.
We show that both the training and test performance of a network can be improved by maximizing GradCosine under norm constraint.
Generalized from the sample-wise analysis into the real batch setting, the resulting Neural Initialization Optimization (NIO) algorithm is able to automatically look for a better initialization with negligible cost.
arXiv Detail & Related papers (2022-10-12T06:49:16Z) - Improving Parametric Neural Networks for High-Energy Physics (and
Beyond) [0.0]
We aim at deepening the understanding of Parametric Neural Networks (pNNs) in light of real-world usage.
We propose an alternative parametrization scheme, resulting in a new parametrized neural network architecture: the AffinePNN.
We extensively evaluate our models on the HEPMASS dataset, along with its imbalanced version (called HEPMASS-IMB).
arXiv Detail & Related papers (2022-02-01T14:18:43Z)
- Local Repair of Neural Networks Using Optimization [13.337627875398393]
We propose a framework to repair a pre-trained feed-forward neural network (NN) so that it satisfies a given set of properties.
We formulate the properties as a set of predicates that impose constraints on the output of NN over the target input domain.
We demonstrate the application of our framework in bounding an affine transformation, correcting an erroneous NN in classification, and bounding the inputs of a NN controller.
arXiv Detail & Related papers (2021-09-28T20:52:26Z)
- LocalDrop: A Hybrid Regularization for Deep Neural Networks [98.30782118441158]
We propose LocalDrop, a new approach to regularizing neural networks via the local Rademacher complexity.
A new regularization function for both fully-connected networks (FCNs) and convolutional neural networks (CNNs) has been developed based on the proposed upper bound of the local Rademacher complexity.
arXiv Detail & Related papers (2021-03-01T03:10:11Z)
- Modeling from Features: a Mean-field Framework for Over-parameterized Deep Neural Networks [54.27962244835622]
This paper proposes a new mean-field framework for over-parameterized deep neural networks (DNNs).
In this framework, a DNN is represented by probability measures and functions over its features in the continuous limit.
We illustrate the framework via the standard DNN and the Residual Network (Res-Net) architectures.
arXiv Detail & Related papers (2020-07-03T01:37:16Z)
- Revisiting Initialization of Neural Networks [72.24615341588846]
We propose a rigorous estimation of the global curvature of weights across layers by approximating and controlling the norm of their Hessian matrix.
Our experiments on Word2Vec and the MNIST/CIFAR image classification tasks confirm that tracking the Hessian norm is a useful diagnostic tool.
arXiv Detail & Related papers (2020-04-20T18:12:56Z)
- MSE-Optimal Neural Network Initialization via Layer Fusion [68.72356718879428]
Deep neural networks achieve state-of-the-art performance for a range of classification and inference tasks.
The use of gradient-based training combined with nonconvexity renders learning susceptible to initialization problems.
We propose fusing neighboring layers of deeper networks that are trained with random variables.
arXiv Detail & Related papers (2020-01-28T18:25:15Z)