Adaptive Interface-PINNs (AdaI-PINNs): An Efficient Physics-informed Neural Networks Framework for Interface Problems
- URL: http://arxiv.org/abs/2406.04626v2
- Date: Mon, 10 Jun 2024 16:28:15 GMT
- Title: Adaptive Interface-PINNs (AdaI-PINNs): An Efficient Physics-informed Neural Networks Framework for Interface Problems
- Authors: Sumanta Roy, Chandrasekhar Annavarapu, Pratanu Roy, Antareep Kumar Sarma
- Abstract summary: We present an efficient physics-informed neural networks (PINNs) framework, termed Adaptive Interface-PINNs (AdaI-PINNs).
This framework is an enhanced version of its predecessor, Interface PINNs or I-PINNs.
In AdaI-PINNs, the activation functions vary solely in their slopes, which are trained along with the other parameters of the neural networks.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present an efficient physics-informed neural networks (PINNs) framework, termed Adaptive Interface-PINNs (AdaI-PINNs), to improve the modeling of interface problems with discontinuous coefficients and/or interfacial jumps. This framework is an enhanced version of its predecessor, Interface PINNs or I-PINNs (Sarma et al.; https://dx.doi.org/10.2139/ssrn.4766623), which involves domain decomposition and assignment of different predefined activation functions to the neural networks in each subdomain across a sharp interface, while keeping all other parameters of the neural networks identical. In AdaI-PINNs, the activation functions vary solely in their slopes, which are trained along with the other parameters of the neural networks. This makes the AdaI-PINNs framework fully automated without requiring preset activation functions. Comparative studies on one-dimensional, two-dimensional, and three-dimensional benchmark elliptic interface problems reveal that AdaI-PINNs outperform I-PINNs, reducing computational costs by 2-6 times while producing similar or better accuracy.
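To make the central idea concrete, the sketch below shows one possible way to realize subdomain networks whose activation functions differ only in a trainable slope, as described in the abstract. This is a minimal, hypothetical sketch in PyTorch, not the authors' implementation: the names SlopeAdaptiveTanh and SubdomainNet, the choice of tanh as the base activation, and the layer sizes are assumptions made for illustration, and the PDE residual, boundary, and interface-jump loss terms that drive training are omitted.

```python
# Minimal sketch (not the authors' code) of the AdaI-PINNs idea:
# one network per subdomain, activations share a form but carry a
# trainable slope parameter learned jointly with the weights.
import torch
import torch.nn as nn

class SlopeAdaptiveTanh(nn.Module):
    """tanh activation with a trainable slope a: sigma(x) = tanh(a * x)."""
    def __init__(self, init_slope: float = 1.0):
        super().__init__()
        # The slope is a learnable parameter, trained with the other weights.
        self.slope = nn.Parameter(torch.tensor(init_slope))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.tanh(self.slope * x)

class SubdomainNet(nn.Module):
    """Small fully connected network for one subdomain of the interface problem."""
    def __init__(self, in_dim: int = 1, width: int = 20, depth: int = 3, out_dim: int = 1):
        super().__init__()
        layers, dim = [], in_dim
        for _ in range(depth):
            layers += [nn.Linear(dim, width), SlopeAdaptiveTanh()]
            dim = width
        layers.append(nn.Linear(dim, out_dim))
        self.net = nn.Sequential(*layers)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

# One network on each side of the sharp interface; both are optimized together
# on PDE residual, boundary, and interface-jump losses (not shown here).
net_left, net_right = SubdomainNet(), SubdomainNet()
params = list(net_left.parameters()) + list(net_right.parameters())
optimizer = torch.optim.Adam(params, lr=1e-3)
```

In this reading of the abstract, the only architectural change relative to I-PINNs is that the per-subdomain activation is no longer a preset function but carries a slope trained jointly with the network weights, which is what makes the framework fully automated.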
Related papers
- Neuromorphic Wireless Split Computing with Multi-Level Spikes [69.73249913506042]
Neuromorphic computing uses spiking neural networks (SNNs) to perform inference tasks.
Embedding a small payload within each spike exchanged between spiking neurons can enhance inference accuracy without increasing energy consumption.
Split computing, where an SNN is partitioned across two devices, is a promising solution.
This paper presents the first comprehensive study of a neuromorphic wireless split computing architecture that employs multi-level SNNs.
arXiv Detail & Related papers (2024-11-07T14:08:35Z) - Improved physics-informed neural network in mitigating gradient related failures [11.356695216531328]
Physics-informed neural networks (PINNs) integrate fundamental physical principles with advanced data-driven techniques.
PINNs face persistent challenges with stiffness in gradient flow, which limits their predictive capabilities.
This paper presents an improved PINN to mitigate gradient-related failures.
arXiv Detail & Related papers (2024-07-28T07:58:10Z) - Binary structured physics-informed neural networks for solving equations with rapidly changing solutions [3.6415476576196055]
Physics-informed neural networks (PINNs) have emerged as a promising approach for solving partial differential equations (PDEs)
We propose a binary structured physics-informed neural network (BsPINN) framework, which employs a binary structured neural network (BsNN) as its neural network component.
BsPINNs exhibit superior convergence speed and heightened accuracy compared to PINNs.
arXiv Detail & Related papers (2024-01-23T14:37:51Z) - Permutation Equivariant Neural Functionals [92.0667671999604]
This work studies the design of neural networks that can process the weights or gradients of other neural networks.
We focus on the permutation symmetries that arise in the weights of deep feedforward networks because hidden layer neurons have no inherent order.
In our experiments, we find that permutation equivariant neural functionals are effective on a diverse set of tasks.
arXiv Detail & Related papers (2023-02-27T18:52:38Z) - A Dimension-Augmented Physics-Informed Neural Network (DaPINN) with High Level Accuracy and Efficiency [0.20391237204597357]
Physics-informed neural networks (PINNs) have been widely applied in different fields.
We propose a novel dimension-augmented physics-informed neural network (DaPINN).
DaPINN simultaneously and significantly improves the accuracy and efficiency of PINNs.
arXiv Detail & Related papers (2022-10-19T15:54:37Z) - Neural Attentive Circuits [93.95502541529115]
We introduce a general-purpose yet modular neural architecture called Neural Attentive Circuits (NACs).
NACs learn the parameterization and a sparse connectivity of neural modules without using domain knowledge.
NACs achieve an 8x speedup at inference time while losing less than 3% performance.
arXiv Detail & Related papers (2022-10-14T18:00:07Z) - MRF-PINN: A Multi-Receptive-Field convolutional physics-informed neural network for solving partial differential equations [6.285167805465505]
Physics-informed neural networks (PINN) can achieve lower development and solving cost than traditional partial differential equation (PDE) solvers.
Owing to their parameter sharing, spatial feature extraction, and low inference cost, convolutional neural networks (CNNs) are increasingly used in PINNs.
arXiv Detail & Related papers (2022-09-06T12:26:22Z) - Auto-PINN: Understanding and Optimizing Physics-Informed Neural Architecture [77.59766598165551]
Physics-informed neural networks (PINNs) are revolutionizing science and engineering practice by bringing the power of deep learning to bear on scientific computation.
Here, we propose Auto-PINN, which applies Neural Architecture Search (NAS) techniques to PINN design.
A comprehensive set of pre-experiments using standard PDE benchmarks allows us to probe the structure-performance relationship in PINNs.
arXiv Detail & Related papers (2022-05-27T03:24:31Z) - Modeling from Features: a Mean-field Framework for Over-parameterized Deep Neural Networks [54.27962244835622]
This paper proposes a new mean-field framework for over-parameterized deep neural networks (DNNs).
In this framework, a DNN is represented by probability measures and functions over its features in the continuous limit.
We illustrate the framework via the standard DNN and the Residual Network (Res-Net) architectures.
arXiv Detail & Related papers (2020-07-03T01:37:16Z) - Neural Parameter Allocation Search [57.190693718951316]
Training neural networks requires increasing amounts of memory.
Existing methods assume networks have many identical layers and utilize hand-crafted sharing strategies that fail to generalize.
We introduce Neural Parameter Allocation Search (NPAS), a novel task where the goal is to train a neural network given an arbitrary, fixed parameter budget.
NPAS covers both low-budget regimes, which produce compact networks, as well as a novel high-budget regime, where additional capacity can be added to boost performance without increasing inference FLOPs.
arXiv Detail & Related papers (2020-06-18T15:01:00Z)