Anisotropic, Sparse and Interpretable Physics-Informed Neural Networks for PDEs
- URL: http://arxiv.org/abs/2207.00377v1
- Date: Fri, 1 Jul 2022 12:24:43 GMT
- Title: Anisotropic, Sparse and Interpretable Physics-Informed Neural Networks for PDEs
- Authors: Amuthan A. Ramabathiran and Prabhu Ramachandran
- Abstract summary: We present ASPINN, an anisotropic extension of our earlier work called SPINN--Sparse, Physics-informed, and Interpretable Neural Networks--to solve PDEs.
ASPINNs generalize radial basis function networks.
We also streamline the training of ASPINNs into a form that is closer to that of supervised learning algorithms.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: There has been a growing interest in the use of Deep Neural Networks (DNNs)
to solve Partial Differential Equations (PDEs). Despite the promise that such
approaches hold, there are various aspects where they could be improved. Two
such shortcomings are (i) their computational inefficiency relative to
classical numerical methods, and (ii) the non-interpretability of a trained DNN
model. In this work we present ASPINN, an anisotropic extension of our earlier
work called SPINN--Sparse, Physics-informed, and Interpretable Neural
Networks--to solve PDEs that addresses both these issues. ASPINNs generalize
radial basis function networks. We demonstrate using a variety of examples
involving elliptic and hyperbolic PDEs that the special architecture we propose
is more efficient than generic DNNs, while at the same time being directly
interpretable. Further, they improve upon the SPINN models we proposed earlier
in that fewer nodes are required to capture the solution using ASPINN than using
SPINN, thanks to the anisotropy of the local zones of influence of each node.
The interpretability of ASPINN translates to a ready visualization of their
weights and biases, thereby yielding more insight into the nature of the
trained model. This in turn provides a systematic procedure to improve the
architecture based on the quality of the computed solution. ASPINNs thus serve
as an effective bridge between classical numerical algorithms and modern DNN
based methods to solve PDEs. In the process, we also streamline the training of
ASPINNs into a form that is closer to that of supervised learning algorithms.
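As a rough, illustrative sketch (not the authors' implementation), an ASPINN node can be pictured as a radial basis function unit whose Gaussian kernel carries a per-node matrix metric rather than a single scalar width, so that each node's zone of influence becomes an ellipsoid. All names here, and the choice of a Gaussian kernel, are assumptions made for illustration:

```python
import numpy as np

def anisotropic_rbf(x, center, L):
    """Gaussian RBF with an anisotropic metric.

    x:      (d,) evaluation point
    center: (d,) node center
    L:      (d, d) per-node transform; M = L @ L.T defines the
            ellipsoidal zone of influence of the node.
    """
    r = x - center
    return np.exp(-r @ (L @ L.T) @ r)

def aspinn_like_solution(x, centers, Ls, weights):
    """Solution ansatz: a weighted sum of anisotropic RBF nodes."""
    return sum(w * anisotropic_rbf(x, c, L)
               for w, c, L in zip(weights, centers, Ls))

# Two-node example in 2D; the second node has an elongated zone.
x = np.array([0.5, 0.5])
centers = [np.zeros(2), np.ones(2)]
Ls = [np.eye(2), np.diag([2.0, 0.5])]
weights = [1.0, -0.5]
u = aspinn_like_solution(x, centers, Ls, weights)
```

Setting `L` to a scaled identity recovers an isotropic RBF network; an unevenly scaled or rotated `L` lets a single node cover an elongated region, which is the sense in which anisotropy can reduce the node count.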
Related papers
- Ensemble learning for Physics Informed Neural Networks: a Gradient Boosting approach [10.250994619846416]
We present a new training paradigm referred to as "gradient boosting" (GB).
Instead of learning the solution of a given PDE using a single neural network directly, our algorithm employs a sequence of neural networks to achieve a superior outcome.
This work also unlocks the door to employing ensemble learning techniques in PINNs.
arXiv Detail & Related papers (2023-02-25T19:11:44Z) - RBF-MGN: Solving spatiotemporal PDEs with Physics-informed Graph Neural Network [4.425915683879297]
We propose a novel framework based on graph neural networks (GNNs) and radial basis function finite differences (RBF-FD).
RBF-FD is used to construct a high-precision difference format of the differential equations to guide model training.
We illustrate the generalizability, accuracy, and efficiency of the proposed algorithms on different PDE parameters.
arXiv Detail & Related papers (2022-12-06T10:08:02Z) - Quantum-Inspired Tensor Neural Networks for Partial Differential Equations [5.963563752404561]
Deep learning methods are constrained by training time and memory. To tackle these shortcomings, we implement Tensor Neural Networks (TNNs).
We demonstrate that TNNs provide significant parameter savings while attaining the same accuracy as classical Dense Neural Networks (DNNs).
arXiv Detail & Related papers (2022-08-03T17:41:11Z) - Enforcing Continuous Physical Symmetries in Deep Learning Network for Solving Partial Differential Equations [3.6317085868198467]
We introduce a new method, the symmetry-enhanced physics-informed neural network (SPINN), where the invariant surface conditions induced by the Lie symmetries of PDEs are embedded into the loss function of a PINN.
We show that SPINN performs better than PINN with fewer training points and simpler architecture of neural network.
arXiv Detail & Related papers (2022-06-19T00:44:22Z) - Auto-PINN: Understanding and Optimizing Physics-Informed Neural Architecture [77.59766598165551]
Physics-informed neural networks (PINNs) are revolutionizing science and engineering practice by bringing the power of deep learning to bear on scientific computation.
Here, we propose Auto-PINN, which employs Neural Architecture Search (NAS) techniques to PINN design.
A comprehensive set of pre-experiments using standard PDE benchmarks allows us to probe the structure-performance relationship in PINNs.
arXiv Detail & Related papers (2022-05-27T03:24:31Z) - Revisiting PINNs: Generative Adversarial Physics-informed Neural Networks and Point-weighting Method [70.19159220248805]
Physics-informed neural networks (PINNs) provide a deep learning framework for numerically solving partial differential equations (PDEs).
We propose the generative adversarial neural network (GA-PINN), which integrates the generative adversarial (GA) mechanism with the structure of PINNs.
Inspired by the weighting strategy of the AdaBoost method, we then introduce a point-weighting (PW) method to improve the training efficiency of PINNs.
arXiv Detail & Related papers (2022-05-18T06:50:44Z) - Training High-Performance Low-Latency Spiking Neural Networks by Differentiation on Spike Representation [70.75043144299168]
Spiking Neural Network (SNN) is a promising energy-efficient AI model when implemented on neuromorphic hardware.
It is a challenge to efficiently train SNNs due to their non-differentiability.
We propose the Differentiation on Spike Representation (DSR) method, which achieves high performance at low latency.
arXiv Detail & Related papers (2022-05-01T12:44:49Z) - Improved Training of Physics-Informed Neural Networks with Model Ensembles [81.38804205212425]
We propose to expand the solution interval gradually to make the PINN converge to the correct solution.
All ensemble members converge to the same solution in the vicinity of observed data.
We show experimentally that the proposed method can improve the accuracy of the found solution.
arXiv Detail & Related papers (2022-04-11T14:05:34Z) - dNNsolve: an efficient NN-based PDE solver [62.997667081978825]
We introduce dNNsolve, which makes use of dual neural networks to solve ODEs/PDEs.
We show that dNNsolve is capable of solving a broad range of ODEs/PDEs in 1, 2 and 3 spacetime dimensions.
arXiv Detail & Related papers (2021-03-15T19:14:41Z) - SPINN: Sparse, Physics-based, and Interpretable Neural Networks for PDEs [0.0]
We introduce a class of Sparse, Physics-based, and Interpretable Neural Networks (SPINN) for solving ordinary and partial differential equations.
By reinterpreting a traditional meshless representation of solutions of PDEs as a special sparse deep neural network, we develop a class of sparse neural network architectures that are interpretable.
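The meshless-representation-as-sparse-network viewpoint can be sketched as follows. This is a hypothetical 1D illustration with a compactly supported hat kernel, not the paper's code; all names are chosen here for exposition:

```python
import numpy as np

def hat_kernel(r, h):
    """Compactly supported 'hat' kernel: a node only influences |r| < h."""
    return np.maximum(0.0, 1.0 - np.abs(r) / h)

def spinn_like_solution(x, centers, widths, nodal_values):
    """Meshless ansatz u(x) = sum_i u_i * phi((x - x_i) / h_i).

    Interpretable by construction: `centers` are node locations,
    `nodal_values` the coefficients, `widths` the zones of influence.
    Viewed as a network, this is a single sparse hidden layer whose
    weights and biases are directly readable quantities.
    """
    return sum(u_i * hat_kernel(x - c, h)
               for u_i, c, h in zip(nodal_values, centers, widths))

# 1D example: three nodes at 0, 0.5, 1 with unit hat functions.
centers = np.array([0.0, 0.5, 1.0])
widths = np.array([0.5, 0.5, 0.5])
vals = np.array([0.0, 1.0, 0.0])
u_mid = spinn_like_solution(0.5, centers, widths, vals)   # -> 1.0
```

Because the kernel interpolates nodal values, the trained parameters can be read off directly as node positions and solution values, which is the sense in which such networks are interpretable.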
arXiv Detail & Related papers (2021-02-25T17:45:50Z) - Modeling from Features: a Mean-field Framework for Over-parameterized Deep Neural Networks [54.27962244835622]
This paper proposes a new mean-field framework for over-parameterized deep neural networks (DNNs).
In this framework, a DNN is represented by probability measures and functions over its features in the continuous limit.
We illustrate the framework via the standard DNN and the Residual Network (Res-Net) architectures.
arXiv Detail & Related papers (2020-07-03T01:37:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.