PINNfluence: Influence Functions for Physics-Informed Neural Networks
- URL: http://arxiv.org/abs/2409.08958v2
- Date: Sun, 01 Dec 2024 06:47:45 GMT
- Title: PINNfluence: Influence Functions for Physics-Informed Neural Networks
- Authors: Jonas R. Naujoks, Aleksander Krasowski, Moritz Weckbecker, Thomas Wiegand, Sebastian Lapuschkin, Wojciech Samek, René P. Klausen
- Abstract summary: Physics-informed neural networks (PINNs) have emerged as a flexible and promising application of deep learning to partial differential equations in the physical sciences.
We explore the application of influence functions (IFs) to validate and debug PINNs post-hoc.
- Abstract: Recently, physics-informed neural networks (PINNs) have emerged as a flexible and promising application of deep learning to partial differential equations in the physical sciences. While offering strong performance and competitive inference speeds on forward and inverse problems, their black-box nature limits interpretability, particularly regarding alignment with expected physical behavior. In the present work, we explore the application of influence functions (IFs) to validate and debug PINNs post-hoc. Specifically, we apply variations of IF-based indicators to gauge the influence of different types of collocation points on the prediction of PINNs applied to a 2D Navier-Stokes fluid flow problem. Our results demonstrate how IFs can be adapted to PINNs to reveal the potential for further studies. The code is publicly available at https://github.com/aleks-krasowski/PINNfluence.
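The abstract describes scoring how individual collocation points influence a PINN's predictions. As a hedged, minimal sketch (not the authors' implementation, which lives in the linked repository), the classic influence-function score of Koh & Liang, I(z, z_test) = -∇L(z_test)ᵀ H⁻¹ ∇L(z), can be computed as follows; the model, gradients, and Hessian here are toy stand-ins:

```python
# Hedged sketch (not the authors' code): the influence-function score
# I(z, z_test) = -grad L(z_test)^T H^{-1} grad L(z), adapted to the PINN
# setting where each "training point" is a collocation point whose loss
# is a PDE residual. All values below are toy stand-ins.
import numpy as np

def influence(grad_test, grad_train, hessian, damping=1e-3):
    """Approximate change in test loss from upweighting one training point."""
    h = hessian + damping * np.eye(hessian.shape[0])  # damped Hessian
    return -grad_test @ np.linalg.solve(h, grad_train)

# Toy quadratic total loss L(theta) = 0.5 * theta^T A theta, so H = A.
A = np.array([[2.0, 0.0],
              [0.0, 1.0]])
theta = np.array([1.0, -1.0])

g_colloc = 0.5 * (A @ theta)   # hypothetical collocation-point loss gradient
g_test = np.array([1.0, 2.0])  # hypothetical test-prediction loss gradient

score = influence(g_test, g_colloc, A)
print(round(score, 4))  # positive: upweighting this point would raise the test loss
```

In an actual PINN, ∇L(z) would be the gradient of one collocation point's PDE-residual loss with respect to the network parameters, and H⁻¹ would typically be applied implicitly (e.g. via conjugate gradient or LiSSA) rather than formed explicitly.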
Related papers
- GINN-KAN: Interpretability pipelining with applications in Physics Informed Neural Networks [5.2969467015867915]
We introduce the concept of interpretability pipelining, which combines multiple interpretability techniques to outperform each technique individually.
We evaluate two recent models selected for their potential to incorporate interpretability into standard neural network architectures.
We introduce a novel interpretable neural network GINN-KAN that synthesizes the advantages of both models.
arXiv Detail & Related papers (2024-08-27T04:57:53Z) - Element-wise Multiplication Based Deeper Physics-Informed Neural Networks [1.8554335256160261]
PINNs are a promising framework for solving partial differential equations (PDEs).
Lack of expressive ability and pathology issues are found to prevent the application of PINNs in complex PDEs.
We propose Deeper Physics-Informed Neural Network (Deeper-PINN) to resolve these issues.
arXiv Detail & Related papers (2024-06-06T15:27:52Z) - Predictive Limitations of Physics-Informed Neural Networks in Vortex Shedding [0.0]
We look at the flow around a 2D cylinder and find that data-free PINNs are unable to predict vortex shedding.
The data-driven PINN exhibits vortex shedding only while training data is available, but reverts to the steady-state solution when the data flow stops.
The distribution of the Koopman eigenvalues on the complex plane suggests that PINN is numerically dispersive and diffusive.
arXiv Detail & Related papers (2023-05-31T22:59:52Z) - On the Generalization of PINNs outside the training domain and the Hyperparameters influencing it [1.3927943269211593]
PINNs are Neural Network architectures trained to emulate solutions of differential equations without the necessity of solution data.
We perform an empirical analysis of the behavior of PINN predictions outside their training domain.
We assess whether the algorithmic setup of PINNs can influence their potential for generalization and showcase the respective effect on the prediction.
arXiv Detail & Related papers (2023-02-15T09:51:56Z) - Auto-PINN: Understanding and Optimizing Physics-Informed Neural Architecture [77.59766598165551]
Physics-informed neural networks (PINNs) are revolutionizing science and engineering practice by bringing together the power of deep learning to bear on scientific computation.
Here, we propose Auto-PINN, which applies Neural Architecture Search (NAS) techniques to PINN design.
A comprehensive set of pre-experiments using standard PDE benchmarks allows us to probe the structure-performance relationship in PINNs.
arXiv Detail & Related papers (2022-05-27T03:24:31Z) - Revisiting PINNs: Generative Adversarial Physics-informed Neural Networks and Point-weighting Method [70.19159220248805]
Physics-informed neural networks (PINNs) provide a deep learning framework for numerically solving partial differential equations (PDEs).
We propose the generative adversarial neural network (GA-PINN), which integrates the generative adversarial (GA) mechanism with the structure of PINNs.
Inspired by the weighting strategy of the AdaBoost method, we then introduce a point-weighting (PW) method to improve the training efficiency of PINNs.
arXiv Detail & Related papers (2022-05-18T06:50:44Z) - Scientific Machine Learning through Physics-Informed Neural Networks: Where we are and What's next [5.956366179544257]
Physics-Informed Neural Networks (PINNs) are neural networks (NNs) that encode model equations.
PINNs are nowadays used to solve PDEs, fractional equations, and integro-differential equations.
arXiv Detail & Related papers (2022-01-14T19:05:44Z) - Characterizing possible failure modes in physics-informed neural networks [55.83255669840384]
Recent work in scientific machine learning has developed so-called physics-informed neural network (PINN) models.
We demonstrate that, while existing PINN methodologies can learn good models for relatively trivial problems, they can easily fail to learn relevant physical phenomena even for simple PDEs.
We show that these possible failure modes are not due to the lack of expressivity in the NN architecture, but that the PINN's setup makes the loss landscape very hard to optimize.
arXiv Detail & Related papers (2021-09-02T16:06:45Z) - Influence Functions in Deep Learning Are Fragile [52.31375893260445]
Influence functions approximate the effect of training samples on test-time predictions.
Influence estimates are fairly accurate for shallow networks.
Hessian regularization is important for obtaining high-quality influence estimates.
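The fragility result above can be made concrete: when the loss Hessian is nearly singular, the undamped influence estimate is dominated by the smallest eigenvalue, while a small damping term λI keeps it bounded. A hedged toy illustration (synthetic values, not taken from the paper):

```python
# Hedged toy illustration (synthetic values): why Hessian damping matters
# for influence estimates. With a nearly singular Hessian, the undamped
# estimate is dominated by the smallest eigenvalue; damping lambda*I
# keeps it bounded.
import numpy as np

def damped_influence(g_test, g_train, hessian, damping=0.0):
    h = hessian + damping * np.eye(hessian.shape[0])
    return -g_test @ np.linalg.solve(h, g_train)

H = np.diag([1.0, 1e-10])  # near-singular, as is typical for deep nets
g = np.array([1.0, 1.0])

raw = damped_influence(g, g, H)                # ~ -1e10: numerically meaningless
reg = damped_influence(g, g, H, damping=1e-2)  # ~ -101: stable estimate
print(raw, reg)
```

The damping value plays the same role as the Hessian regularization the paper finds important: it trades a small bias for a dramatic reduction in variance of the estimate.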
arXiv Detail & Related papers (2020-06-25T18:25:59Z) - Phase Detection with Neural Networks: Interpreting the Black Box [58.720142291102135]
Neural networks (NNs) usually offer little insight into the reasoning behind their predictions.
We demonstrate how influence functions can unravel the black box of NNs trained to predict the phases of the one-dimensional extended spinless Fermi-Hubbard model at half-filling.
arXiv Detail & Related papers (2020-04-09T17:45:45Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.