Physics-Informed Neural Networks for High-Frequency and Multi-Scale Problems using Transfer Learning
- URL: http://arxiv.org/abs/2401.02810v2
- Date: Mon, 15 Jan 2024 13:10:12 GMT
- Title: Physics-Informed Neural Networks for High-Frequency and Multi-Scale Problems using Transfer Learning
- Authors: Abdul Hannan Mustajab, Hao Lyu, Zarghaam Rizvi, Frank Wuttke
- Abstract summary: A physics-informed neural network (PINN) is a data-driven solver for ordinary and partial differential equations (ODEs/PDEs).
We propose using transfer learning to boost the robustness and convergence of PINN training.
We describe our training strategy in detail, including optimizer selection, and suggest guidelines for using transfer learning to train neural networks to solve more complex problems.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: A physics-informed neural network (PINN) is a data-driven solver for ordinary and partial differential equations (ODEs/PDEs). It provides a unified framework to address both forward and inverse problems. However, the complexity of the objective function often leads to training failures. This issue is particularly prominent when solving high-frequency and multi-scale problems. We propose using transfer learning to boost the robustness and convergence of PINN training: training starts from low-frequency problems and gradually approaches high-frequency problems. Through two case studies, we discovered that transfer learning can effectively train a PINN to approximate solutions from low-frequency to high-frequency problems without increasing network parameters. Furthermore, it requires fewer data points and less training time. We describe our training strategy in detail, including optimizer selection, and suggest guidelines for using transfer learning to train neural networks to solve more complex problems.
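The training loop the abstract describes — warm-starting each higher-frequency stage from the previous stage's weights instead of re-initializing — can be illustrated with a minimal sketch. The 1D Poisson problem, network width, frequency schedule, and Adam settings below are assumptions for illustration, not the authors' configuration; the paper also discusses optimizer selection, which this sketch leaves fixed.

```python
# Hedged sketch (not the paper's code): transfer learning for a PINN on
# u''(x) = -(k*pi)^2 sin(k*pi*x), u(0)=u(1)=0, whose exact solution
# sin(k*pi*x) grows in frequency with k.
import torch

torch.manual_seed(0)

# Fixed-capacity network reused across every frequency stage.
model = torch.nn.Sequential(
    torch.nn.Linear(1, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 1),
)

def pinn_loss(k: float, n_pts: int = 256) -> torch.Tensor:
    """PDE residual + boundary loss for frequency parameter k."""
    x = torch.rand(n_pts, 1, requires_grad=True)
    u = model(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    kpi = k * torch.pi
    residual = d2u + kpi**2 * torch.sin(kpi * x)   # enforce the PDE
    xb = torch.tensor([[0.0], [1.0]])              # enforce the boundary
    return (residual**2).mean() + (model(xb)**2).mean()

# Curriculum: each stage warm-starts from the previous, lower-frequency one,
# so coarse structure learned early is transferred to harder stages.
for k in [1.0, 2.0, 4.0, 8.0]:
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)  # settings assumed
    for _ in range(2000):
        opt.zero_grad()
        loss = pinn_loss(k)
        loss.backward()
        opt.step()
    print(f"frequency k={k}: final loss {loss.item():.3e}")
```

The paper's actual case studies use different equations; only the warm-start mechanism, which adds no parameters between stages, is the point of this sketch.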
Related papers
- Multi-level datasets training method in Physics-Informed Neural Networks [0.0]
PINNs struggle with challenging problems that are stiff to solve and/or have high-frequency components in their solutions.
In this study, an alternative approach is proposed to mitigate the above-mentioned problems.
Inspired by the multi-grid method in the CFD community, the underlying idea of the current approach is to efficiently remove errors at different frequencies during training.
arXiv Detail & Related papers (2025-04-30T05:30:27Z)
- Deep Parallel Spectral Neural Operators for Solving Partial Differential Equations with Enhanced Low-Frequency Learning Capability [11.121415128908566]
We propose a Deep Parallel Spectral Neural Operator (DPNO) to enhance the ability to learn low-frequency information.
Our method enhances the neural operator's ability to learn low-frequency information through parallel modules.
We smooth this information through convolutional mappings, thereby reducing high-frequency errors.
arXiv Detail & Related papers (2024-09-30T06:04:04Z)
- The Unreasonable Effectiveness of Solving Inverse Problems with Neural Networks [24.766470360665647]
We show that neural networks trained to learn solutions to inverse problems can find better solutions than classical solvers, even on their training set.
Our findings suggest an alternative use for neural networks: rather than generalizing to new data for fast inference, they can also be used to find better solutions on known data.
arXiv Detail & Related papers (2024-08-15T12:38:10Z)
- Adaptive recurrent vision performs zero-shot computation scaling to unseen difficulty levels [6.053394076324473]
We investigate whether adaptive computation can also enable vision models to extrapolate solutions beyond their training distribution's difficulty level.
We combine convolutional recurrent neural networks (ConvRNNs) with a learnable halting mechanism based on Graves (2016), and evaluate the resulting adaptive ConvRNNs (AdRNNs) on two visual reasoning tasks, PathFinder and Mazes.
We show that 1) AdRNNs learn to dynamically halt processing early (or late) to solve easier (or harder) problems, and 2) these RNNs zero-shot generalize to more difficult problem settings not shown during training by dynamically increasing the number of recurrent iterations at test time; a halting loop of this kind is sketched after this entry.
arXiv Detail & Related papers (2023-11-12T21:07:04Z)
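A minimal, hedged sketch of a Graves (2016)-style adaptive halting loop is shown below. The GRU cell, layer sizes, and threshold are illustrative assumptions; the paper's AdRNNs are convolutional and differ in detail.

```python
# Simplified adaptive-computation-time loop (after Graves, 2016): run a
# recurrent cell until the accumulated halting probability reaches 1 - eps,
# weighting each step's state by the probability mass it claims.
import torch

class AdaptiveGRU(torch.nn.Module):
    def __init__(self, dim: int = 32, max_steps: int = 20, eps: float = 0.01):
        super().__init__()
        self.cell = torch.nn.GRUCell(dim, dim)
        self.halt = torch.nn.Linear(dim, 1)   # per-step halting probability
        self.max_steps, self.eps = max_steps, eps

    def forward(self, x: torch.Tensor):
        h = torch.zeros(x.shape[0], x.shape[1])
        out = torch.zeros_like(h)
        cum_p = torch.zeros(x.shape[0], 1)
        for step in range(self.max_steps):
            h = self.cell(x, h)
            p = torch.sigmoid(self.halt(h))
            w = torch.minimum(p, 1.0 - cum_p)   # cap by the remaining mass
            out = out + w * h
            cum_p = cum_p + w
            if bool((cum_p >= 1.0 - self.eps).all()):
                break   # easy inputs halt early; hard ones keep iterating
        return out, step + 1

model = AdaptiveGRU()
y, steps_used = model(torch.randn(4, 32))
model.max_steps = 100   # at test time, harder inputs may simply run longer
```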
- NeuralFastLAS: Fast Logic-Based Learning from Raw Data [54.938128496934695]
Symbolic rule learners generate interpretable solutions; however, they require the input to be encoded symbolically.
Neuro-symbolic approaches overcome this issue by mapping raw data to latent symbolic concepts using a neural network.
We introduce NeuralFastLAS, a scalable and fast end-to-end approach that trains a neural network jointly with a symbolic learner.
arXiv Detail & Related papers (2023-10-08T12:33:42Z)
- Solving Large-scale Spatial Problems with Convolutional Neural Networks [88.31876586547848]
We employ transfer learning to improve training efficiency for large-scale spatial problems.
We propose that a convolutional neural network (CNN) can be trained on small windows of signals but evaluated on arbitrarily large signals with little to no performance degradation; a minimal illustration of this property follows this entry.
arXiv Detail & Related papers (2023-06-14T01:24:42Z)
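The window-to-large-signal claim rests on a simple property worth making concrete: a purely convolutional model has no fixed-size layers, so weights fitted on short windows apply unchanged to much longer inputs. A minimal sketch, with illustrative (assumed) sizes:

```python
# A fully convolutional 1D network accepts any input length: the same
# weights trained on short windows evaluate on much longer signals.
import torch

net = torch.nn.Sequential(
    torch.nn.Conv1d(1, 16, kernel_size=5, padding=2), torch.nn.ReLU(),
    torch.nn.Conv1d(16, 16, kernel_size=5, padding=2), torch.nn.ReLU(),
    torch.nn.Conv1d(16, 1, kernel_size=1),
)

small = torch.randn(8, 1, 64)       # training windows of length 64
large = torch.randn(1, 1, 100_000)  # evaluation signal, ~1500x longer

assert net(small).shape == small.shape
assert net(large).shape == large.shape  # same weights, no retraining
```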
- Emulation Learning for Neuromimetic Systems [0.0]
Building on our recent research on neural quantization systems, we report results on learning quantized motions and resilience to channel dropouts.
We propose a general Deep Q Network (DQN) algorithm that not only learns the trajectory but is also resilient to channel dropout.
arXiv Detail & Related papers (2023-05-04T22:47:39Z)
- Incremental Spatial and Spectral Learning of Neural Operators for Solving Large-Scale PDEs [86.35471039808023]
We introduce the Incremental Fourier Neural Operator (iFNO), which progressively increases the number of frequency modes used by the model.
We show that iFNO reduces total training time while maintaining or improving generalization performance across various datasets.
Our method demonstrates a 10% lower testing error using 20% fewer frequency modes than the existing Fourier Neural Operator, while also training 30% faster; the mode-capping mechanism is sketched after this entry.
arXiv Detail & Related papers (2022-11-28T09:57:15Z)
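A hedged sketch of the incremental-modes idea: a 1D spectral layer allocates weights for a maximum number of Fourier modes but multiplies through only the first `active` of them, a cap that a schedule raises during training. This illustrates the principle only; it is not the iFNO implementation, and the channel/mode counts are assumptions.

```python
# Spectral convolution with a growing cap on the number of Fourier modes.
import torch

class IncrementalSpectralConv1d(torch.nn.Module):
    def __init__(self, channels: int = 8, max_modes: int = 32):
        super().__init__()
        scale = 1.0 / channels
        self.weight = torch.nn.Parameter(
            scale * torch.randn(channels, channels, max_modes,
                                dtype=torch.cfloat))
        self.active = 4  # start with only a few low-frequency modes

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (b, c, n)
        xf = torch.fft.rfft(x)                            # (b, c, n//2 + 1)
        out = torch.zeros_like(xf)
        m = min(self.active, xf.shape[-1], self.weight.shape[-1])
        # Only the first m modes participate; the rest stay zeroed out.
        out[..., :m] = torch.einsum(
            "bim,iom->bom", xf[..., :m], self.weight[..., :m])
        return torch.fft.irfft(out, n=x.shape[-1])

layer = IncrementalSpectralConv1d()
x = torch.randn(2, 8, 128)
y_low = layer(x)     # early training: only 4 modes participate
layer.active = 16    # a schedule raises the cap later in training
y_more = layer(x)
```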
- Tunable Complexity Benchmarks for Evaluating Physics-Informed Neural Networks on Coupled Ordinary Differential Equations [64.78260098263489]
In this work, we assess the ability of physics-informed neural networks (PINNs) to solve increasingly complex coupled ordinary differential equations (ODEs).
We show that PINNs eventually fail to produce correct solutions to these benchmarks as their complexity increases.
We identify several reasons why this may be the case, including insufficient network capacity, poor conditioning of the ODEs, and high local curvature, as measured by the Laplacian of the PINN loss; an estimator for this curvature measure is sketched after this entry.
arXiv Detail & Related papers (2022-10-14T15:01:32Z)
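Since the entry above measures local curvature via the Laplacian (trace of the Hessian) of the PINN loss, here is a hedged sketch of how such a quantity can be estimated with Hutchinson's trace estimator. The probe count and Rademacher probes are assumptions; the paper's exact measurement protocol may differ.

```python
# Hutchinson estimate of tr(H): E[v^T H v] over random Rademacher probes v
# equals the trace of the Hessian of the loss w.r.t. the parameters.
import torch

def loss_laplacian(loss: torch.Tensor, params,
                   n_probes: int = 10) -> torch.Tensor:
    grads = torch.autograd.grad(loss, params, create_graph=True)
    flat = torch.cat([g.flatten() for g in grads])
    est = torch.zeros(())
    for _ in range(n_probes):
        v = torch.randint_like(flat, 0, 2) * 2.0 - 1.0  # entries in {-1, +1}
        # Hessian-vector product via a second differentiation of grad @ v.
        hv = torch.autograd.grad(flat @ v, params, retain_graph=True)
        est = est + torch.cat([h.flatten() for h in hv]) @ v
    return est / n_probes

# Toy usage on a stand-in for a PINN residual loss:
model = torch.nn.Linear(2, 1)
loss = (model(torch.randn(8, 2)) ** 2).mean()
print(loss_laplacian(loss, list(model.parameters())).item())
```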
- Hierarchical Learning to Solve Partial Differential Equations Using Physics-Informed Neural Networks [2.0305676256390934]
We propose a hierarchical approach to improve the convergence rate and accuracy of the neural network solution to partial differential equations.
We validate the efficiency and robustness of the proposed hierarchical approach through a suite of linear and nonlinear partial differential equations.
arXiv Detail & Related papers (2021-12-02T13:53:42Z)
- Characterizing possible failure modes in physics-informed neural networks [55.83255669840384]
Recent work in scientific machine learning has developed so-called physics-informed neural network (PINN) models.
We demonstrate that, while existing PINN methodologies can learn good models for relatively trivial problems, they can easily fail to learn relevant physical phenomena even for simple PDEs.
We show that these possible failure modes are not due to the lack of expressivity in the NN architecture, but that the PINN's setup makes the loss landscape very hard to optimize.
arXiv Detail & Related papers (2021-09-02T16:06:45Z)
- Inverse-Dirichlet Weighting Enables Reliable Training of Physics Informed Neural Networks [2.580765958706854]
We describe and remedy a failure mode that may arise from multi-scale dynamics with scale imbalances during training of deep neural networks.
PINNs are popular machine-learning templates that allow for seamless integration of physical equation models with data.
For inverse modeling using sequential training, we find that inverse-Dirichlet weighting protects a PINN against catastrophic forgetting; a sketch of this weighting scheme follows this entry.
arXiv Detail & Related papers (2021-07-02T10:01:37Z)
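A hedged sketch of the balancing idea: each loss term gets a weight proportional to the ratio of the largest gradient variance across terms to its own gradient variance, so no single term's gradients dominate. This follows the spirit of inverse-Dirichlet weighting; the paper's exact update rule may differ, and the epsilon below is an assumption for stability.

```python
# Inverse-Dirichlet-style loss balancing: weight each term by
# max-gradient-variance / own-gradient-variance.
import torch

def grad_variance(term: torch.Tensor, params) -> torch.Tensor:
    """Variance of one loss term's gradient over all parameters."""
    grads = torch.autograd.grad(term, params, retain_graph=True,
                                allow_unused=True)
    flat = torch.cat([g.flatten() for g in grads if g is not None])
    return flat.var()

def inverse_dirichlet_weights(terms, params) -> torch.Tensor:
    var = torch.stack([grad_variance(t, params) for t in terms]).detach()
    return var.max() / (var + 1e-12)  # epsilon is an assumption

# Toy usage: two badly imbalanced terms on a shared model.
model = torch.nn.Linear(1, 1)
params = list(model.parameters())
x = torch.randn(16, 1)
terms = [(model(x) ** 2).mean(), 1e4 * ((model(x) - 1.0) ** 2).mean()]
w = inverse_dirichlet_weights(terms, params)
total = sum(wi * ti for wi, ti in zip(w, terms))  # balanced objective
total.backward()
```

In a real PINN training loop the weights would typically be recomputed only every few hundred steps, since each refresh costs one extra backward pass per loss term.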
- Conditional physics informed neural networks [85.48030573849712]
We introduce conditional PINNs (physics-informed neural networks) for estimating solutions to classes of eigenvalue problems.
We show that a single deep neural network can learn the solution of partial differential equations for an entire class of problems.
arXiv Detail & Related papers (2021-04-06T18:29:14Z)