DiffGrad for Physics-Informed Neural Networks
- URL: http://arxiv.org/abs/2409.03239v1
- Date: Thu, 5 Sep 2024 04:39:35 GMT
- Title: DiffGrad for Physics-Informed Neural Networks
- Authors: Jamshaid Ul Rahman, Nimra
- Abstract summary: Burgers' equation, a fundamental equation in fluid dynamics that is extensively used in PINNs, provides flexible results with the Adam optimizer, which does not account for past gradients.
This paper introduces a novel strategy for solving Burgers' equation by incorporating DiffGrad with PINNs.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Physics-Informed Neural Networks (PINNs) are regarded as state-of-the-art tools for addressing highly nonlinear problems based on partial differential equations. Despite their broad range of applications, PINNs face several performance challenges, including efficiency, computational cost, and accuracy. Burgers' equation, a fundamental equation in fluid dynamics that is extensively used in PINNs, provides flexible results with the Adam optimizer, which does not account for past gradients. This paper introduces a novel strategy for solving Burgers' equation by incorporating DiffGrad with PINNs, a method that leverages the difference between current and immediately preceding gradients to enhance performance. A comprehensive computational analysis is conducted using the Adam, Adamax, RMSprop, and DiffGrad optimizers to evaluate and compare their effectiveness. Our approach includes visualizing the solutions over space at various time intervals to demonstrate the accuracy of the network. The results show that DiffGrad not only improves the accuracy of the solution but also reduces training time compared to the other optimizers.
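For context, DiffGrad (Dubey et al., 2019) keeps Adam's moment estimates but scales each step by a "friction" coefficient computed from the change between the previous and current gradients. The sketch below shows that update rule in plain NumPy; it is a minimal illustration with my own variable names, not the authors' implementation.

```python
import numpy as np

def diffgrad_step(theta, grad, prev_grad, m, v, t,
                  lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One DiffGrad update: Adam-style moments, with the step scaled
    by a friction coefficient built from the short-term gradient change."""
    m = beta1 * m + (1 - beta1) * grad        # first moment (as in Adam)
    v = beta2 * v + (1 - beta2) * grad ** 2   # second moment (as in Adam)
    m_hat = m / (1 - beta1 ** t)              # bias correction
    v_hat = v / (1 - beta2 ** t)
    # Friction coefficient: sigmoid of |g_{t-1} - g_t|. Near-constant
    # gradients give ~0.5 (damped steps); rapidly changing gradients
    # give ~1.0 (full Adam-sized steps).
    xi = 1.0 / (1.0 + np.exp(-np.abs(prev_grad - grad)))
    theta = theta - lr * xi * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v
```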
Related papers
- Densely Multiplied Physics Informed Neural Networks [1.8554335256160261]
Physics-informed neural networks (PINNs) have shown great potential in dealing with nonlinear partial differential equations (PDEs).
This paper improves the neural network architecture to boost PINN performance.
We propose a densely multiplied PINN (DM-PINN) architecture, which multiplies the output of a hidden layer with the outputs of all subsequent hidden layers.
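The densely multiplied architecture can be read as maintaining a running element-wise product of hidden-layer outputs. Below is a minimal PyTorch sketch of one plausible reading; the wiring, layer sizes, and activations are my assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

class DenselyMultipliedMLP(nn.Module):
    """Sketch of a densely multiplied MLP: a running element-wise
    product of all hidden outputs feeds the output layer. One plausible
    reading of DM-PINN, not the authors' exact code."""
    def __init__(self, dim_in=2, width=64, depth=4, dim_out=1):
        super().__init__()
        self.inp = nn.Linear(dim_in, width)
        self.hidden = nn.ModuleList(
            nn.Linear(width, width) for _ in range(depth))
        self.out = nn.Linear(width, dim_out)

    def forward(self, x):
        h = torch.tanh(self.inp(x))
        prod = h                      # running product of hidden outputs
        for layer in self.hidden:
            h = torch.tanh(layer(h))
            prod = prod * h           # dense multiplicative connection
        return self.out(prod)
```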
arXiv Detail & Related papers (2024-02-06T20:45:31Z)
- Burgers' PINNs with Implicit Euler Transfer Learning [0.0]
The Burgers equation is a well-established test case in the computational modeling of several phenomena.
We present the application of Physics-Informed Neural Networks (PINNs) with an implicit Euler transfer learning approach to solve the Burgers equation.
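A hedged sketch of the approach for Burgers' equation u_t + u*u_x = nu*u_xx: discretize time with implicit Euler, train one small network per time level, and warm-start each level from the previous level's weights (the transfer-learning step). Boundary terms and hyperparameters are omitted, and all names are mine.

```python
import copy
import torch
import torch.nn as nn

nu, dt = 0.01, 0.01
net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
x = torch.linspace(-1.0, 1.0, 256).reshape(-1, 1).requires_grad_(True)
u_prev = -torch.sin(torch.pi * x).detach()        # initial condition

for step in range(10):                            # a few time levels
    net_n = copy.deepcopy(net)                    # transfer weights
    opt = torch.optim.Adam(net_n.parameters(), lr=1e-3)
    for _ in range(200):                          # train this level
        u = net_n(x)
        u_x, = torch.autograd.grad(u.sum(), x, create_graph=True)
        u_xx, = torch.autograd.grad(u_x.sum(), x, create_graph=True)
        # Implicit Euler residual: (u - u_prev)/dt + u*u_x - nu*u_xx = 0
        # (boundary-condition loss terms omitted for brevity)
        res = (u - u_prev) / dt + u * u_x - nu * u_xx
        loss = res.pow(2).mean()
        opt.zero_grad(); loss.backward(); opt.step()
    net, u_prev = net_n, net_n(x).detach()
```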
arXiv Detail & Related papers (2023-10-23T20:15:45Z)
- Efficient Heterogeneous Graph Learning via Random Projection [58.4138636866903]
Heterogeneous Graph Neural Networks (HGNNs) are powerful tools for deep learning on heterogeneous graphs.
Recent pre-computation-based HGNNs use one-time message passing to transform a heterogeneous graph into regular-shaped tensors.
We propose a hybrid pre-computation-based HGNN, named Random Projection Heterogeneous Graph Neural Network (RpHGNN).
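The pre-computation idea can be illustrated in NumPy: propagate node features once along each relation's adjacency matrix, then compress each propagated block with a random projection so every block has the same fixed width. This is a generic sketch of the pattern, not RpHGNN's actual pipeline.

```python
import numpy as np

def random_projection_precompute(adj_list, feats, out_dim, rng=None):
    """Propagate features along each relation once, then compress each
    propagated block with a random Gaussian projection so the result
    has a fixed, regular shape (illustrative sketch only)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    blocks = []
    for adj in adj_list:                      # one adjacency per relation
        prop = adj @ feats                    # one-time message passing
        proj = rng.normal(size=(prop.shape[1], out_dim))
        proj /= np.sqrt(out_dim)              # keep norms roughly stable
        blocks.append(prop @ proj)
    return np.concatenate(blocks, axis=1)     # regular-shaped tensor
```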
arXiv Detail & Related papers (2023-10-23T01:25:44Z)
- A Multi-Head Ensemble Multi-Task Learning Approach for Dynamical Computation Offloading [62.34538208323411]
We propose a multi-head ensemble multi-task learning (MEMTL) approach with a shared backbone and multiple prediction heads (PHs).
MEMTL outperforms benchmark methods in both the inference accuracy and mean square error without requiring additional training data.
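A minimal PyTorch sketch of the shared-backbone/multi-head pattern, with the heads ensembled by simple averaging; the dimensions and the averaging rule are my assumptions, not details from the paper.

```python
import torch
import torch.nn as nn

class MultiHeadEnsemble(nn.Module):
    """Shared backbone with several prediction heads whose outputs are
    ensembled by averaging (illustrative reading of MEMTL)."""
    def __init__(self, dim_in=16, width=64, dim_out=4, n_heads=3):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(dim_in, width), nn.ReLU(),
            nn.Linear(width, width), nn.ReLU())
        self.heads = nn.ModuleList(
            nn.Linear(width, dim_out) for _ in range(n_heads))

    def forward(self, x):
        z = self.backbone(x)                 # shared representation
        preds = torch.stack([head(z) for head in self.heads])
        return preds.mean(dim=0)             # simple ensemble
```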
arXiv Detail & Related papers (2023-09-02T11:01:16Z)
- Guaranteed Approximation Bounds for Mixed-Precision Neural Operators [83.64404557466528]
We build on the intuition that neural operator learning inherently induces an approximation error.
We show that our approach reduces GPU memory usage by up to 50% and improves throughput by 58% with little or no reduction in accuracy.
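The savings come from doing most tensor work in half precision while keeping a loss-scaling safeguard. Below is a generic PyTorch mixed-precision step (standard autocast/GradScaler usage, not the paper's neural-operator code; requires a CUDA device).

```python
import torch

model = torch.nn.Linear(1024, 1024).cuda()
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
scaler = torch.cuda.amp.GradScaler()

x = torch.randn(64, 1024, device="cuda")
y = torch.randn(64, 1024, device="cuda")

with torch.autocast(device_type="cuda", dtype=torch.float16):
    loss = torch.nn.functional.mse_loss(model(x), y)  # fp16 compute
scaler.scale(loss).backward()   # scale loss to avoid fp16 underflow
scaler.step(opt)                # unscales gradients, then steps
scaler.update()                 # adjust the scale for the next step
```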
arXiv Detail & Related papers (2023-07-27T17:42:06Z)
- Implicit Stochastic Gradient Descent for Training Physics-informed Neural Networks [51.92362217307946]
Physics-informed neural networks (PINNs) have effectively been demonstrated in solving forward and inverse differential equation problems.
However, PINNs can become trapped in training failures when the target functions to be approximated exhibit high-frequency or multi-scale features.
In this paper, we propose to employ the implicit stochastic gradient descent (ISGD) method to train PINNs and improve the stability of the training process.
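Implicit gradient descent evaluates the gradient at the new iterate, theta_new = theta - lr * grad(theta_new), which is what stabilizes large steps. A generic NumPy sketch that solves this update by fixed-point iteration (not the paper's exact solver):

```python
import numpy as np

def isgd_step(theta, grad_fn, lr=0.1, n_inner=20):
    """One implicit gradient step: solve
    theta_new = theta - lr * grad_fn(theta_new)
    by fixed-point iteration (converges when lr * Lipschitz(grad) < 1)."""
    theta_new = theta - lr * grad_fn(theta)   # explicit warm start
    for _ in range(n_inner):
        theta_new = theta - lr * grad_fn(theta_new)
    return theta_new

# Example: quadratic f(theta) = 0.5 * ||theta||^2, so grad(theta) = theta;
# the exact implicit step is theta / (1 + lr).
theta = np.array([1.0, -2.0])
theta = isgd_step(theta, lambda t: t, lr=0.5)
```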
arXiv Detail & Related papers (2023-03-03T08:17:47Z)
- RBF-MGN: Solving spatiotemporal PDEs with Physics-informed Graph Neural Network [4.425915683879297]
We propose a novel framework based on graph neural networks (GNNs) and radial basis function finite difference (RBF-FD).
RBF-FD is used to construct a high-precision difference format of the differential equations to guide model training.
We illustrate the generalizability, accuracy, and efficiency of the proposed algorithms on different PDE parameters.
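RBF-FD derivative weights are obtained by requiring the stencil to be exact on radial basis functions centred at its nodes. A 1D NumPy sketch for a second-derivative stencil with Gaussian RBFs; the kernel and construction here are illustrative and may differ from RBF-MGN's.

```python
import numpy as np

def rbf_fd_weights_1d(x_stencil, x_center, eps=2.0):
    """Weights w such that sum_i w_i * u(x_i) approximates u''(x_center),
    exact on Gaussian RBFs centred at the stencil nodes."""
    phi = lambda r: np.exp(-(eps * r) ** 2)
    # Second derivative of the Gaussian RBF with respect to its argument.
    d2phi = lambda r: (4 * eps**4 * r**2 - 2 * eps**2) * np.exp(-(eps * r) ** 2)
    A = phi(x_stencil[:, None] - x_stencil[None, :])  # interpolation matrix
    b = d2phi(x_center - x_stencil)                   # target derivative
    return np.linalg.solve(A, b)

# Example: 3-point stencil for u'' at the midpoint; for small eps the
# weights approach the classical finite-difference [1, -2, 1] / h**2.
w = rbf_fd_weights_1d(np.array([-0.1, 0.0, 0.1]), 0.0)
```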
arXiv Detail & Related papers (2022-12-06T10:08:02Z)
- Efficient training of physics-informed neural networks via importance sampling [2.9005223064604078]
Physics-Informed Neural Networks (PINNs) are a class of deep neural networks that are trained to solve systems governed by partial differential equations (PDEs).
We show that an importance sampling approach improves the convergence behavior of PINN training.
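The core mechanism is simple: draw collocation points with probability proportional to an importance score, such as the current PDE residual magnitude. A generic NumPy sketch of that sampling step (the score choice and names are my assumptions, not the paper's exact scheme):

```python
import numpy as np

def sample_collocation(candidates, residuals, n, rng=None):
    """Residual-based importance sampling: pick n collocation points
    from `candidates` (an array) with probability proportional to the
    current PDE residual magnitude at each point."""
    rng = rng if rng is not None else np.random.default_rng(0)
    p = np.abs(residuals) + 1e-12     # avoid zero probabilities
    p = p / p.sum()
    idx = rng.choice(len(candidates), size=n, replace=False, p=p)
    return candidates[idx]
```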
arXiv Detail & Related papers (2021-04-26T02:45:10Z)
- Communication-Efficient Distributed Stochastic AUC Maximization with Deep Neural Networks [50.42141893913188]
We study distributed stochastic algorithms for large-scale AUC maximization with a deep neural network.
Our method requires far fewer communication rounds while retaining its theoretical guarantees.
Experiments on several datasets demonstrate the method's effectiveness and corroborate the theory.
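A common way to reduce communication rounds is to let each worker take several local steps between synchronizations. The toy NumPy sketch below shows only that generic pattern on a synthetic objective; it is not the paper's AUC-maximization algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
workers = [np.zeros(10) for _ in range(4)]   # simulated worker copies
for communication_round in range(50):
    for w in range(len(workers)):
        for _ in range(8):                   # local steps, no communication
            # noisy gradient of the toy objective 0.5 * ||theta||^2
            grad = workers[w] + 0.1 * rng.normal(size=10)
            workers[w] -= 0.05 * grad
    avg = np.mean(workers, axis=0)           # one communication per round
    workers = [avg.copy() for _ in workers]
```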
arXiv Detail & Related papers (2020-05-05T18:08:23Z)
- Self-Directed Online Machine Learning for Topology Optimization [58.920693413667216]
Self-directed Online Learning Optimization integrates a Deep Neural Network (DNN) with Finite Element Method (FEM) calculations.
Our algorithm was tested by four types of problems including compliance minimization, fluid-structure optimization, heat transfer enhancement and truss optimization.
It reduced the computational time by 2 to 5 orders of magnitude compared with directly using heuristic methods, and outperformed all state-of-the-art algorithms tested in our experiments.
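The loop alternates between fitting a surrogate to solver evaluations and letting the surrogate propose the next design to evaluate. A heavily simplified Python sketch: `fem_evaluate` is a hypothetical placeholder for the FEM solver, and the incumbent best point stands in for the DNN surrogate's minimizer.

```python
import numpy as np

def fem_evaluate(design):                       # hypothetical placeholder
    return float(np.sum((design - 0.3) ** 2))   # toy objective

rng = np.random.default_rng(0)
designs = [rng.random(5) for _ in range(8)]     # initial random designs
values = [fem_evaluate(d) for d in designs]

for round_ in range(20):
    # (1) Fit a surrogate to all data so far -- a DNN in the paper;
    #     here the incumbent best point stands in for its minimizer.
    best = designs[int(np.argmin(values))]
    # (2) Self-directed sampling: perturb the surrogate optimum.
    candidate = np.clip(best + 0.1 * rng.normal(size=5), 0.0, 1.0)
    # (3) Evaluate with the expensive solver and grow the dataset.
    designs.append(candidate)
    values.append(fem_evaluate(candidate))
```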
arXiv Detail & Related papers (2020-02-04T20:00:28Z)
This list is automatically generated from the titles and abstracts of the papers on this site.