TGPT-PINN: Nonlinear model reduction with transformed GPT-PINNs
- URL: http://arxiv.org/abs/2403.03459v1
- Date: Wed, 6 Mar 2024 04:49:18 GMT
- Title: TGPT-PINN: Nonlinear model reduction with transformed GPT-PINNs
- Authors: Yanlai Chen, Yajie Ji, Akil Narayan, Zhenli Xu
- Abstract summary: We introduce the Transformed Generative Pre-Trained Physics-Informed Neural Networks (TGPT-PINN).
TGPT-PINN is a network-of-networks design achieving snapshot-based model reduction.
We demonstrate this new capability for nonlinear model reduction in the PINNs framework by several non-trivial partial differential equations.
- Score: 1.6093211760643649
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce the Transformed Generative Pre-Trained Physics-Informed Neural
Networks (TGPT-PINN) for accomplishing nonlinear model order reduction (MOR) of
transport-dominated partial differential equations in an MOR-integrating PINNs
framework. Building on the recent development of the GPT-PINN that is a
network-of-networks design achieving snapshot-based model reduction, we design
and test a novel paradigm for nonlinear model reduction that can effectively
tackle problems with parameter-dependent discontinuities. Through incorporation
of a shock-capturing loss function component as well as a parameter-dependent
transform layer, the TGPT-PINN overcomes the limitations of linear model
reduction in the transport-dominated regime. We demonstrate this new capability
for nonlinear model reduction in the PINNs framework by several nontrivial
parametric partial differential equations.
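The abstract names two key ingredients: a parameter-dependent transform layer composed with a pre-trained snapshot network, and a shock-capturing loss component. A minimal PyTorch sketch, assuming an affine transport-undoing transform and a total-variation-style shock penalty (neither is necessarily the paper's exact formulation, and `snapshot_net` stands in for a pre-trained GPT-PINN):

```python
import torch
import torch.nn as nn

class TransformLayer(nn.Module):
    """Parameter-dependent map (x, t) -> transformed coordinates.
    Assumed affine here; the paper's transform layer is more general."""
    def __init__(self):
        super().__init__()
        self.speed = nn.Parameter(torch.zeros(1))  # learned per parameter mu

    def forward(self, x, t):
        return x - self.speed * t, t               # undo the transport

class TGPTPINN(nn.Module):
    def __init__(self, snapshot_net: nn.Module):
        super().__init__()
        self.transform = TransformLayer()
        self.snapshot = snapshot_net                # frozen, pre-trained PINN
        for p in self.snapshot.parameters():
            p.requires_grad_(False)

    def forward(self, x, t):
        xs, ts = self.transform(x, t)
        return self.snapshot(torch.cat([xs, ts], dim=-1))

def shock_capturing_term(u_x):
    # One plausible reading of the shock-capturing component: penalize the
    # total variation of u_x near parameter-dependent discontinuities.
    return torch.mean(torch.abs(u_x))
```

Under this reading, only the (few) transform parameters are trained for a new parameter value, which is what keeps the nonlinear reduction cheap while the frozen snapshot network supplies the solution profile.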
Related papers
- Toward Efficient Spiking Transformers: Synapse Pruning Meets Synergistic Learning-Based Compensation [5.496016535669561]
We propose combining synapse pruning with a synergistic learning-based compensation strategy to derive lightweight Transformer-based models.
Experiments on benchmark datasets demonstrate that the proposed methods significantly reduce model size and computational overhead while maintaining competitive performance.
arXiv Detail & Related papers (2025-08-04T02:19:38Z)
- ProPINN: Demystifying Propagation Failures in Physics-Informed Neural Networks [71.02216400133858]
Physics-informed neural networks (PINNs) have raised high expectations for solving partial differential equations (PDEs).
Previous research observed the propagation failure phenomenon of PINNs.
This paper provides the first formal and in-depth study of propagation failure and its root cause.
arXiv Detail & Related papers (2025-02-02T13:56:38Z)
- Controlling Grokking with Nonlinearity and Data Symmetry [0.0]
Plotting the even PCA projections of the last layer's weights against their odd projections yields patterns that become significantly more uniform as the nonlinearity is increased.
A metric for the network's generalization ability is inferred from the entropy of the layer weights, while the degree of nonlinearity is related to correlations between the local entropy of the weights of the neurons in the final layer.
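One concrete reading of that entropy metric (an assumption, since the summary does not pin down the definition): treat the normalized weight magnitudes of a layer as a probability distribution and take its Shannon entropy.

```python
import numpy as np

def weight_entropy(w: np.ndarray) -> float:
    """Shannon entropy of a layer's normalized weight magnitudes.
    One natural reading of the metric; the paper's exact definition
    may differ."""
    p = np.abs(w).ravel()
    p = p / p.sum()
    p = p[p > 0]                      # drop zeros before taking the log
    return float(-(p * np.log(p)).sum())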
arXiv Detail & Related papers (2024-11-08T06:19:29Z)
- Sig-Splines: universal approximation and convex calibration of time series generative models [0.0]
Our algorithm incorporates linear transformations and the signature transform as a seamless substitution for traditional neural networks.
This approach enables us not only to retain the universality property inherent in neural networks but also to introduce convexity in the model's parameters.
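For intuition, a minimal NumPy sketch of a level-2 truncated path signature, the kind of fixed feature map that can replace the network so the remaining calibration is linear (hence convex) in the weights; production code would use a dedicated signature library and higher truncation levels:

```python
import numpy as np

def signature_level2(path: np.ndarray) -> np.ndarray:
    """Truncated (level-2) signature of a path of shape (T, d)."""
    inc = np.diff(path, axis=0)                 # (T-1, d) increments
    level1 = inc.sum(axis=0)                    # d terms: X_T - X_0
    # level-2 iterated integrals S[i, j] = sum_k (X_k^i - X_0^i) * dX_k^j,
    # using the displacement at the left endpoint of each step
    disp = np.cumsum(inc, axis=0) - inc
    level2 = disp.T @ inc                       # (d, d)
    return np.concatenate([level1, level2.ravel()])

# Features enter the model linearly, so fitting the weights is convex.
path = np.cumsum(np.random.randn(100, 2), axis=0)
features = signature_level2(path)               # shape (2 + 4,)
```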
arXiv Detail & Related papers (2023-07-19T05:58:21Z)
- Unifying Model-Based and Neural Network Feedforward: Physics-Guided Neural Networks with Linear Autoregressive Dynamics [0.0]
This paper develops a feedforward control framework to compensate for unknown nonlinear dynamics.
The feedforward controller is parametrized as a parallel combination of a physics-based model and a neural network.
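A minimal sketch of that parallel structure, assuming a mass-damper physics model and an illustrative network; the paper's actual model and parametrization differ:

```python
import torch
import torch.nn as nn

class PhysicsGuidedFeedforward(nn.Module):
    """Parallel combination: physics-based model + NN residual."""
    def __init__(self, mass=1.0, damping=0.1):
        super().__init__()
        self.mass = nn.Parameter(torch.tensor(mass))
        self.damping = nn.Parameter(torch.tensor(damping))
        self.nn_part = nn.Sequential(               # learns the unknown,
            nn.Linear(2, 32), nn.Tanh(),            # unmodeled dynamics
            nn.Linear(32, 1),
        )

    def forward(self, vel, acc):
        physics = self.mass * acc + self.damping * vel   # model-based term
        residual = self.nn_part(torch.stack([vel, acc], dim=-1)).squeeze(-1)
        return physics + residual                        # parallel sum
```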
arXiv Detail & Related papers (2022-09-26T08:01:28Z)
- Neural Operator with Regularity Structure for Modeling Dynamics Driven by SPDEs [70.51212431290611]
Stochastic partial differential equations (SPDEs) are significant tools for modeling dynamics in many areas, including atmospheric sciences and physics.
We propose the Neural Operator with Regularity Structure (NORS), which incorporates feature vectors from the theory of regularity structures for modeling dynamics driven by SPDEs.
We conduct experiments on various SPDEs, including the dynamic Phi^4_1 model and the 2d Navier-Stokes equation.
arXiv Detail & Related papers (2022-04-13T08:53:41Z)
- Learning Physics-Informed Neural Networks without Stacked Back-propagation [82.26566759276105]
We develop a novel approach that can significantly accelerate the training of Physics-Informed Neural Networks.
In particular, we parameterize the PDE solution by the Gaussian smoothed model and show that, derived from Stein's Identity, the second-order derivatives can be efficiently calculated without back-propagation.
Experimental results show that our proposed method can achieve competitive error compared to standard PINN training but is two orders of magnitude faster.
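The derivative trick is concrete enough to sketch. For a Gaussian-smoothed model f_sigma(x) = E[f(x + sigma*eps)], Stein's identity gives grad f_sigma(x) = E[eps f(x + sigma*eps)]/sigma and hess f_sigma(x) = E[(eps eps^T - I) f(x + sigma*eps)]/sigma^2, so both derivatives follow from function values alone. A Monte Carlo sketch (the sampling scheme is illustrative, not the paper's exact estimator):

```python
import numpy as np

def smoothed_derivatives(f, x, sigma=0.1, n_samples=4096, rng=None):
    """Gradient and Hessian of the Gaussian-smoothed f at x,
    estimated via Stein's identity with no back-propagation."""
    rng = np.random.default_rng(rng)
    d = x.shape[0]
    eps = rng.standard_normal((n_samples, d))
    fx = f(x + sigma * eps)                       # function values only
    grad = (eps * fx[:, None]).mean(axis=0) / sigma
    outer = eps[:, :, None] * eps[:, None, :] - np.eye(d)
    hess = (outer * fx[:, None, None]).mean(axis=0) / sigma**2
    return grad, hess                             # of f_sigma, not f itself

# Example: f(x) = sum(x^2) has smoothed gradient 2x and Hessian 2I.
g, H = smoothed_derivatives(lambda z: (z**2).sum(axis=-1), np.zeros(2))
```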
arXiv Detail & Related papers (2022-02-18T18:07:54Z)
- Physics-Informed Neural Operator for Learning Partial Differential Equations [55.406540167010014]
PINO is the first hybrid approach incorporating data and PDE constraints at different resolutions to learn the operator.
The resulting PINO model can accurately approximate the ground-truth solution operator for many popular PDE families.
arXiv Detail & Related papers (2021-11-06T03:41:34Z)
- Nonlinear proper orthogonal decomposition for convection-dominated flows [0.0]
We propose an end-to-end Galerkin-free model combining autoencoders with long short-term memory networks to model the dynamics.
Our approach not only improves the accuracy, but also significantly reduces the computational cost of training and testing.
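A hedged sketch of that Galerkin-free pipeline, with illustrative layer sizes: the autoencoder plays the role of a nonlinear POD basis, and the LSTM advances the latent state instead of a Galerkin-projected system.

```python
import torch
import torch.nn as nn

class NonlinearROM(nn.Module):
    """Autoencoder compresses snapshots; LSTM evolves the latent state."""
    def __init__(self, n_full=1024, n_latent=8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_full, 128), nn.ELU(),
                                     nn.Linear(128, n_latent))
        self.decoder = nn.Sequential(nn.Linear(n_latent, 128), nn.ELU(),
                                     nn.Linear(128, n_full))
        self.lstm = nn.LSTM(n_latent, n_latent, batch_first=True)

    def forward(self, snapshots):                 # (batch, time, n_full)
        z = self.encoder(snapshots)               # nonlinear latent trajectory
        z_next, _ = self.lstm(z)                  # predicted latent dynamics
        return self.decoder(z_next)               # reconstructed full state
```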
arXiv Detail & Related papers (2021-10-15T18:05:34Z)
- Neural-network acceleration of projection-based model-order-reduction for finite plasticity: Application to RVEs [0.0]
A neural network is developed to accelerate a projection-based model-order reduction of a representative volume element (RVE).
The online simulations are equation-free, meaning that no system of equations needs to be solved iteratively.
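One way to read "equation-free": a trained surrogate maps macroscopic inputs directly to reduced coordinates, which a precomputed basis expands to the full field. A sketch with assumed dimensions and a stand-in basis:

```python
import torch
import torch.nn as nn

n_full, n_reduced = 5000, 10
V = torch.randn(n_full, n_reduced)               # stand-in for a POD basis
surrogate = nn.Sequential(nn.Linear(6, 64), nn.Tanh(),
                          nn.Linear(64, n_reduced))

def online_solve(macro_strain):                  # no iterative solve needed
    q = surrogate(macro_strain)                  # reduced coordinates
    return V @ q                                 # full-field reconstruction
```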
arXiv Detail & Related papers (2021-09-16T06:45:22Z)
- Rate Distortion Characteristic Modeling for Neural Image Compression [59.25700168404325]
End-to-end optimization capability offers neural image compression (NIC) superior lossy compression performance.
However, distinct models must be trained to reach different points in the rate-distortion (R-D) space.
We formulate the essential mathematical functions that describe the R-D behavior of NIC using deep networks and statistical modeling.
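In that spirit, a small illustrative sketch (the exponential form and the sample points are assumptions, not the paper's model): fit a parametric distortion-rate curve to a few (R, D) measurements, then query it at unseen rates.

```python
import numpy as np
from scipy.optimize import curve_fit

def d_of_r(r, a, b, c):
    # Assumed parametric form for the distortion-rate curve
    return a * np.exp(-b * r) + c

rates = np.array([0.25, 0.5, 1.0, 2.0])          # bits per pixel (example)
dists = np.array([0.020, 0.011, 0.005, 0.002])   # MSE (example values)
params, _ = curve_fit(d_of_r, rates, dists, p0=(0.05, 1.0, 0.0))
print(d_of_r(1.5, *params))                      # predict D at an unseen rate
```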
arXiv Detail & Related papers (2021-06-24T12:23:05Z)
- Adaptive Subcarrier, Parameter, and Power Allocation for Partitioned Edge Learning Over Broadband Channels [69.18343801164741]
Partitioned edge learning (PARTEL) implements parameter-server training, a well-known distributed learning method, in wireless networks.
We consider the case of deep neural network (DNN) models that can be trained using PARTEL by introducing some auxiliary variables.
arXiv Detail & Related papers (2020-10-08T15:27:50Z)
- Training End-to-End Analog Neural Networks with Equilibrium Propagation [64.0476282000118]
We introduce a principled method to train end-to-end analog neural networks by gradient descent.
We show mathematically that a class of analog neural networks (called nonlinear resistive networks) are energy-based models.
Our work can guide the development of a new generation of ultra-fast, compact and low-power neural networks supporting on-chip learning.
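The contrastive, two-phase update at the heart of equilibrium propagation can be shown on a toy energy; here the "network" is a single linear unit with energy E(s) = 0.5*s^2 - w*x*s (an illustration, not the paper's resistive-circuit model):

```python
import numpy as np

def eqprop_step(w, x, y, beta=0.01, lr=0.1):
    s_free = w * x                                # phase 1: free equilibrium
    s_nudged = (w * x + beta * y) / (1 + beta)    # phase 2: nudged toward y
    # dE/dw = -x*s; contrast the two phases to estimate the loss gradient
    grad = (-x * s_nudged + x * s_free) / beta
    return w - lr * grad

w = 0.0
for _ in range(100):
    w = eqprop_step(w, x=1.0, y=2.0)
print(round(w, 3))                                # approaches 2.0 (w*x = y)
```

As beta -> 0 the contrastive estimate recovers the true loss gradient, which is what makes the rule implementable with local physical measurements instead of back-propagation.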
arXiv Detail & Related papers (2020-06-02T23:38:35Z)