Parametric Taylor series based latent dynamics identification neural networks
- URL: http://arxiv.org/abs/2410.04193v1
- Date: Sat, 5 Oct 2024 15:10:32 GMT
- Title: Parametric Taylor series based latent dynamics identification neural networks
- Authors: Xinlei Lin, Dunhui Xiao
- Abstract summary: A new parametric latent identification of nonlinear dynamics neural network, P-TLDINets, is introduced.
It relies on a novel neural network structure based on Taylor series expansion and ResNets.
- Score: 0.3139093405260182
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Numerically solving parameterised partial differential equations (P-PDEs) is highly practical yet computationally expensive, driving the development of reduced-order models (ROMs). Recently, methods that combine latent space identification techniques with deep learning algorithms (e.g., autoencoders) have shown great potential in describing dynamical systems in a lower-dimensional latent space, for example, LaSDI, gLaSDI and GPLaSDI. In this paper, a new parametric latent identification of nonlinear dynamics neural network, P-TLDINets, is introduced, which relies on a novel neural network structure based on Taylor series expansion and ResNets to learn the ODEs that govern the reduced-space dynamics. During training, the Taylor series-based Latent Dynamic Neural Networks (TLDNets) and the identified equations are trained simultaneously to generate a smoother latent space. To facilitate the parameterised study, a $k$-nearest neighbours (KNN) method based on an inverse distance weighting (IDW) interpolation scheme is introduced to predict the identified ODE coefficients using local information. Compared to other latent dynamics identification methods based on autoencoders, P-TLDINets retain the interpretability of the model. Additionally, they circumvent the building of explicit autoencoders, avoid dependency on specific grids, and feature a more lightweight structure that is easy to train with high generalisation capability and accuracy; they can also handle meshes of different scales. P-TLDINets improve training speeds by nearly a hundred times compared to GPLaSDI and gLaSDI, while maintaining an $L_2$ error below $2\%$ relative to high-fidelity models.
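To make the parameterised-study ingredients concrete, the sketch below shows (i) a minimal KNN/IDW interpolation of identified latent-ODE coefficients at an unseen parameter and (ii) a second-order truncated Taylor step that advances the latent state under the interpolated ODE. This is a hedged illustration, not the paper's implementation: the linear ODE form dz/dt = A z, the values of k and the IDW power, and all function and variable names are assumptions made for the example.

```python
import numpy as np

def knn_idw_coeffs(train_params, train_coeffs, query, k=4, power=2.0):
    """Predict identified-ODE coefficients at an unseen parameter by
    inverse-distance-weighted averaging over the k nearest training
    parameters (local information only)."""
    d = np.linalg.norm(train_params - query, axis=1)
    nn = np.argsort(d)[:k]
    if d[nn[0]] < 1e-12:               # query coincides with a training sample
        return train_coeffs[nn[0]]
    w = 1.0 / d[nn] ** power           # closer neighbours get larger weights
    return np.tensordot(w / w.sum(), train_coeffs[nn], axes=1)

def taylor_rollout(z0, A, dt, n_steps):
    """Advance a linear latent ODE dz/dt = A z with a second-order
    truncated Taylor step: z <- z + dt*(A z) + (dt**2 / 2)*(A (A z))."""
    z, traj = z0.copy(), [z0.copy()]
    for _ in range(n_steps):
        dz = A @ z                      # first time derivative from the ODE
        ddz = A @ dz                    # second derivative via the chain rule
        z = z + dt * dz + 0.5 * dt**2 * ddz
        traj.append(z.copy())
    return np.array(traj)

# Toy example: 2-D parameter space, 3x3 latent coefficient matrices.
params = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
coeffs = np.stack([-(1.0 + 0.1 * i) * np.eye(3) for i in range(4)])
A_query = knn_idw_coeffs(params, coeffs, np.array([0.5, 0.5]))
print(taylor_rollout(np.ones(3), A_query, dt=0.01, n_steps=5))
```

Because the IDW weights scale as $d_i^{-p}$, nearby training parameters dominate the average, so the interpolated coefficients vary smoothly across the parameter space.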
Related papers
- Liquid Fourier Latent Dynamics Networks for fast GPU-based numerical simulations in computational cardiology [0.0]
We propose an extension of Latent Dynamics Networks (LDNets) to create parameterized space-time surrogate models for multiscale and multiphysics sets of highly nonlinear differential equations on complex geometries.
LFLDNets employ a neurologically-inspired, sparse liquid neural network for temporal dynamics, relaxing the requirement of a numerical solver for time advancement and leading to superior performance in terms of parameters, accuracy, efficiency and learned trajectories.
arXiv Detail & Related papers (2024-08-19T09:14:25Z)
- Enhancing lattice kinetic schemes for fluid dynamics with Lattice-Equivariant Neural Networks [79.16635054977068]
We present a new class of equivariant neural networks, dubbed Lattice-Equivariant Neural Networks (LENNs).
Our approach develops within a recently introduced framework aimed at learning neural network-based surrogate models of Lattice Boltzmann collision operators.
Our work opens the way towards practical utilization of machine learning-augmented Lattice Boltzmann CFD in real-world simulations.
arXiv Detail & Related papers (2024-05-22T17:23:15Z)
- Latent Dynamics Networks (LDNets): learning the intrinsic dynamics of spatio-temporal processes [2.3694122563610924]
The Latent Dynamics Network (LDNet) is able to discover low-dimensional intrinsic dynamics of possibly non-Markovian dynamical systems.
LDNets are lightweight and easy-to-train, with excellent accuracy and generalization properties, even in time-extrapolation regimes.
arXiv Detail & Related papers (2023-04-28T21:11:13Z)
- Implicit Stochastic Gradient Descent for Training Physics-informed Neural Networks [51.92362217307946]
Physics-informed neural networks (PINNs) have been demonstrated to be effective in solving forward and inverse differential equation problems.
However, PINNs can become trapped in training failures when the target functions to be approximated exhibit high-frequency or multi-scale features.
In this paper, we propose to employ the implicit stochastic gradient descent (ISGD) method to train PINNs, improving the stability of the training process.
arXiv Detail & Related papers (2023-03-03T08:17:47Z)
- ConCerNet: A Contrastive Learning Based Framework for Automated Conservation Law Discovery and Trustworthy Dynamical System Prediction [82.81767856234956]
This paper proposes a new learning framework named ConCerNet to improve the trustworthiness of DNN-based dynamics modeling.
We show that our method consistently outperforms the baseline neural networks in both coordinate error and conservation metrics.
arXiv Detail & Related papers (2023-02-11T21:07:30Z)
- Solving High-Dimensional PDEs with Latent Spectral Models [74.1011309005488]
We present Latent Spectral Models (LSM) toward an efficient and precise solver for high-dimensional PDEs.
Inspired by classical spectral methods in numerical analysis, we design a neural spectral block to solve PDEs in the latent space.
LSM achieves consistent state-of-the-art results, with an average relative gain of 11.5% across seven benchmarks.
arXiv Detail & Related papers (2023-01-30T04:58:40Z)
- gLaSDI: Parametric Physics-informed Greedy Latent Space Dynamics Identification [0.5249805590164902]
A physics-informed greedy Latent Space Dynamics Identification (gLaSDI) method is proposed for accurate, efficient, and robust data-driven reduced-order modeling.
An interactive training algorithm is adopted for the autoencoder and local DI models, which enables identification of simple latent-space dynamics.
The effectiveness of the proposed framework is demonstrated by modeling various nonlinear dynamical problems.
arXiv Detail & Related papers (2022-04-26T00:15:46Z)
- Neural Operator with Regularity Structure for Modeling Dynamics Driven by SPDEs [70.51212431290611]
Stochastic partial differential equations (SPDEs) are significant tools for modeling dynamics in many areas, including atmospheric sciences and physics.
We propose the Neural Operator with Regularity Structure (NORS) which incorporates the feature vectors for modeling dynamics driven by SPDEs.
We conduct experiments on various SPDEs, including the dynamic $\Phi^4_1$ model and the 2D Navier-Stokes equation.
arXiv Detail & Related papers (2022-04-13T08:53:41Z)
- Learning POD of Complex Dynamics Using Heavy-ball Neural ODEs [7.388910452780173]
We leverage the recently proposed heavy-ball neural ODEs (HBNODEs) for learning data-driven reduced-order models.
HBNODEs enjoy several practical advantages for learning POD-based ROMs with theoretical guarantees.
arXiv Detail & Related papers (2022-02-24T22:00:25Z)
- Accelerating Neural ODEs Using Model Order Reduction [0.0]
We show that mathematical model order reduction methods can be used for compressing and accelerating Neural ODEs.
We implement our novel compression method by developing Neural ODEs that integrate the necessary subspace-projection and interpolation operations as layers of the neural network.
arXiv Detail & Related papers (2021-05-28T19:27:09Z)
- Liquid Time-constant Networks [117.57116214802504]
We introduce a new class of time-continuous recurrent neural network models.
Instead of declaring a learning system's dynamics by implicit nonlinearities, we construct networks of linear first-order dynamical systems.
These neural networks exhibit stable and bounded behavior and yield superior expressivity within the family of neural ordinary differential equations.
arXiv Detail & Related papers (2020-06-08T09:53:35Z)