Latent Dynamics Networks (LDNets): learning the intrinsic dynamics of
spatio-temporal processes
- URL: http://arxiv.org/abs/2305.00094v1
- Date: Fri, 28 Apr 2023 21:11:13 GMT
- Title: Latent Dynamics Networks (LDNets): learning the intrinsic dynamics of
spatio-temporal processes
- Authors: Francesco Regazzoni and Stefano Pagani and Matteo Salvador and Luca
Dede' and Alfio Quarteroni
- Abstract summary: Latent Dynamics Network (LDNet) is able to discover low-dimensional intrinsic dynamics of possibly non-Markovian dynamical systems.
LDNets are lightweight and easy-to-train, with excellent accuracy and generalization properties, even in time-extrapolation regimes.
- Score: 2.3694122563610924
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Predicting the evolution of systems that exhibit spatio-temporal dynamics in
response to external stimuli is a key enabling technology fostering scientific
innovation. Traditional equations-based approaches leverage first principles to
yield predictions through the numerical approximation of high-dimensional
systems of differential equations, thus calling for large-scale parallel
computing platforms and requiring large computational costs. Data-driven
approaches, instead, enable the description of systems evolution in
low-dimensional latent spaces, by leveraging dimensionality reduction and deep
learning algorithms. We propose a novel architecture, named Latent Dynamics
Network (LDNet), which is able to discover low-dimensional intrinsic dynamics
of possibly non-Markovian dynamical systems, thus predicting the time evolution
of space-dependent fields in response to external inputs. Unlike popular
approaches, in which the latent representation of the solution manifold is
learned by means of auto-encoders that map a high-dimensional discretization of
the system state into itself, LDNets automatically discover a low-dimensional
manifold while learning the latent dynamics, without ever operating in the
high-dimensional space. Furthermore, LDNets are meshless algorithms that do not
reconstruct the output on a predetermined grid of points, but rather at any
point of the domain, thus enabling weight-sharing across query-points. These
features make LDNets lightweight and easy-to-train, with excellent accuracy and
generalization properties, even in time-extrapolation regimes. We validate our
method on several test cases and we show that, for a challenging
highly-nonlinear problem, LDNets outperform state-of-the-art methods in terms
of accuracy (normalized error 5 times smaller), by employing a dramatically
smaller number of trainable parameters (more than 10 times fewer).
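Concretely, the architecture described above can be pictured as two small networks trained jointly: a dynamics network that advances a low-dimensional latent state driven by the external input, and a meshless reconstruction network that maps the latent state together with any spatial query point to the field value at that point. The PyTorch-style sketch below is only an illustration under assumed choices (network sizes, a zero initial latent state, an explicit-Euler latent update, and all names are our assumptions, not the authors' implementation):

```python
import torch
import torch.nn as nn

class LDNetSketch(nn.Module):
    """Illustrative LDNet-like model (a sketch, not the authors' code).

    dyn_net advances a low-dimensional latent state s(t) driven by the input
    u(t), without ever touching a high-dimensional discretization of the field.
    rec_net is meshless: it maps (s(t), x) to the field value at any query
    point x, so its weights are shared across all query points.
    """

    def __init__(self, n_latent=8, n_input=2, n_coord=2, n_field=1, width=32):
        super().__init__()
        self.dyn_net = nn.Sequential(
            nn.Linear(n_latent + n_input, width), nn.Tanh(),
            nn.Linear(width, n_latent),
        )
        self.rec_net = nn.Sequential(
            nn.Linear(n_latent + n_coord, width), nn.Tanh(),
            nn.Linear(width, n_field),
        )
        self.n_latent = n_latent

    def forward(self, u_seq, x_query, dt):
        """u_seq: (n_steps, n_input) input signal; x_query: (n_pts, n_coord)."""
        s = torch.zeros(self.n_latent)              # latent state, assumed s(0) = 0
        outputs = []
        for u in u_seq:                             # explicit Euler in the latent space
            s = s + dt * self.dyn_net(torch.cat([s, u]))
            s_rep = s.expand(x_query.shape[0], -1)  # share s across all query points
            outputs.append(self.rec_net(torch.cat([s_rep, x_query], dim=1)))
        return torch.stack(outputs)                 # (n_steps, n_pts, n_field)

# Example: a scalar field on a 2D domain driven by a 2-component input signal.
model = LDNetSketch()
u_seq = torch.randn(100, 2)         # 100 time steps of the external input
x_query = torch.rand(50, 2)         # 50 arbitrary query points (no fixed grid)
y = model(u_seq, x_query, dt=0.01)  # predicted field values, shape (100, 50, 1)
```

Because the reconstruction network takes one query point at a time, no fixed grid is ever assembled and the same weights serve every point, which is the weight-sharing, meshless behavior highlighted in the abstract.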
Related papers
- Parametric Taylor series based latent dynamics identification neural networks [0.3139093405260182]
A new latent dynamics identification approach for nonlinear systems, P-TLDINets, is introduced.
It relies on a novel neural network structure based on Taylor series expansion and ResNets.
arXiv Detail & Related papers (2024-10-05T15:10:32Z)
- Liquid Fourier Latent Dynamics Networks for fast GPU-based numerical simulations in computational cardiology [0.0]
We propose an extension of Latent Dynamics Networks (LDNets) to create parameterized space-time surrogate models for multiscale and multiphysics sets of highly nonlinear differential equations on complex geometries.
LFLDNets employ a neurologically-inspired, sparse liquid neural network for temporal dynamics, relaxing the requirement of a numerical solver for time advancement and leading to superior performance in terms of parameters, accuracy, efficiency and learned trajectories.
arXiv Detail & Related papers (2024-08-19T09:14:25Z)
- Systematic construction of continuous-time neural networks for linear dynamical systems [0.0]
We discuss a systematic approach to constructing neural architectures for modeling a subclass of dynamical systems.
We use a variant of continuous-time neural networks in which the output of each neuron evolves continuously as the solution of a first- or second-order ordinary differential equation (ODE).
Instead of deriving the network architecture and parameters from data, we propose a gradient-free algorithm that computes the sparse architecture and the network parameters directly from the given linear time-invariant (LTI) system.
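As a rough illustration of the neuron-as-ODE idea summarized here (a toy example with assumed matrices, not the construction proposed in that paper), each state of a linear time-invariant system can be read as a first-order "neuron" whose coefficients are taken directly from the state-space matrices, with no training involved:

```python
import numpy as np

# Toy LTI system dx/dt = A x + B u, y = C x: each state x_i acts as a
# first-order "neuron" dx_i/dt = sum_j A[i, j] x_j + B[i] u(t), wired
# directly from the matrices rather than learned from data (illustration only).
A = np.array([[-1.0, 2.0],
              [ 0.0, -3.0]])
B = np.array([1.0, 0.5])
C = np.array([1.0, 1.0])

def simulate(u, dt=1e-3, t_end=5.0):
    """Integrate the neuron ODEs with explicit Euler and return the output y(t)."""
    n_steps = int(t_end / dt)
    x = np.zeros(2)
    ys = np.empty(n_steps)
    for k in range(n_steps):
        x = x + dt * (A @ x + B * u(k * dt))   # first-order neuron update
        ys[k] = C @ x
    return ys

y = simulate(lambda t: np.sin(2.0 * t))        # response to a sinusoidal input
```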
arXiv Detail & Related papers (2024-03-24T16:16:41Z)
- Mechanistic Neural Networks for Scientific Machine Learning [58.99592521721158]
We present Mechanistic Neural Networks, a neural network design for machine learning applications in the sciences.
It incorporates a new Mechanistic Block in standard architectures to explicitly learn governing differential equations as representations.
Central to our approach is a novel Relaxed Linear Programming solver (NeuRLP) inspired by a technique that reduces solving linear ODEs to solving linear programs.
arXiv Detail & Related papers (2024-02-20T15:23:24Z)
- Deep Learning-based surrogate models for parametrized PDEs: handling geometric variability through graph neural networks [0.0]
This work explores the potential usage of graph neural networks (GNNs) for the simulation of time-dependent PDEs.
We propose a systematic strategy to build surrogate models based on a data-driven time-stepping scheme.
We show that GNNs can provide a valid alternative to traditional surrogate models in terms of computational efficiency and generalization to new scenarios.
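A data-driven time-stepping surrogate of this kind could be sketched as follows; the encode-process-decode layout, the single message-passing layer, and the explicit update u_{n+1} = u_n + dt * GNN(u_n) are our illustrative assumptions rather than the scheme of that paper:

```python
import torch
import torch.nn as nn

class MessagePassingStep(nn.Module):
    """One message-passing layer over a mesh graph (illustrative sketch)."""
    def __init__(self, n_feat=16):
        super().__init__()
        self.edge_mlp = nn.Sequential(nn.Linear(2 * n_feat, n_feat), nn.ReLU())
        self.node_mlp = nn.Sequential(nn.Linear(2 * n_feat, n_feat), nn.ReLU())

    def forward(self, h, edge_index):
        src, dst = edge_index                                # sender/receiver node ids
        msg = self.edge_mlp(torch.cat([h[src], h[dst]], dim=1))
        agg = torch.zeros_like(h).index_add_(0, dst, msg)    # sum incoming messages
        return self.node_mlp(torch.cat([h, agg], dim=1))

class GNNTimeStepper(nn.Module):
    """Surrogate stepper: u_{n+1} = u_n + dt * GNN(u_n), usable on any mesh graph."""
    def __init__(self, n_field=1, n_feat=16):
        super().__init__()
        self.encode = nn.Linear(n_field, n_feat)
        self.process = MessagePassingStep(n_feat)
        self.decode = nn.Linear(n_feat, n_field)

    def forward(self, u, edge_index, dt):
        return u + dt * self.decode(self.process(self.encode(u), edge_index))

# Usage on a toy 3-node mesh with bidirectional edges.
u0 = torch.randn(3, 1)
edges = torch.tensor([[0, 1, 1, 2], [1, 0, 2, 1]])
u1 = GNNTimeStepper()(u0, edges, dt=0.01)
```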
arXiv Detail & Related papers (2023-08-03T08:14:28Z)
- ConCerNet: A Contrastive Learning Based Framework for Automated Conservation Law Discovery and Trustworthy Dynamical System Prediction [82.81767856234956]
This paper proposes a new learning framework named ConCerNet to improve the trustworthiness of DNN-based dynamics modeling.
We show that our method consistently outperforms the baseline neural networks in both coordinate error and conservation metrics.
arXiv Detail & Related papers (2023-02-11T21:07:30Z)
- On Robust Numerical Solver for ODE via Self-Attention Mechanism [82.95493796476767]
We explore training efficient and robust AI-enhanced numerical solvers with a small data size by mitigating intrinsic noise disturbances.
We first analyze the ability of the self-attention mechanism to regulate noise in supervised learning and then propose a simple-yet-effective numerical solver, Attr, which introduces an additive self-attention mechanism to the numerical solution of differential equations.
arXiv Detail & Related papers (2023-02-05T01:39:21Z)
- Neural Galerkin Schemes with Active Learning for High-Dimensional Evolution Equations [44.89798007370551]
This work proposes Neural Galerkin schemes based on deep learning that generate training data with active learning for numerically solving high-dimensional partial differential equations.
Neural Galerkin schemes build on the Dirac-Frenkel variational principle to train networks by minimizing the residual sequentially over time.
We find that the active way in which the proposed Neural Galerkin schemes gather training data is key to numerically realizing the expressive power of networks in high dimensions.
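The Dirac-Frenkel idea can be made concrete with a toy, linear-in-parameters stand-in for the network ansatz (the Gaussian basis, the advection right-hand side, and the random resampling of collocation points are assumptions made purely for illustration): at each step, the parameter velocity is obtained from a least-squares fit of the residual and the parameters are then advanced in time.

```python
import numpy as np

# Toy Neural Galerkin-style sketch (illustrative, not the paper's scheme):
# approximate u(x, t) by u_theta(x) = sum_i theta_i * phi_i(x) with Gaussian
# basis functions, and advance theta(t) by minimizing the Dirac-Frenkel
# residual || (d u_theta / d theta) theta_dot - f(u_theta) || in a
# least-squares sense over freshly sampled collocation points at every step.
centers = np.linspace(-3.0, 3.0, 10)

def phi(x):       # basis functions at points x: shape (len(x), 10)
    return np.exp(-(x[:, None] - centers) ** 2)

def dphi_dx(x):   # their spatial derivatives (analytic)
    return -2.0 * (x[:, None] - centers) * phi(x)

c, dt = 1.0, 1e-3                      # advection speed and time-step size
theta = np.exp(-centers ** 2)          # coefficients of a bump-shaped initial state

for step in range(1000):
    x = np.random.uniform(-3.0, 3.0, 200)   # resampled points (crude stand-in for active learning)
    J = phi(x)                               # d u_theta / d theta at the sample points
    rhs = -c * (dphi_dx(x) @ theta)          # f(u_theta) = -c * du/dx (advection)
    theta_dot, *_ = np.linalg.lstsq(J, rhs, rcond=None)
    theta = theta + dt * theta_dot           # explicit Euler in parameter space
```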
arXiv Detail & Related papers (2022-03-02T19:09:52Z)
- Supervised DKRC with Images for Offline System Identification [77.34726150561087]
Modern dynamical systems are becoming increasingly non-linear and complex.
There is a need for a framework to model these systems in a compact and comprehensive representation for prediction and control.
Our approach learns a set of basis functions for such a representation using supervised learning.
arXiv Detail & Related papers (2021-09-06T04:39:06Z)
- A novel Deep Neural Network architecture for non-linear system identification [78.69776924618505]
We present a novel Deep Neural Network (DNN) architecture for non-linear system identification.
Inspired by fading memory systems, we introduce inductive bias (on the architecture) and regularization (on the loss function).
This architecture allows for automatic complexity selection based solely on available data.
arXiv Detail & Related papers (2021-06-06T10:06:07Z)
- Liquid Time-constant Networks [117.57116214802504]
We introduce a new class of time-continuous recurrent neural network models.
Instead of declaring a learning system's dynamics by implicit nonlinearities, we construct networks of linear first-order dynamical systems.
These neural networks exhibit stable and bounded behavior and yield superior expressivity within the family of neural ordinary differential equations.
arXiv Detail & Related papers (2020-06-08T09:53:35Z)
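For the last entry above, a single liquid time-constant cell can be sketched as a first-order ODE whose effective time constant is modulated by a nonlinear gate; the parameter values, the sigmoid gate, and the Euler update below are illustrative assumptions, not the reference implementation.

```python
import numpy as np

# Minimal sketch of one liquid time-constant (LTC) cell: a linear first-order
# dynamical system whose time constant is modulated by a nonlinear gate f(x, u).
tau, A = 1.0, 1.0                # base time constant and bias of the ODE
w_x, w_u, b = 0.5, 1.5, 0.0      # gate parameters (would normally be learned)

def gate(x, u):
    return 1.0 / (1.0 + np.exp(-(w_x * x + w_u * u + b)))   # sigmoid nonlinearity

def ltc_step(x, u, dt=1e-2):
    f = gate(x, u)
    dxdt = -(1.0 / tau + f) * x + f * A   # state-dependent effective time constant
    return x + dt * dxdt                  # explicit Euler update

x, trajectory = 0.0, []
for t in np.arange(0.0, 5.0, 1e-2):
    x = ltc_step(x, np.sin(2.0 * t))      # drive the cell with a sinusoidal input
    trajectory.append(x)
```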