GD-VAEs: Geometric Dynamic Variational Autoencoders for Learning Nonlinear Dynamics and Dimension Reductions
- URL: http://arxiv.org/abs/2206.05183v3
- Date: Sun, 08 Dec 2024 03:01:40 GMT
- Title: GD-VAEs: Geometric Dynamic Variational Autoencoders for Learning Nonlinear Dynamics and Dimension Reductions
- Authors: Ryan Lopez, Paul J. Atzberger
- Abstract summary: We develop data-driven methods to learn parsimonious representations of nonlinear dynamics from observations.
The approaches learn nonlinear state-space models of the dynamics for general manifold latent spaces.
Motivated by problems arising in parameterized PDEs and physics, we investigate the performance of our methods on tasks for learning reduced dimensional representations.
- Abstract: We develop data-driven methods incorporating geometric and topological information to learn parsimonious representations of nonlinear dynamics from observations. The approaches learn nonlinear state-space models of the dynamics for general manifold latent spaces using training strategies related to Variational Autoencoders (VAEs). Our methods are referred to as Geometric Dynamic (GD) Variational Autoencoders (GD-VAEs). We learn encoders and decoders for the system states and evolution based on deep neural network architectures that include general Multilayer Perceptrons (MLPs), Convolutional Neural Networks (CNNs), and other architectures. Motivated by problems arising in parameterized PDEs and physics, we investigate the performance of our methods on tasks for learning reduced dimensional representations of the nonlinear Burgers Equations, Constrained Mechanical Systems, and spatial fields of Reaction-Diffusion Systems. GD-VAEs provide methods that can be used to obtain representations in manifold latent spaces for diverse learning tasks involving dynamics.
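The encode-evolve-decode structure described in the abstract can be sketched in a minimal form. This is an illustrative toy, not the authors' implementation: the linear maps stand in for the MLP/CNN encoder, decoder, and latent evolution networks, and all dimensions and loss weights are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: observed state x_t in R^8, latent z_t in R^2.
d_obs, d_lat = 8, 2

# Linear stand-ins for the neural encoder, decoder, and latent
# evolution maps (the paper uses MLPs/CNNs; linear maps keep the
# sketch minimal).
W_enc = 0.1 * rng.normal(size=(d_lat, d_obs))  # encoder mean
W_dec = 0.1 * rng.normal(size=(d_obs, d_lat))  # decoder
W_dyn = 0.1 * rng.normal(size=(d_lat, d_lat))  # latent step z_{t+1} = f(z_t)

def encode(x):
    return W_enc @ x          # posterior mean mu(x)

def decode(z):
    return W_dec @ z          # reconstruction of the observed state

def evolve(z):
    return W_dyn @ z          # one latent time step

def loss(x_t, x_next, beta=1e-3):
    """VAE-style training objective: reconstruct x_t, predict x_next
    through the latent dynamics, and regularize the latent code."""
    z_t = encode(x_t)
    recon = np.sum((decode(z_t) - x_t) ** 2)            # reconstruction
    dyn = np.sum((decode(evolve(z_t)) - x_next) ** 2)   # dynamics prediction
    reg = beta * np.sum(z_t ** 2)                       # crude KL-like term
    return recon + dyn + reg

# One pair of consecutive states from a (pretend) trajectory.
x_t = rng.normal(size=d_obs)
x_next = x_t
total = loss(x_t, x_next)
print(total)
```

In the actual method the latent space can be a general manifold (e.g. a sphere or torus), which changes how `encode` and `evolve` are parameterized; the scalar loss structure above is only a flat-space caricature.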
Related papers
- Generalized Factor Neural Network Model for High-dimensional Regression [50.554377879576066]
We tackle the challenges of modeling high-dimensional data sets with latent low-dimensional structures hidden within complex, non-linear, and noisy relationships.
Our approach enables a seamless integration of concepts from non-parametric regression, factor models, and neural networks for high-dimensional regression.
arXiv Detail & Related papers (2025-02-16T23:13:55Z) - LeARN: Learnable and Adaptive Representations for Nonlinear Dynamics in System Identification [0.0]
We introduce a nonlinear system identification framework called LeARN.
It transcends the need for prior domain knowledge by learning the library of basis functions directly from data.
We validate our framework on the Neural Fly dataset, showcasing its robust adaptation capabilities.
arXiv Detail & Related papers (2024-12-16T18:03:23Z) - Neural Port-Hamiltonian Differential Algebraic Equations for Compositional Learning of Electrical Networks [20.12750360095627]
We develop compositional learning algorithms for coupled dynamical systems.
We use neural networks to parametrize unknown terms in the differential and algebraic components of a neural port-Hamiltonian DAE (N-PHDAE).
We train individual N-PHDAE models for separate grid components before coupling them to accurately predict the behavior of larger-scale networks.
arXiv Detail & Related papers (2024-12-15T15:13:11Z) - You are out of context! [0.0]
New data can act as forces stretching, compressing, or twisting the geometric relationships learned by a model.
We propose a novel drift detection methodology for machine learning (ML) models based on the concept of "deformation" in the vector space representation of data.
arXiv Detail & Related papers (2024-11-04T10:17:43Z) - AI-Aided Kalman Filters [65.35350122917914]
The Kalman filter (KF) and its variants are among the most celebrated algorithms in signal processing.
Recent developments illustrate the possibility of fusing deep neural networks (DNNs) with classic Kalman-type filtering.
This article provides a tutorial-style overview of design approaches for incorporating AI in aiding KF-type algorithms.
arXiv Detail & Related papers (2024-10-16T06:47:53Z) - SINDyG: Sparse Identification of Nonlinear Dynamical Systems from Graph-Structured Data [0.27624021966289597]
We develop a new method called Sparse Identification of Dynamical Systems from Graph-structured data (SINDyG).
SINDyG incorporates the network structure into sparse regression to identify model parameters that explain the underlying network dynamics.
Our experiments validate the improved accuracy and simplicity of discovered network dynamics.
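The sparse-regression step behind SINDy-type identification can be illustrated on a toy scalar system (this is a generic SINDy sketch on a hypothetical system, not the graph-structured SINDyG formulation, and it assumes the time derivatives are available):

```python
import numpy as np

# Toy system: dx/dt = -2x, sampled at 50 states.
x = np.linspace(-2.0, 2.0, 50)
dxdt = -2.0 * x                 # time derivatives (known exactly here)

# Candidate function library Theta(x) = [1, x, x^2].
Theta = np.column_stack([np.ones_like(x), x, x**2])

# Least squares fit dxdt ~= Theta @ xi, then one hard-thresholding
# pass (the core of sequentially thresholded least squares).
xi, *_ = np.linalg.lstsq(Theta, dxdt, rcond=None)
xi[np.abs(xi) < 0.1] = 0.0

print(xi)   # recovers [0., -2., 0.]: only the term -2x survives
```

SINDyG's contribution, per the summary above, is to build the candidate library and regression so that they respect the underlying network structure, rather than treating each node's dynamics independently.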
arXiv Detail & Related papers (2024-09-02T17:51:37Z) - Learning System Dynamics without Forgetting [60.08612207170659]
Predicting trajectories of systems with unknown dynamics is crucial in various research fields, including physics and biology.
We present a novel framework of Mode-switching Graph ODE (MS-GODE), which can continually learn varying dynamics.
We construct a novel benchmark of biological dynamic systems, featuring diverse systems with disparate dynamics.
arXiv Detail & Related papers (2024-06-30T14:55:18Z) - Mechanistic Neural Networks for Scientific Machine Learning [58.99592521721158]
We present Mechanistic Neural Networks, a neural network design for machine learning applications in the sciences.
It incorporates a new Mechanistic Block in standard architectures to explicitly learn governing differential equations as representations.
Central to our approach is a novel Relaxed Linear Programming solver (NeuRLP) inspired by a technique that reduces solving linear ODEs to solving linear programs.
arXiv Detail & Related papers (2024-02-20T15:23:24Z) - ConCerNet: A Contrastive Learning Based Framework for Automated Conservation Law Discovery and Trustworthy Dynamical System Prediction [82.81767856234956]
This paper proposes a new learning framework named ConCerNet to improve the trustworthiness of the DNN based dynamics modeling.
We show that our method consistently outperforms the baseline neural networks in both coordinate error and conservation metrics.
arXiv Detail & Related papers (2023-02-11T21:07:30Z) - Constructing Neural Network-Based Models for Simulating Dynamical Systems [59.0861954179401]
Data-driven modeling is an alternative paradigm that seeks to learn an approximation of the dynamics of a system using observations of the true system.
This paper provides a survey of the different ways to construct models of dynamical systems using neural networks.
In addition to the basic overview, we review the related literature and outline the most significant challenges from numerical simulations that this modeling paradigm must overcome.
arXiv Detail & Related papers (2021-11-02T10:51:42Z) - Variational Autoencoders for Learning Nonlinear Dynamics of Physical Systems [0.0]
We develop data-driven methods for incorporating physical information into priors to learn parsimonious representations of nonlinear systems.
Our approach is based on Variational Autoencoders (VAEs) for learning nonlinear state-space models from observations.
arXiv Detail & Related papers (2020-12-07T05:00:22Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.