Generative learning for nonlinear dynamics
- URL: http://arxiv.org/abs/2311.04128v1
- Date: Tue, 7 Nov 2023 16:53:56 GMT
- Title: Generative learning for nonlinear dynamics
- Authors: William Gilpin
- Abstract summary: Generative machine learning models create realistic outputs far beyond their training data.
These successes suggest that generative models learn to effectively parametrize and sample arbitrarily complex distributions.
We aim to connect these classical works to emerging themes in large-scale generative statistical learning.
- Score: 7.6146285961466
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Modern generative machine learning models demonstrate surprising ability to
create realistic outputs far beyond their training data, such as photorealistic
artwork, accurate protein structures, or conversational text. These successes
suggest that generative models learn to effectively parametrize and sample
arbitrarily complex distributions. Beginning half a century ago, foundational
works in nonlinear dynamics used tools from information theory to infer
properties of chaotic attractors from time series, motivating the development
of algorithms for parametrizing chaos in real datasets. In this perspective, we
aim to connect these classical works to emerging themes in large-scale
generative statistical learning. We first consider classical attractor
reconstruction, which mirrors constraints on latent representations learned by
state space models of time series. We next revisit early efforts to use
symbolic approximations to compare minimal discrete generators underlying
complex processes, a problem relevant to modern efforts to distill and
interpret black-box statistical models. Emerging interdisciplinary works bridge
nonlinear dynamics and learning theory, such as operator-theoretic methods for
complex fluid flows, or detection of broken detailed balance in biological
datasets. We anticipate that future machine learning techniques may revisit
other classical concepts from nonlinear dynamics, such as transinformation
decay and complexity-entropy tradeoffs.
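The classical attractor reconstruction mentioned in the abstract can be made concrete with a short delay-coordinate (Takens) embedding sketch. The snippet below is a minimal illustration rather than code from the paper; the helper names (lorenz_x, delay_embed), the Lorenz signal, and the choices of delay tau and embedding dimension dim are assumptions picked for demonstration (in practice the delay and dimension are often chosen with mutual-information and false-nearest-neighbor heuristics).

```python
# Minimal delay-coordinate (Takens) embedding sketch: reconstruct a proxy for a
# chaotic attractor from a single measured coordinate. All parameter values are
# illustrative assumptions, not values taken from the paper.
import numpy as np

def lorenz_x(n_steps=40000, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Integrate the Lorenz system with forward Euler and keep only x(t)."""
    state = np.array([1.0, 1.0, 1.0])
    xs = np.empty(n_steps)
    for i in range(n_steps):
        x, y, z = state
        state = state + dt * np.array([sigma * (y - x),
                                       x * (rho - z) - y,
                                       x * y - beta * z])
        xs[i] = state[0]
    return xs

def delay_embed(series, dim=3, tau=20):
    """Stack lagged copies of a scalar series into delay vectors
    [s(t), s(t - tau), ..., s(t - (dim - 1) * tau)]."""
    n = len(series) - (dim - 1) * tau
    columns = [series[(dim - 1 - k) * tau : (dim - 1 - k) * tau + n]
               for k in range(dim)]
    return np.column_stack(columns)

x = lorenz_x()
points = delay_embed(x, dim=3, tau=20)  # (n, 3) points tracing a reconstructed attractor
print(points.shape)
```

Under the conditions of Takens' theorem, the embedded points trace an object diffeomorphic to the underlying attractor, which is the sense in which a single measured coordinate constrains the unobserved state, much as the abstract argues latent representations in state-space models of time series are constrained.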
Related papers
- Deep Learning for Koopman Operator Estimation in Idealized Atmospheric Dynamics [2.2489531925874013]
Deep learning is revolutionizing weather forecasting, with new data-driven models achieving accuracy on par with operational physical models for medium-term predictions.
These models often lack interpretability, making their underlying dynamics difficult to understand and explain.
This paper proposes methodologies to estimate the Koopman operator, providing a linear representation of complex nonlinear dynamics to enhance the transparency of data-driven models (a minimal dynamic mode decomposition sketch of this idea appears after this list).
arXiv Detail & Related papers (2024-09-10T13:56:54Z)
- Relational Learning in Pre-Trained Models: A Theory from Hypergraph Recovery Perspective [60.64922606733441]
We introduce a mathematical model that formalizes relational learning as hypergraph recovery to study pre-training of Foundation Models (FMs).
In our framework, the world is represented as a hypergraph, with data abstracted as random samples from hyperedges. We theoretically examine the feasibility of a Pre-Trained Model (PTM) to recover this hypergraph and analyze the data efficiency in a minimax near-optimal style.
arXiv Detail & Related papers (2024-06-17T06:20:39Z)
- eXponential FAmily Dynamical Systems (XFADS): Large-scale nonlinear Gaussian state-space modeling [9.52474299688276]
We introduce a low-rank structured variational autoencoder framework for nonlinear state-space graphical models.
We show that our approach consistently learns a more predictive generative model.
arXiv Detail & Related papers (2024-03-03T02:19:49Z)
- Neural Koopman prior for data assimilation [7.875955593012905]
We use a neural network architecture to embed dynamical systems in latent spaces.
We introduce methods that enable training such a model for long-term continuous reconstruction.
The potential for self-supervised learning is also demonstrated, as we show the promising use of trained dynamical models as priors for variational data assimilation techniques.
arXiv Detail & Related papers (2023-09-11T09:04:36Z)
- Learning Differential Operators for Interpretable Time Series Modeling [34.32259687441212]
We propose a learning framework that can automatically obtain interpretable PDE models from sequential data.
Our model can provide valuable interpretability and achieve comparable performance to state-of-the-art models.
arXiv Detail & Related papers (2022-09-03T20:14:31Z)
- Physics-Inspired Temporal Learning of Quadrotor Dynamics for Accurate Model Predictive Trajectory Tracking [76.27433308688592]
Accurately modeling a quadrotor's system dynamics is critical for guaranteeing agile, safe, and stable navigation.
We present a novel Physics-Inspired Temporal Convolutional Network (PI-TCN) approach to learning a quadrotor's system dynamics purely from robot experience.
Our approach combines the expressive power of sparse temporal convolutions and dense feed-forward connections to make accurate system predictions.
arXiv Detail & Related papers (2022-06-07T13:51:35Z)
- Leveraging the structure of dynamical systems for data-driven modeling [111.45324708884813]
We consider the impact of the training set and its structure on the quality of the long-term prediction.
We show how an informed design of the training set, based on invariants of the system and the structure of the underlying attractor, significantly improves the resulting models.
arXiv Detail & Related papers (2021-12-15T20:09:20Z)
- Learning Low-Dimensional Quadratic-Embeddings of High-Fidelity Nonlinear Dynamics using Deep Learning [9.36739413306697]
Learning dynamical models from data plays a vital role in engineering design, optimization, and predictions.
We use deep learning to identify low-dimensional embeddings for high-fidelity dynamical systems.
arXiv Detail & Related papers (2021-11-25T10:09:00Z)
- Causal Navigation by Continuous-time Neural Networks [108.84958284162857]
We propose a theoretical and experimental framework for learning causal representations using continuous-time neural networks.
We evaluate our method in the context of visual-control learning of drones over a series of complex tasks.
arXiv Detail & Related papers (2021-06-15T17:45:32Z)
- Using Data Assimilation to Train a Hybrid Forecast System that Combines Machine-Learning and Knowledge-Based Components [52.77024349608834]
We consider the problem of data-assisted forecasting of chaotic dynamical systems when the available data consists of noisy partial measurements.
We show that by using partial measurements of the state of the dynamical system, we can train a machine learning model to improve predictions made by an imperfect knowledge-based model.
arXiv Detail & Related papers (2021-02-15T19:56:48Z)
- Model-Based Deep Learning [155.063817656602]
Signal processing, communications, and control have traditionally relied on classical statistical modeling techniques.
Deep neural networks (DNNs) use generic architectures which learn to operate from data, and demonstrate excellent performance.
We are interested in hybrid techniques that combine principled mathematical models with data-driven systems to benefit from the advantages of both approaches.
arXiv Detail & Related papers (2020-12-15T16:29:49Z)
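Two of the related papers above (Koopman operator estimation for atmospheric dynamics and the neural Koopman prior) center on data-driven approximations of the Koopman operator, which the main abstract also highlights through operator-theoretic methods for complex fluid flows. A standard baseline for such approximations is dynamic mode decomposition (DMD). The sketch below is a generic, minimal DMD on synthetic toy data, not the methodology of any listed paper; the function name dmd, the 50-dimensional measurement, the rank truncation, and the latent signals are assumptions made purely for illustration.

```python
# Minimal dynamic mode decomposition (DMD) sketch: fit a linear one-step map to
# snapshot data, a common finite-dimensional approximation of the Koopman operator.
# Toy data and parameter choices are illustrative assumptions.
import numpy as np

def dmd(snapshots, rank=None):
    """Fit A with snapshots[:, 1:] ~= A @ snapshots[:, :-1]; return eigenvalues and modes."""
    X, Y = snapshots[:, :-1], snapshots[:, 1:]
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    if rank is not None:
        U, s, Vh = U[:, :rank], s[:rank], Vh[:rank]
    # Project the one-step map onto the leading POD subspace.
    A_tilde = U.conj().T @ Y @ Vh.conj().T @ np.diag(1.0 / s)
    eigvals, W = np.linalg.eig(A_tilde)
    modes = Y @ Vh.conj().T @ np.diag(1.0 / s) @ W
    return eigvals, modes

# Toy data: two damped oscillations (4 real latent states) observed through a
# random 50-dimensional linear measurement.
t = np.linspace(0.0, 10.0, 201)
z = np.vstack([np.exp((-0.05 + 2.0j) * t), np.exp((-0.2 + 5.0j) * t)])
latent = np.vstack([z.real, z.imag])
mixing = np.random.default_rng(0).standard_normal((50, 4))
snapshots = mixing @ latent

eigvals, _ = dmd(snapshots, rank=4)
dt = t[1] - t[0]
print(np.sort_complex(np.log(eigvals) / dt))  # approximately -0.2 ± 5j and -0.05 ± 2j
```

Deep-learning variants such as those in the listed Koopman papers typically replace these fixed linear observables with a learned encoder, so that the approximately linear dynamics act in a latent space rather than directly on raw measurements.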