Generative Learning for Slow Manifolds and Bifurcation Diagrams
- URL: http://arxiv.org/abs/2504.20375v1
- Date: Tue, 29 Apr 2025 02:38:44 GMT
- Title: Generative Learning for Slow Manifolds and Bifurcation Diagrams
- Authors: Ellis R. Crabtree, Dimitris G. Giovanis, Nikolaos Evangelou, Juan M. Bello-Rivas, Ioannis G. Kevrekidis
- Abstract summary: Conditional score-based generative models (cSGMs) have demonstrated capabilities in generating plausible data from target distributions conditioned on some given label. We present a framework for using cSGMs to quickly initialize on a low-dimensional (reduced-order) slow manifold of a multi-time-scale system. This conditional sampling can help uncover the geometry of the reduced slow manifold and/or approximately ``fill in'' missing segments of steady states in a bifurcation diagram.
- Score: 0.35587965024910395
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In dynamical systems characterized by separation of time scales, the approximation of so-called ``slow manifolds'', on which the long-term dynamics lie, is a useful step for model reduction. Initializing on such slow manifolds is a useful step in modeling, since it circumvents fast transients, and is crucial in multiscale algorithms alternating between fine-scale (fast) and coarser-scale (slow) simulations. In a similar spirit, when one studies the infinite-time dynamics of systems depending on parameters, the system attractors (e.g., its steady states) lie on bifurcation diagrams. Sampling these manifolds gives us representative attractors (here, steady states of ODEs or PDEs) at different parameter values. Algorithms for the systematic construction of these manifolds are required parts of the ``traditional'' numerical nonlinear dynamics toolkit. In more recent years, as the field of Machine Learning develops, conditional score-based generative models (cSGMs) have demonstrated capabilities in generating plausible data from target distributions that are conditioned on some given label. It is tempting to exploit such generative models to produce samples of data distributions conditioned on some quantity of interest (QoI). In this work, we present a framework for using cSGMs to quickly (a) initialize on a low-dimensional (reduced-order) slow manifold of a multi-time-scale system consistent with desired value(s) of a QoI (a ``label'') on the manifold, and (b) approximate steady states in a bifurcation diagram consistent with a (new, out-of-sample) parameter value. This conditional sampling can help uncover the geometry of the reduced slow manifold and/or approximately ``fill in'' missing segments of steady states in a bifurcation diagram.
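The conditional-sampling idea in the abstract can be illustrated in a toy setting. The sketch below is not the authors' implementation: it replaces a trained conditional score network with an analytic score for a Gaussian p(x | label) whose mean traces a hypothetical one-dimensional "slow manifold", and draws a sample with annealed Langevin dynamics. All names (`conditional_score`, `annealed_langevin_sample`), the manifold parameterization, and the noise schedule are assumptions made for illustration only.

```python
import numpy as np

def conditional_score(x, label, sigma):
    # Stand-in for a trained conditional score network s_theta(x, label, sigma).
    # Toy target: p(x | label) is a 2-D Gaussian whose mean traces a
    # hypothetical 1-D "slow manifold" parameterized by the label; for
    # Gaussians the noise-perturbed score is available in closed form.
    mean = np.array([label, np.sin(label)])
    var = 0.05 + sigma**2  # data variance plus noise-level variance
    return (mean - x) / var

def annealed_langevin_sample(label, sigmas, n_steps=100, eps=2e-5, rng=None):
    # Annealed Langevin dynamics: follow the conditional score at a sequence
    # of decreasing noise levels to sample from p(x | label).
    rng = np.random.default_rng(rng)
    x = rng.standard_normal(2)  # start from a broad prior
    for sigma in sigmas:
        step = eps * (sigma / sigmas[-1]) ** 2  # level-dependent step size
        for _ in range(n_steps):
            z = rng.standard_normal(2)
            x = x + step * conditional_score(x, label, sigma) + np.sqrt(2 * step) * z
    return x

# Conditioning on a label yields a point near the corresponding manifold location.
sigmas = np.geomspace(1.0, 0.01, 10)
x = annealed_langevin_sample(label=1.5, sigmas=sigmas, rng=0)
```

In the paper's setting, the label would be a QoI value on the slow manifold or a bifurcation parameter, and the analytic score above would be replaced by a network trained with conditional denoising score matching.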
Related papers
- Latent Space Energy-based Neural ODEs [73.01344439786524]
This paper introduces novel deep dynamical models designed to represent continuous-time sequences. We train the model using maximum likelihood estimation with Markov chain Monte Carlo. Experimental results on oscillating systems, videos and real-world state sequences (MuJoCo) demonstrate that our model with the learnable energy-based prior outperforms existing counterparts.
arXiv Detail & Related papers (2024-09-05T18:14:22Z)
- On the Trajectory Regularity of ODE-based Diffusion Sampling [79.17334230868693]
Diffusion-based generative models use differential equations to establish a smooth connection between a complex data distribution and a tractable prior distribution.
In this paper, we identify several intriguing trajectory properties in the ODE-based sampling process of diffusion models.
arXiv Detail & Related papers (2024-05-18T15:59:41Z)
- Enhancing Computational Efficiency in Multiscale Systems Using Deep Learning of Coordinates and Flow Maps [0.0]
This paper showcases how deep learning techniques can be used to develop a precise time-stepping approach for multiscale systems.
The resulting framework achieves state-of-the-art predictive accuracy at lower computational cost.
arXiv Detail & Related papers (2024-04-28T14:05:13Z)
- Machine-Learned Closure of URANS for Stably Stratified Turbulence: Connecting Physical Timescales & Data Hyperparameters of Deep Time-Series Models [0.0]
We develop time-series machine learning (ML) methods for closure modeling of the Unsteady Reynolds Averaged Navier Stokes equations.
We consider decaying stably stratified turbulence (SST) that is homogeneous and stratified by a uniform density gradient.
We find that the ratio of timescales characterizing the minimum information the ML models require to accurately capture the SST dynamics corresponds to the Reynolds number of the flow.
arXiv Detail & Related papers (2024-04-24T18:58:00Z)
- Micro-Macro Consistency in Multiscale Modeling: Score-Based Model Assisted Sampling of Fast/Slow Dynamical Systems [0.0]
In the study of physics-based multi-time-scale dynamical systems, techniques have been developed for enhancing sampling.
In the field of Machine Learning, a generic goal of generative models is to sample from a target density, after training on empirical samples from this density.
In this work, we show that SGMs can be used in such a coupling framework to improve sampling in multiscale dynamical systems.
arXiv Detail & Related papers (2023-12-10T00:46:37Z)
- Capturing dynamical correlations using implicit neural representations [85.66456606776552]
We develop an artificial intelligence framework which combines a neural network trained to mimic simulated data from a model Hamiltonian with automatic differentiation to recover unknown parameters from experimental data.
In doing so, we illustrate the ability to build and train a differentiable model only once, which then can be applied in real-time to multi-dimensional scattering data.
arXiv Detail & Related papers (2023-04-08T07:55:36Z)
- Numerically Stable Sparse Gaussian Processes via Minimum Separation using Cover Trees [57.67528738886731]
We study the numerical stability of scalable sparse approximations based on inducing points.
For low-dimensional tasks such as geospatial modeling, we propose an automated method for computing inducing points satisfying these conditions.
arXiv Detail & Related papers (2022-10-14T15:20:17Z)
- Markov Chain Monte Carlo for Continuous-Time Switching Dynamical Systems [26.744964200606784]
We propose a novel inference algorithm utilizing a Markov Chain Monte Carlo approach.
The presented Gibbs sampler efficiently yields samples from the exact continuous-time posterior processes.
arXiv Detail & Related papers (2022-05-18T09:03:00Z)
- Riemannian Score-Based Generative Modeling [56.20669989459281]
Score-based generative models (SGMs) have demonstrated remarkable empirical performance.
Current SGMs make the underlying assumption that the data is supported on a Euclidean manifold with flat geometry.
This prevents the use of these models for applications in robotics, geoscience or protein modeling.
arXiv Detail & Related papers (2022-02-06T11:57:39Z)
- Traversing Time with Multi-Resolution Gaussian Process State-Space Models [17.42262122708566]
We propose a novel Gaussian process state-space architecture composed of multiple components, each trained on a different resolution, to model effects on different timescales.
We benchmark our novel method on semi-synthetic data and on an engine modeling task.
In both experiments, our approach compares favorably against its state-of-the-art alternatives that operate on a single time-scale only.
arXiv Detail & Related papers (2021-12-06T18:39:27Z)
- Closed-form Continuous-Depth Models [99.40335716948101]
Continuous-depth neural models rely on advanced numerical differential equation solvers.
We present a new family of models, termed Closed-form Continuous-depth (CfC) networks, that are simple to describe and at least one order of magnitude faster.
arXiv Detail & Related papers (2021-06-25T22:08:51Z)
- Anomaly Detection of Time Series with Smoothness-Inducing Sequential Variational Auto-Encoder [59.69303945834122]
We present a Smoothness-Inducing Sequential Variational Auto-Encoder (SISVAE) model for robust estimation and anomaly detection of time series.
Our model parameterizes mean and variance for each time-stamp with flexible neural networks.
We show the effectiveness of our model on both synthetic datasets and public real-world benchmarks.
arXiv Detail & Related papers (2021-02-02T06:15:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.