Small-data Reduced Order Modeling of Chaotic Dynamics through SyCo-AE:
Synthetically Constrained Autoencoders
- URL: http://arxiv.org/abs/2305.08036v1
- Date: Sun, 14 May 2023 00:42:24 GMT
- Title: Small-data Reduced Order Modeling of Chaotic Dynamics through SyCo-AE:
Synthetically Constrained Autoencoders
- Authors: Andrey A. Popov, Renato Zanetti
- Abstract summary: Data-driven reduced order modeling of chaotic dynamics can result in systems that either dissipate or diverge catastrophically.
We aim to solve this problem by imposing a synthetic constraint in the reduced order space.
The synthetic constraint gives our reduced order model the freedom to remain fully non-linear and highly unstable while preventing divergence.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Data-driven reduced order modeling of chaotic dynamics can result in systems
that either dissipate or diverge catastrophically. Leveraging non-linear
dimensionality reduction of autoencoders and the freedom of non-linear operator
inference with neural-networks, we aim to solve this problem by imposing a
synthetic constraint in the reduced order space. The synthetic constraint
allows our reduced order model both the freedom to remain fully non-linear and
highly unstable while preventing divergence. We illustrate the methodology with
the classical 40-variable Lorenz '96 equations, showing that our methodology is
capable of producing medium-to-long range forecasts with lower error using less
data.
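The paper's SyCo-AE method itself is not reproduced here, but its benchmark system is standard. A minimal sketch of the 40-variable Lorenz '96 equations (dx_i/dt = (x_{i+1} - x_{i-2}) x_{i-1} - x_i + F with cyclic indices and forcing F = 8), integrated with classical RK4; the function names and step size are illustrative choices, not taken from the paper:

```python
import numpy as np

def lorenz96_rhs(x, forcing=8.0):
    """Right-hand side of the Lorenz '96 ODE:
    dx_i/dt = (x_{i+1} - x_{i-2}) * x_{i-1} - x_i + F, cyclic in i."""
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + forcing

def integrate_rk4(x0, dt, n_steps, forcing=8.0):
    """Integrate the ODE with the classical fourth-order Runge-Kutta scheme."""
    traj = np.empty((n_steps + 1, x0.size))
    traj[0] = x0
    x = x0.copy()
    for n in range(n_steps):
        k1 = lorenz96_rhs(x, forcing)
        k2 = lorenz96_rhs(x + 0.5 * dt * k1, forcing)
        k3 = lorenz96_rhs(x + 0.5 * dt * k2, forcing)
        k4 = lorenz96_rhs(x + dt * k3, forcing)
        x = x + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        traj[n + 1] = x
    return traj

# 40-variable system started from a small perturbation of the
# unstable equilibrium x_i = F, as is conventional for this benchmark.
x0 = 8.0 * np.ones(40)
x0[0] += 0.01
trajectory = integrate_rk4(x0, dt=0.05, n_steps=200)
print(trajectory.shape)  # (201, 40)
```

Trajectories of this system are the kind of chaotic data a reduced order model would be trained on and forecast against.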
Related papers
- Bridging Autoencoders and Dynamic Mode Decomposition for Reduced-order Modeling and Control of PDEs [12.204795159651589]
This paper explores a deep autoencoding learning method for reduced-order modeling and control of dynamical systems governed by partial differential equations (PDEs).
We first show that an objective for learning a linear autoencoding reduced-order model can be formulated to yield a solution closely resembling the result obtained through the dynamic mode decomposition with control algorithm.
We then extend this linear autoencoding architecture to a deep autoencoding framework, enabling the development of a nonlinear reduced-order model.
arXiv Detail & Related papers (2024-09-09T22:56:40Z) - Constrained Synthesis with Projected Diffusion Models [47.56192362295252]
This paper introduces an approach that endows generative diffusion processes with the ability to satisfy and certify compliance with constraints and physical principles.
The proposed method recasts the traditional generative diffusion process as a constrained-distribution problem to ensure adherence to the constraints.
arXiv Detail & Related papers (2024-02-05T22:18:16Z) - Data-driven Nonlinear Model Reduction using Koopman Theory: Integrated
Control Form and NMPC Case Study [56.283944756315066]
We propose generic model structures combining delay-coordinate encoding of measurements and full-state decoding to integrate reduced Koopman modeling and state estimation.
A case study demonstrates that our approach provides accurate control models and enables real-time capable nonlinear model predictive control of a high-purity cryogenic distillation column.
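The delay-coordinate encoding mentioned above is a standard construction. A minimal sketch of a Hankel-style embedding that stacks consecutive measurements into one state (the function name and parameters are illustrative, not from the paper):

```python
import numpy as np

def delay_embed(y, n_delays):
    """Stack n_delays consecutive measurements of shape (T0, m) into a
    single delay-coordinate state per time step (a Hankel-style
    embedding), a common encoder choice in Koopman-based modeling."""
    T = y.shape[0] - n_delays + 1
    return np.stack([y[i:i + T] for i in range(n_delays)], axis=1).reshape(T, -1)

# Scalar measurement sequence of length 50, embedded with 5 delays:
y = np.sin(0.1 * np.arange(50)).reshape(-1, 1)
H = delay_embed(y, n_delays=5)
print(H.shape)  # (46, 5)
```

Each row of `H` is one delay-coordinate state; a Koopman or reduced model is then fit on these lifted states rather than on the raw scalar measurements.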
arXiv Detail & Related papers (2024-01-09T11:54:54Z) - A Pseudo-Semantic Loss for Autoregressive Models with Logical
Constraints [87.08677547257733]
Neuro-symbolic AI bridges the gap between purely symbolic and neural approaches to learning.
We show how to maximize the likelihood of a symbolic constraint w.r.t. the neural network's output distribution.
We also evaluate our approach on Sudoku and shortest-path prediction cast as autoregressive generation.
arXiv Detail & Related papers (2023-12-06T20:58:07Z) - Learning Nonlinear Projections for Reduced-Order Modeling of Dynamical
Systems using Constrained Autoencoders [0.0]
We introduce a class of nonlinear projections described by constrained autoencoder neural networks in which both the manifold and the projection fibers are learned from data.
Our architecture uses invertible activation functions and biorthogonal weight matrices to ensure that the encoder is a left inverse of the decoder.
We also introduce new dynamics-aware cost functions that promote learning of oblique projection fibers that account for fast dynamics and nonnormality.
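The left-inverse property that this architecture enforces can be illustrated in the linear special case, where the encoder is the Moore-Penrose left inverse of a tall decoder matrix (the paper's actual construction is nonlinear, using invertible activations and biorthogonal weights; this sketch only demonstrates the defining identity encode(decode(z)) = z):

```python
import numpy as np

rng = np.random.default_rng(0)

# Decoder: lift a 3-dimensional latent state to 10 dimensions with a
# tall full-rank matrix W (a linear stand-in for a nonlinear decoder).
W = rng.standard_normal((10, 3))

def decode(z):
    return W @ z

# Encoder: the Moore-Penrose left inverse (W^T W)^{-1} W^T of W, so
# that the encoder exactly inverts the decoder on the latent space.
W_pinv = np.linalg.inv(W.T @ W) @ W.T

def encode(x):
    return W_pinv @ x

z = rng.standard_normal(3)
assert np.allclose(encode(decode(z)), z)  # left-inverse property holds
```

In the linear case the composition decode(encode(x)) is then an (oblique or orthogonal) projection onto the range of W, which is the projection structure the constrained autoencoder generalizes to nonlinear manifolds.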
arXiv Detail & Related papers (2023-07-28T04:01:48Z) - Conditional Denoising Diffusion for Sequential Recommendation [62.127862728308045]
Two prominent generative models, Generative Adversarial Networks (GANs) and Variational AutoEncoders (VAEs), each have drawbacks: GANs suffer from unstable optimization, while VAEs are prone to posterior collapse and over-smoothed generations.
We present a conditional denoising diffusion model, which includes a sequence encoder, a cross-attentive denoising decoder, and a step-wise diffuser.
arXiv Detail & Related papers (2023-04-22T15:32:59Z) - Capturing dynamical correlations using implicit neural representations [85.66456606776552]
We develop an artificial intelligence framework which combines a neural network trained to mimic simulated data from a model Hamiltonian with automatic differentiation to recover unknown parameters from experimental data.
In doing so, we illustrate the ability to build and train a differentiable model only once, which then can be applied in real-time to multi-dimensional scattering data.
arXiv Detail & Related papers (2023-04-08T07:55:36Z) - Neural Abstractions [72.42530499990028]
We present a novel method for the safety verification of nonlinear dynamical models that uses neural networks to represent abstractions of their dynamics.
We demonstrate that our approach performs comparably to the mature tool Flow* on existing benchmark nonlinear models.
arXiv Detail & Related papers (2023-01-27T12:38:09Z) - Non-linear manifold ROM with Convolutional Autoencoders and Reduced
Over-Collocation method [0.0]
Non-affine parametric dependencies, nonlinearities and advection-dominated regimes of the model of interest can result in a slow Kolmogorov n-width decay.
We implement the non-linear manifold method introduced by Carlberg et al [37] with hyper-reduction achieved through reduced over-collocation and teacher-student training of a reduced decoder.
We test the methodology on a 2D non-linear conservation law and a 2D shallow water model, and compare the results with those of a purely data-driven method in which the dynamics is evolved in time with a long short-term memory network.
arXiv Detail & Related papers (2022-03-01T11:16:50Z) - Nonlinear proper orthogonal decomposition for convection-dominated flows [0.0]
We propose an end-to-end Galerkin-free model combining autoencoders with long short-term memory networks for dynamics.
Our approach not only improves the accuracy, but also significantly reduces the computational cost of training and testing.
arXiv Detail & Related papers (2021-10-15T18:05:34Z) - Non-intrusive Nonlinear Model Reduction via Machine Learning
Approximations to Low-dimensional Operators [0.0]
We propose a method that enables traditionally intrusive reduced-order models to be accurately approximated in a non-intrusive manner.
The approach approximates the low-dimensional operators associated with projection-based reduced-order models (ROMs) using modern machine-learning regression techniques.
In addition to enabling non-intrusivity, we demonstrate that the approach also leads to very low computational complexity, achieving up to a $1000\times$ reduction in run time.
arXiv Detail & Related papers (2021-06-17T17:04:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.