RRAEDy: Adaptive Latent Linearization of Nonlinear Dynamical Systems
- URL: http://arxiv.org/abs/2512.07542v1
- Date: Mon, 08 Dec 2025 13:23:12 GMT
- Title: RRAEDy: Adaptive Latent Linearization of Nonlinear Dynamical Systems
- Authors: Jad Mounayer, Sebastian Rodriguez, Jerome Tomezyk, Chady Ghnatios, Francisco Chinesta
- Abstract summary: We introduce RRAEDy, a model for learning low-dimensional dynamics in the latent space. We show that RRAEDy achieves accurate and robust predictions. Our code is open-source and available at https://github.com/JadM133/RRAEDy.
- Score: 2.4662459762262894
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Most existing latent-space models for dynamical systems require fixing the latent dimension in advance, rely on complex loss balancing to approximate linear dynamics, and do not regularize the latent variables. We introduce RRAEDy, a model that removes these limitations by discovering the appropriate latent dimension while enforcing both regularized and linearized dynamics in the latent space. Built upon Rank-Reduction Autoencoders (RRAEs), RRAEDy automatically ranks and prunes latent variables through their singular values while learning a latent Dynamic Mode Decomposition (DMD) operator that governs their temporal progression. This structure-free yet linearly constrained formulation enables the model to learn stable and low-dimensional dynamics without auxiliary losses or manual tuning. We provide a theoretical analysis demonstrating the stability of the learned operator and showcase the generality of our model by proposing an extension that handles parametric ODEs. Experiments on canonical benchmarks, including the Van der Pol oscillator, Burgers' equation, 2D Navier-Stokes, and Rotating Gaussians, show that RRAEDy achieves accurate and robust predictions. Our code is open-source and available at https://github.com/JadM133/RRAEDy. We also provide a video summarizing the main results at https://youtu.be/ox70mSSMGrM.
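To make the two ingredients above concrete, here is a minimal numpy sketch of (i) ranking and pruning latent variables by their singular values and (ii) fitting a linear DMD operator on the reduced latent trajectory. The function names and the energy-based truncation rule are illustrative assumptions, not the authors' implementation (see the repository linked above for that):

```python
import numpy as np

def latent_dmd_sketch(Z, energy=0.99):
    """Illustrative sketch (not the authors' code): rank latent snapshots
    by singular value, prune low-energy directions, and fit a linear DMD
    operator A so that z_{t+1} ~= A z_t in the reduced coordinates.
    Z: (d, T) array of latent states over T time steps."""
    # Rank latent directions by singular value; keep those carrying
    # `energy` of the spectrum (a stand-in for RRAEDy's pruning rule).
    U, s, _ = np.linalg.svd(Z, full_matrices=False)
    r = int(np.searchsorted(np.cumsum(s**2) / np.sum(s**2), energy)) + 1
    Zr = U[:, :r].T @ Z                    # (r, T) reduced trajectory

    # Least-squares DMD operator on consecutive reduced snapshots.
    X, Y = Zr[:, :-1], Zr[:, 1:]
    A = Y @ np.linalg.pinv(X)              # (r, r) linear latent dynamics
    return A, U[:, :r]

# Usage on a toy latent trajectory:
rng = np.random.default_rng(0)
Z = np.cumsum(rng.normal(size=(8, 200)), axis=1)
A, basis = latent_dmd_sketch(Z)
print(A.shape, basis.shape)
```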
Related papers
- KoopGen: Koopman Generator Networks for Representing and Predicting Dynamical Systems with Continuous Spectra [65.11254608352982]
We introduce a generator-based neural Koopman framework that models dynamics through a structured, state-dependent representation of Koopman generators. By exploiting the intrinsic Cartesian decomposition into skew-adjoint and self-adjoint components, KoopGen separates conservative transport from irreversible dissipation.
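The Cartesian decomposition mentioned here is the standard split of a generator G into a self-adjoint part S = (G + G^T)/2 and a skew-adjoint part K = (G - G^T)/2; a minimal sketch (KoopGen itself learns state-dependent generators, which this does not show):

```python
import numpy as np

def cartesian_split(G):
    """Split a generator matrix into self-adjoint (dissipative) and
    skew-adjoint (conservative) parts: G = S + K with S = S^T, K = -K^T.
    Illustrative only; not KoopGen's learned parameterization."""
    S = 0.5 * (G + G.T)   # self-adjoint: irreversible dissipation
    K = 0.5 * (G - G.T)   # skew-adjoint: conservative transport
    return S, K

G = np.random.default_rng(1).normal(size=(4, 4))
S, K = cartesian_split(G)
assert np.allclose(S + K, G) and np.allclose(K, -K.T)
```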
arXiv Detail & Related papers (2026-02-15T06:32:23Z)
- Disordered Dynamics in High Dimensions: Connections to Random Matrices and Machine Learning [52.26396748560348]
We provide an overview of high-dimensional dynamical systems driven by random matrices. We focus on applications to simple models of learning and generalization in machine learning theory.
arXiv Detail & Related papers (2026-01-03T00:12:32Z)
- Sparse-to-Field Reconstruction via Stochastic Neural Dynamic Mode Decomposition [12.812771670043212]
Many real-world systems, like wind fields and ocean currents, are dynamic and hard to model. Dynamic Mode Decomposition (DMD) provides a simple, data-driven approximation, but practical use is limited by sparse/noisy observations. We introduce NODE-DMD, a probabilistic extension of DMD that models continuous-time, nonlinear dynamics while remaining interpretable.
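For reference, the deterministic DMD baseline that NODE-DMD extends fits a linear operator A with Y ≈ AX from snapshot pairs; a textbook sketch, not the paper's probabilistic model:

```python
import numpy as np

def exact_dmd(X, Y, r=None):
    """Classic exact DMD: fit Y ~= A X via a rank-r SVD of X, then
    return DMD eigenvalues and modes of the best-fit linear operator."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    if r is not None:
        U, s, Vt = U[:, :r], s[:r], Vt[:r]
    Atilde = U.T @ Y @ Vt.T @ np.diag(1.0 / s)   # reduced operator
    eigvals, W = np.linalg.eig(Atilde)
    modes = Y @ Vt.T @ np.diag(1.0 / s) @ W      # exact DMD modes
    return eigvals, modes

# Snapshots of a decaying travelling oscillation (rank 2 by design):
t = np.linspace(0, 10, 201)
x = np.linspace(0, 1, 16)[:, None]
data = np.exp(-0.1 * t) * (np.sin(2 * t) * np.cos(np.pi * x)
                           + np.cos(2 * t) * np.sin(np.pi * x))
eigvals, modes = exact_dmd(data[:, :-1], data[:, 1:], r=2)
print(np.log(eigvals) / (t[1] - t[0]))  # ~ -0.1 +/- 2j, as constructed
```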
arXiv Detail & Related papers (2025-11-25T18:39:50Z)
- Towards Fast Coarse-graining and Equation Discovery with Foundation Inference Models [6.403678133359229]
Latent dynamics in high-dimensional recordings are often characterized by a much smaller set of effective variables. Most machine learning approaches tackle these tasks jointly by training autoencoders together with models that enforce dynamical consistency. We propose to decouple the two problems by leveraging the recently introduced Foundation Inference Models (FIMs). A proof of concept on a double-well system with semicircle diffusion, embedded into synthetic video data, illustrates the potential of this approach for fast and reusable coarse-graining pipelines.
arXiv Detail & Related papers (2025-10-14T15:17:23Z)
- Forecasting Continuous Non-Conservative Dynamical Systems in SO(3) [51.510040541600176]
We propose a novel approach to modeling the rotation of moving objects in computer vision. Our approach is agnostic to energy and momentum conservation while being robust to input noise. By learning to approximate object dynamics from noisy states during training, our model attains robust extrapolation capabilities in simulation and various real-world settings.
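The summary does not specify the model, but a standard building block for forecasting rotations is stepping a rotation matrix with the SO(3) exponential map (Rodrigues' formula); a generic sketch under that assumption:

```python
import numpy as np

def hat(w):
    """Map an angular-velocity vector to its skew-symmetric matrix."""
    return np.array([[0, -w[2], w[1]],
                     [w[2], 0, -w[0]],
                     [-w[1], w[0], 0]])

def so3_step(R, w, dt):
    """Advance a rotation matrix by angular velocity w over dt via the
    exponential map (Rodrigues' formula). A generic SO(3) integrator,
    not the paper's learned dynamics model."""
    theta = np.linalg.norm(w) * dt
    if theta < 1e-12:
        return R
    K = hat(w / np.linalg.norm(w))
    expm = np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * K @ K
    return R @ expm

R = np.eye(3)
for _ in range(100):                 # spin about z at 1 rad/s for 1 s
    R = so3_step(R, np.array([0.0, 0.0, 1.0]), 0.01)
print(np.round(R, 3))                # ~rotation by 1 rad about z
```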
arXiv Detail & Related papers (2025-08-11T09:03:10Z)
- Learning to Dissipate Energy in Oscillatory State-Space Models [51.98491034847041]
State-space models (SSMs) are a class of networks for sequence learning. We show that D-LinOSS consistently outperforms previous LinOSS methods on long-range learning tasks.
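LinOSS-style layers stack damped harmonic oscillators as a linear state-space recurrence, with D-LinOSS learning the dissipation; a single-channel sketch with our own parameter names, not the paper's discretization:

```python
import numpy as np

def damped_oscillator_scan(u, omega, zeta, dt=0.1):
    """One oscillatory SSM channel in the spirit of (D-)LinOSS: a damped
    harmonic oscillator x'' + 2*zeta*omega*x' + omega^2*x = u, stepped
    with a semi-implicit Euler update. In D-LinOSS the dissipation
    (zeta here) would be a learned parameter."""
    x, v, ys = 0.0, 0.0, []
    for ut in u:
        v += dt * (ut - 2 * zeta * omega * v - omega**2 * x)
        x += dt * v
        ys.append(x)
    return np.array(ys)

u = np.zeros(200)
u[0] = 1.0                             # impulse input
y = damped_oscillator_scan(u, omega=1.0, zeta=0.1)
print(y[:5])                           # decaying oscillatory response
```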
arXiv Detail & Related papers (2025-05-17T23:15:17Z)
- Balanced Neural ODEs: nonlinear model order reduction and Koopman operator approximations [0.0]
Variational Autoencoders (VAEs) are a powerful framework for learning latent representations of reduced dimensionality. Neural ODEs excel in learning transient system dynamics. We show that standard Latent ODEs struggle with dimensionality reduction in systems with time-varying inputs.
arXiv Detail & Related papers (2024-10-14T05:45:52Z)
- Path-minimizing Latent ODEs for improved extrapolation and inference [0.0]
Latent ODE models provide flexible descriptions of dynamic systems, but they can struggle with extrapolation and predicting complicated non-linear dynamics.
In this paper we exploit this dichotomy by encouraging time-independent latent representations.
By replacing the common variational penalty in latent space with an $\ell$ penalty on the path length of each system, the models learn data representations that can easily be distinguished from those of systems with different configurations.
This results in faster training, smaller models, and more accurate long-time extrapolation compared to baseline latent ODE models with GRU, RNN, and LSTM encoder/decoders.
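The penalty described above is straightforward to sketch: sum the step lengths of each latent trajectory in place of the usual KL term (the exact norm and weighting used in the paper may differ):

```python
import numpy as np

def path_length_penalty(z):
    """Penalty on the latent path length of one trajectory, used in
    place of the usual variational (KL) penalty, as the summary
    describes. z: (T, d) latent states over T time steps; returns the
    sum of step lengths ||z_{t+1} - z_t||."""
    return np.linalg.norm(np.diff(z, axis=0), axis=1).sum()

z = np.cumsum(np.random.default_rng(2).normal(size=(50, 3)), axis=0)
print(path_length_penalty(z))
```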
arXiv Detail & Related papers (2024-10-11T15:50:01Z)
- Latent Space Energy-based Neural ODEs [73.01344439786524]
This paper introduces novel deep dynamical models designed to represent continuous-time sequences. We train the model using maximum likelihood estimation with Markov chain Monte Carlo. Experimental results on oscillating systems, videos, and real-world state sequences (MuJoCo) demonstrate that our model with the learnable energy-based prior outperforms existing counterparts.
arXiv Detail & Related papers (2024-09-05T18:14:22Z)
- Generative Modeling of Regular and Irregular Time Series Data via Koopman VAEs [50.25683648762602]
We introduce Koopman VAE, a new generative framework that is based on a novel design for the model prior.
Inspired by Koopman theory, we represent the latent conditional prior dynamics using a linear map.
KoVAE outperforms state-of-the-art GAN and VAE methods across several challenging synthetic and real-world time series generation benchmarks.
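A minimal sketch of the linear latent prior the summary describes, assuming a conditional Gaussian p(z_t | z_{t-1}) = N(A z_{t-1}, sigma^2 I); the full KoVAE encoder/decoder architecture is not shown:

```python
import numpy as np

def linear_prior_logpdf(z, A, sigma=1.0):
    """Log-density of a latent trajectory under a linear conditional
    prior p(z_t | z_{t-1}) = N(A z_{t-1}, sigma^2 I), the Koopman-inspired
    design the summary attributes to KoVAE. Illustrative sketch only.
    z: (T, d) latent trajectory; A: (d, d) linear map."""
    resid = z[1:] - z[:-1] @ A.T                  # one-step prediction errors
    d = z.shape[1]
    return (-0.5 * (resid**2).sum() / sigma**2
            - 0.5 * resid.shape[0] * d * np.log(2 * np.pi * sigma**2))

A = 0.9 * np.eye(2)                               # stable toy dynamics
z = np.random.default_rng(3).normal(size=(20, 2))
print(linear_prior_logpdf(z, A))
```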
arXiv Detail & Related papers (2023-10-04T07:14:43Z)
- Learning and Inference in Sparse Coding Models with Langevin Dynamics [3.0600309122672726]
We describe a system capable of inference and learning in a probabilistic latent variable model.
We demonstrate this idea for a sparse coding model by deriving a continuous-time equation for inferring its latent variables via Langevin dynamics.
We show that Langevin dynamics lead to an efficient procedure for sampling from the posterior distribution in the 'L0 sparse' regime, where latent variables are encouraged to be set to zero as opposed to having a small L1 norm.
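The sampler referred to here is unadjusted Langevin dynamics, which follows the energy gradient plus injected noise; a generic sketch (not the paper's sparse-coding circuit, with an isotropic Gaussian as a stand-in energy):

```python
import numpy as np

def langevin_sample(grad_energy, z0, step=1e-3, n_steps=1000, rng=None):
    """Unadjusted Langevin dynamics for sampling from p(z) ~ exp(-E(z)):
    z <- z - step * grad E(z) + sqrt(2 * step) * noise."""
    if rng is None:
        rng = np.random.default_rng(0)
    z = z0.copy()
    for _ in range(n_steps):
        z += -step * grad_energy(z) + np.sqrt(2 * step) * rng.normal(size=z.shape)
    return z

# Sample from a standard Gaussian, where E(z) = ||z||^2 / 2, so grad E = z:
z = langevin_sample(lambda z: z, np.zeros(4), step=1e-2, n_steps=5000)
print(z)
```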
arXiv Detail & Related papers (2022-04-23T23:16:47Z)
- Liquid Time-constant Networks [117.57116214802504]
We introduce a new class of time-continuous recurrent neural network models.
Instead of declaring a learning system's dynamics by implicit nonlinearities, we construct networks of linear first-order dynamical systems.
These neural networks exhibit stable and bounded behavior and yield superior expressivity within the family of neural ordinary differential equations.
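A hedged sketch of the idea: each unit is a linear first-order system whose effective time constant is modulated by a nonlinear gate, stepped here with a fused semi-implicit update in the style of the LTC solver (our simplified parameterization, not the paper's full cell):

```python
import numpy as np

def ltc_step(x, I, W, tau, A, dt=0.05):
    """One fused semi-implicit Euler step of a liquid time-constant cell:
    dx/dt = -x/tau + f(x, I) * (A - x), where the nonlinear gate f
    modulates the effective time constant. Illustrative sketch."""
    pre = W @ np.concatenate([x, I])       # interlinked gate pre-activation
    f = 1.0 / (1.0 + np.exp(-pre))         # sigmoid gate in (0, 1)
    return (x + dt * f * A) / (1 + dt * (1 / tau + f))

rng = np.random.default_rng(4)
x, W = np.zeros(3), rng.normal(size=(3, 5))
for t in range(100):
    x = ltc_step(x, I=np.array([np.sin(0.1 * t), 1.0]), W=W,
                 tau=1.0, A=0.5)
print(x)
```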
arXiv Detail & Related papers (2020-06-08T09:53:35Z)