On Geometry Regularization in Autoencoder Reduced-Order Models with Latent Neural ODE Dynamics
- URL: http://arxiv.org/abs/2603.03238v1
- Date: Tue, 03 Mar 2026 18:31:13 GMT
- Title: On Geometry Regularization in Autoencoder Reduced-Order Models with Latent Neural ODE Dynamics
- Authors: Mikhail Osipov
- Abstract summary: We investigate geometric regularization strategies for learned latent representations in encoder--decoder reduced-order models. Across multiple seeds, we find that (a)--(c) often produce latent representations that make subsequent latent-dynamics training with a frozen autoencoder more difficult. In contrast, (d) consistently improves conditioning-related diagnostics of the learned latent dynamics and tends to yield better rollout performance.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We investigate geometric regularization strategies for learned latent representations in encoder--decoder reduced-order models. In a fixed experimental setting for the advection--diffusion--reaction (ADR) equation, we model latent dynamics using a neural ODE and evaluate four regularization approaches applied during autoencoder pre-training: (a) near-isometry regularization of the decoder Jacobian, (b) a stochastic decoder gain penalty based on random directional gains, (c) a second-order directional curvature penalty, and (d) Stiefel projection of the first decoder layer. Across multiple seeds, we find that (a)--(c) often produce latent representations that make subsequent latent-dynamics training with a frozen autoencoder more difficult, especially for long-horizon rollouts, even when they improve local decoder smoothness or related sensitivity proxies. In contrast, (d) consistently improves conditioning-related diagnostics of the learned latent dynamics and tends to yield better rollout performance. We discuss the hypothesis that, in this setting, the downstream impact of latent-geometry mismatch outweighs the benefits of improved decoder smoothness.
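For concreteness, here is a minimal PyTorch sketch of two of the four regularizers: the near-isometry Jacobian penalty (a), estimated with random directional Jacobian--vector products, and the Stiefel projection of the first decoder layer (d). It assumes a decoder mapping a latent batch of shape (batch, latent) to flattened fields; the function names and exact penalty forms are illustrative assumptions, not the paper's reference implementation.

```python
import torch

def near_isometry_penalty(decoder, z, n_dirs=4):
    # (a) Penalize deviation of the decoder Jacobian J(z) from an isometry,
    # estimated stochastically: E_v[(||J(z) v|| - ||v||)^2] over random unit v.
    penalty = z.new_zeros(())
    for _ in range(n_dirs):
        v = torch.randn_like(z)
        v = v / v.norm(dim=-1, keepdim=True)
        # Jacobian-vector product; create_graph=True so the penalty is
        # itself differentiable with respect to the decoder parameters.
        _, Jv = torch.autograd.functional.jvp(
            decoder, (z,), (v,), create_graph=True)
        penalty = penalty + ((Jv.flatten(1).norm(dim=-1) - 1.0) ** 2).mean()
    return penalty / n_dirs

@torch.no_grad()
def stiefel_project_(first_layer):
    # (d) Snap the first decoder layer's weight onto the Stiefel manifold
    # (orthonormal columns) via the polar factor of its SVD.
    U, _, Vh = torch.linalg.svd(first_layer.weight, full_matrices=False)
    first_layer.weight.copy_(U @ Vh)
```

Calling `stiefel_project_` after each optimizer step would keep the first layer on the manifold throughout pre-training, which is one plausible reading of "Stiefel projection" here.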
Related papers
- From Sparse Sensors to Continuous Fields: STRIDE for Spatiotemporal Reconstruction
We present STRIDE, a framework that maps high-dimensional spatial fields to a latent state with a temporal decoder. We show that STRIDE supports super-resolution and remains robust to noise.
arXiv Detail & Related papers (2026-02-04T04:39:23Z) - Parallel Diffusion Solver via Residual Dirichlet Policy Optimization
Diffusion models (DMs) have achieved state-of-the-art generative performance but suffer from high sampling latency due to their sequential denoising nature. Existing solver-based acceleration methods often face significant image quality degradation under a low-latency budget. We propose the Ensemble Parallel Direction solver (dubbed EPD-EPr), a novel ODE solver that mitigates these errors by incorporating multiple parallel gradient evaluations in each step.
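As a rough illustration of the idea only (not the paper's algorithm), one such step might evaluate the noise-prediction network at several intermediate times in parallel and combine the resulting directions with learned weights; `taus` and `weights` below are hypothetical parameters introduced for this sketch.

```python
import torch

def parallel_direction_step(eps_model, x_t, t, t_next, taus, weights):
    # Evaluate the noise-prediction network at several intermediate times;
    # these calls are independent and can run in parallel (e.g., batched).
    dirs = torch.stack([eps_model(x_t, t + tau * (t_next - t)) for tau in taus])
    # Weighted combination of the candidate directions (weights assumed learned).
    d = torch.einsum('k,k...->...', weights, dirs)
    return x_t + (t_next - t) * d  # one explicit Euler-style update
```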
arXiv Detail & Related papers (2025-12-28T05:48:55Z) - Zero-Variance Gradients for Variational Autoencoders
Training deep generative models like Variational Autoencoders (VAEs) is often hindered by the need to backpropagate gradients through the sampling of their latent variables. In this paper, we propose a new perspective that sidesteps this problem, which we call Silent Gradients. Instead of improving estimators, we leverage specific decoder architectures to compute the expected ELBO analytically, yielding a gradient with zero variance.
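To see why an analytic expectation removes gradient variance, consider the simplest case the idea admits: a linear Gaussian decoder, for which the expected reconstruction error has a closed form. This special case is an illustrative assumption; the paper treats specific decoder architectures more generally.

```python
import torch

def expected_recon_mse(x, mu, log_var, W, b):
    # E_{z ~ N(mu, diag(exp(log_var)))} ||x - (W z + b)||^2 in closed form:
    # mean term + sum_j sigma_j^2 * ||w_j||^2. No sampling is involved, so
    # the gradient of this quantity has exactly zero Monte Carlo variance.
    mean_term = ((x - (mu @ W.T + b)) ** 2).sum(dim=-1)
    col_norms_sq = (W ** 2).sum(dim=0)           # ||w_j||^2 per latent dim
    var_term = (log_var.exp() * col_norms_sq).sum(dim=-1)
    return mean_term + var_term
```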
arXiv Detail & Related papers (2025-08-05T15:54:21Z) - X$^{2}$-Gaussian: 4D Radiative Gaussian Splatting for Continuous-time Tomographic Reconstruction
Current methods discretize temporal resolution into fixed phases with respiratory gating devices. X$^{2}$-Gaussian, a novel framework, enables continuous-time 4DCT reconstruction by integrating dynamic radiative splatting with self-supervised respiratory motion learning.
arXiv Detail & Related papers (2025-03-27T17:59:57Z) - Augmented Invertible Koopman Autoencoder for long-term time series forecasting
We present the Augmented Invertible Koopman AutoEncoder (AIKAE) as a new class of neural autoencoder-based implementations of the Koopman operator. We demonstrate the relevance of the AIKAE through a series of long-term time series forecasting experiments, on satellite image time series as well as on a benchmark involving predictions based on a large lookback window of observations.
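A bare-bones Koopman autoencoder, for orientation only: a nonlinear encoder/decoder pair around a single linear latent operator advanced over time. The AIKAE's invertibility and augmentation mechanisms are omitted, and all module sizes are assumptions.

```python
import torch
import torch.nn as nn

class KoopmanAE(nn.Module):
    def __init__(self, dim, latent, width=128):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim, width), nn.Tanh(),
                                 nn.Linear(width, latent))
        self.dec = nn.Sequential(nn.Linear(latent, width), nn.Tanh(),
                                 nn.Linear(width, dim))
        # Linear Koopman operator: dynamics are linear in latent space.
        self.K = nn.Linear(latent, latent, bias=False)

    def rollout(self, x0, steps):
        z = self.enc(x0)
        preds = []
        for _ in range(steps):
            z = self.K(z)              # advance one step linearly
            preds.append(self.dec(z))  # decode back to observation space
        return torch.stack(preds, dim=1)
```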
arXiv Detail & Related papers (2025-03-17T08:40:50Z) - Path-minimizing Latent ODEs for improved extrapolation and inference
Latent ODE models provide flexible descriptions of dynamic systems, but they can struggle with extrapolation and predicting complicated non-linear dynamics.
In this paper we exploit this dichotomy by encouraging time-independent latent representations.
By replacing the common variational penalty in latent space with an $\ell_2$ penalty on the path length of each system, the models learn data representations that can easily be distinguished from those of systems with different configurations.
This results in faster training, smaller models, and more accurate long-time extrapolation compared to baseline ODE models with GRU, RNN, and LSTM encoders/decoders.
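The path-length idea is easy to state concretely: discretize the latent trajectory and penalize the lengths of its segments in place of a KL term. A minimal sketch follows, assuming a trajectory tensor of shape (batch, time, latent); whether the paper squares segment lengths or the total length is not specified in the snippet above.

```python
import torch

def path_length_penalty(z_traj):
    # Discrete stand-in for an l2 penalty on latent path length:
    # sum over segments of ||z_{t+1} - z_t||^2, averaged over the batch.
    diffs = z_traj[:, 1:] - z_traj[:, :-1]
    return diffs.pow(2).sum(dim=(-2, -1)).mean()
```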
arXiv Detail & Related papers (2024-10-11T15:50:01Z) - Complexity Matters: Rethinking the Latent Space for Generative Modeling
In generative modeling, numerous successful approaches leverage a low-dimensional latent space, e.g., Stable Diffusion.
In this study, we aim to shed light on this under-explored topic by rethinking the latent space from the perspective of model complexity.
arXiv Detail & Related papers (2023-07-17T07:12:29Z) - Conditional Denoising Diffusion for Sequential Recommendation
Two prominent generative models, Generative Adversarial Networks (GANs) and Variational AutoEncoders (VAEs), each have known failure modes: GANs suffer from unstable optimization, while VAEs are prone to posterior collapse and over-smoothed generations.
We present a conditional denoising diffusion model, which includes a sequence encoder, a cross-attentive denoising decoder, and a step-wise diffuser.
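A compact sketch of the described three-part architecture, with all module sizes and names invented for illustration: a Transformer sequence encoder produces context, a denoising decoder cross-attends to that context, and a diffusion-step embedding stands in for the step-wise diffuser's conditioning.

```python
import torch
import torch.nn as nn

class CrossAttnDenoiser(nn.Module):
    def __init__(self, d=64, n_heads=4, n_steps=1000):
        super().__init__()
        self.seq_enc = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d, n_heads, batch_first=True),
            num_layers=2)
        self.cross = nn.MultiheadAttention(d, n_heads, batch_first=True)
        self.step_emb = nn.Embedding(n_steps, d)  # diffusion-step conditioning
        self.out = nn.Linear(d, d)

    def forward(self, noisy_tgt, hist, s):
        ctx = self.seq_enc(hist)                     # encode interaction history
        q = noisy_tgt + self.step_emb(s)             # condition on step s
        h, _ = self.cross(q.unsqueeze(1), ctx, ctx)  # cross-attentive denoising
        return self.out(h.squeeze(1))                # predicted denoised embedding
```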
arXiv Detail & Related papers (2023-04-22T15:32:59Z) - Variational Laplace Autoencoders
Variational autoencoders employ an amortized inference model to approximate the posterior of latent variables.
We present a novel approach that addresses the limited posterior expressiveness of the fully-factorized Gaussian assumption.
We also present a general framework named Variational Laplace Autoencoders (VLAEs) for training deep generative models.
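The core move in a Laplace-style posterior is sketched below, assuming a single latent vector and a differentiable log-joint: climb to the posterior mode, then use the curvature there as a full-covariance Gaussian precision. How VLAEs amortize and train this is beyond the snippet.

```python
import torch

def laplace_posterior(log_joint, z_init, steps=50, lr=0.1):
    # Find the mode z* of log p(x, z) in z, then approximate the posterior
    # p(z|x) by N(z*, (-Hessian)^{-1}) evaluated at that mode.
    z = z_init.clone().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = -log_joint(z)     # ascend the log joint
        loss.backward()
        opt.step()
    z_star = z.detach()
    H = torch.autograd.functional.hessian(log_joint, z_star)
    cov = torch.linalg.inv(-H)   # full covariance, not factorized
    return z_star, cov
```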
arXiv Detail & Related papers (2022-11-30T18:59:27Z) - On the Encoder-Decoder Incompatibility in Variational Text Modeling and Beyond
Variational autoencoders (VAEs) combine latent variables with amortized variational inference.
We observe the encoder-decoder incompatibility that leads to poor parameterizations of the data manifold.
We propose Coupled-VAE, which couples a VAE model with a deterministic autoencoder of the same structure.
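A rough rendering of the coupling, where the coupling term itself is an assumption (the paper defines its own): a VAE and a structurally identical deterministic autoencoder reconstruct the same input, and their latent codes are pulled together. The encoder is assumed to return a mean and log-variance.

```python
import torch

def coupled_vae_losses(enc, dec, enc_det, dec_det, x, beta=1.0):
    mu, log_var = enc(x)
    z = mu + (0.5 * log_var).exp() * torch.randn_like(mu)  # reparameterize
    kl = 0.5 * (mu.pow(2) + log_var.exp() - 1 - log_var).sum(-1).mean()
    rec_vae = (x - dec(z)).pow(2).sum(-1).mean()
    h = enc_det(x)                                         # deterministic code
    rec_det = (x - dec_det(h)).pow(2).sum(-1).mean()
    couple = (mu - h).pow(2).sum(-1).mean()                # latent coupling
    return rec_vae + beta * kl + rec_det + couple
```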
arXiv Detail & Related papers (2020-04-20T10:34:10Z)