Temporal Lifting as Latent-Space Regularization for Continuous-Time Flow Models in AI Systems
- URL: http://arxiv.org/abs/2510.09805v1
- Date: Fri, 10 Oct 2025 19:06:32 GMT
- Title: Temporal Lifting as Latent-Space Regularization for Continuous-Time Flow Models in AI Systems
- Authors: Jeffrey Camlin
- Abstract summary: We present a latent-space formulation of adaptive temporal reparametrization for continuous-time dynamical systems. From the standpoint of machine-learning dynamics, temporal lifting acts as a continuous-time normalization or time-warping operator.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present a latent-space formulation of adaptive temporal reparametrization for continuous-time dynamical systems. The method, called *temporal lifting*, introduces a smooth monotone mapping $t \mapsto \tau(t)$ that regularizes near-singular behavior of the underlying flow while preserving its conservation laws. In the lifted coordinate, trajectories such as those of the incompressible Navier-Stokes equations on the torus $\mathbb{T}^3$ become globally smooth. From the standpoint of machine-learning dynamics, temporal lifting acts as a continuous-time normalization or time-warping operator that can stabilize physics-informed neural networks and other latent-flow architectures used in AI systems. The framework links analytic regularity theory with representation-learning methods for stiff or turbulent processes.
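As a rough illustration of the time-warping idea (an assumed toy construction, not the paper's method), the sketch below integrates the blow-up ODE $dy/dt = y^2$ in a lifted coordinate with $d\tau = g(y)\,dt$, choosing $g(y) = 1 + |f(y)|$ so the lifted velocity stays bounded; both the choice of $g$ and the forward-Euler scheme are illustrative assumptions.

```python
import numpy as np

def lifted_euler(f, y0, tau_end, dtau=1e-3):
    """Integrate dy/dt = f(y) in a lifted time tau with d tau/dt = g(y),
    where g(y) = 1 + |f(y)| keeps |dy/dtau| = |f|/g below 1."""
    y, t = float(y0), 0.0
    ts, ys = [t], [y]
    for _ in range(int(tau_end / dtau)):
        v = f(y)
        g = 1.0 + abs(v)        # lifting speed (illustrative choice)
        y += dtau * (v / g)     # dy/dtau = f(y) / g(y), globally bounded
        t += dtau / g           # recover physical time: dt = dtau / g(y)
        ts.append(t)
        ys.append(y)
    return np.array(ts), np.array(ys)

# dy/dt = y^2 blows up at t = 1/y0 in physical time, yet the lifted
# trajectory y(tau) stays smooth; physical time t(tau) approaches 1/y0.
ts, ys = lifted_euler(lambda y: y * y, y0=1.0, tau_end=20.0)
```

In lifted time the right-hand side is uniformly bounded, so the integrator never steps into the singularity; physical time accumulates ever more slowly as the blow-up is approached.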
Related papers
- KoopGen: Koopman Generator Networks for Representing and Predicting Dynamical Systems with Continuous Spectra [65.11254608352982]
We introduce a generator-based neural Koopman framework that models dynamics through a structured, state-dependent representation of Koopman generators. By exploiting the intrinsic Cartesian decomposition into skew-adjoint and self-adjoint components, KoopGen separates conservative transport from irreversible dissipation.
arXiv Detail & Related papers (2026-02-15T06:32:23Z) - Koopman Autoencoders with Continuous-Time Latent Dynamics for Fluid Dynamics Forecasting [17.98687936773676]
We introduce a continuous-time Koopman framework that models latent evolution through numerical integration schemes. By allowing variable timesteps at inference, the method demonstrates robustness to temporal resolution and generalizes beyond training regimes. We evaluate the approach on classical CFD benchmarks and report accuracy, stability, and extrapolation properties.
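A minimal sketch of the variable-timestep property for a continuous-time latent model: if latent dynamics are linear, $z' = Az$, the state can be advanced by any $\Delta t$ via the matrix exponential, and two half-steps agree exactly with one full step. The 2x2 generator below is an assumed toy example, not a learned Koopman generator.

```python
import numpy as np

def expm(M, terms=30):
    """Truncated Taylor series for the matrix exponential; adequate here
    because ||M|| is small (a library routine would be used in practice)."""
    out, term = np.eye(M.shape[0]), np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

A = np.array([[-0.1, -1.0],
              [1.0, -0.1]])        # damped rotation (illustrative)
z0 = np.array([1.0, 0.0])

# Timestep consistency: one step of size 0.2 vs. two steps of size 0.1.
z_full = expm(A * 0.2) @ z0
z_half = expm(A * 0.1) @ (expm(A * 0.1) @ z0)
```

Because $e^{A(s+t)} = e^{As}e^{At}$, inference at any temporal resolution yields consistent states, which is the property the paper's variable-timestep evaluation probes.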
arXiv Detail & Related papers (2026-02-02T21:33:07Z) - Backpropagation as Physical Relaxation: Exact Gradients in Finite Time [0.0]
"Dyadic Backpropagation" is the foundational algorithm for training neural networks. We show it emerges exactly as the finite-time relaxation of a physical dynamical system. We prove that unit-step Euler discretization, the natural timescale of layer transitions, recovers standard backpropagation exactly in precisely 2L steps.
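The correspondence between discretized physical relaxation and gradient-based training has a familiar baby case (a standard fact, not the paper's 2L-step result): one forward-Euler step of the gradient flow $\dot\theta = -\nabla L(\theta)$ with step size $\eta$ is exactly a gradient-descent update.

```python
import numpy as np

def grad_L(theta):
    # Gradient of the quadratic loss L(theta) = 0.5 * ||theta||^2.
    return theta

def euler_step(theta, eta):
    # Forward Euler on theta' = -grad L(theta) == one gradient-descent step.
    return theta - eta * grad_L(theta)

theta = np.array([2.0, -1.0])
for _ in range(50):
    theta = euler_step(theta, eta=0.1)
# theta has contracted by a factor 0.9 per step toward the minimizer 0.
```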
arXiv Detail & Related papers (2026-02-02T16:21:05Z) - Curly Flow Matching for Learning Non-gradient Field Dynamics [49.480209466896035]
We introduce Curly Flow Matching (Curly-FM), a novel approach to learning non-gradient field dynamics. Curly-FM learns such dynamics by designing and solving a Schrödinger bridge problem. Curly-FM can learn trajectories that better match both the reference process and population marginals.
arXiv Detail & Related papers (2025-10-30T16:11:39Z) - Forecasting Continuous Non-Conservative Dynamical Systems in SO(3) [51.510040541600176]
We propose a novel approach to modeling the rotation of moving objects in computer vision. Our approach is agnostic to energy and momentum conservation while being robust to input noise. By learning to approximate object dynamics from noisy states during training, our model attains robust extrapolation capabilities in simulation and various real-world settings.
arXiv Detail & Related papers (2025-08-11T09:03:10Z) - Generative System Dynamics in Recurrent Neural Networks [56.958984970518564]
We investigate the continuous-time dynamics of Recurrent Neural Networks (RNNs). We show that skew-symmetric weight matrices are fundamental to enable stable limit cycles in both linear and nonlinear configurations. Numerical simulations showcase how nonlinear activation functions not only maintain limit cycles but also enhance the numerical stability of the system integration process.
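The role of skew-symmetric weights can be checked directly in the linear case: for $\dot x = Wx$ with $W^\top = -W$, the norm $\|x(t)\|$ is conserved, so trajectories are sustained oscillations rather than decaying or exploding. The 2x2 example below uses the exact rotation solution (an assumed toy setup, not the paper's experiments).

```python
import numpy as np

omega = 2.0
W = np.array([[0.0, -omega],
              [omega, 0.0]])   # skew-symmetric: W.T == -W

def flow(x0, t):
    # Exact solution of x' = W x for this 2x2 skew generator: a rotation.
    c, s = np.cos(omega * t), np.sin(omega * t)
    return np.array([[c, -s], [s, c]]) @ x0

x0 = np.array([1.0, 0.5])
xs = [flow(x0, t) for t in np.linspace(0.0, 10.0, 200)]
norms = [np.linalg.norm(x) for x in xs]
# The norm is constant along the trajectory: a neutrally stable oscillation.
```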
arXiv Detail & Related papers (2025-04-16T10:39:43Z) - Smooth and Sparse Latent Dynamics in Operator Learning with Jerk Regularization [1.621267003497711]
This paper introduces a continuous operator learning framework that incorporates jerk regularization into the learning of the compressed latent space.
The framework allows for inference at any desired spatial or temporal resolution.
The effectiveness of this framework is demonstrated through a two-dimensional unsteady flow problem governed by the Navier-Stokes equations.
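A discrete stand-in for a jerk penalty (an illustrative finite-difference version, not the paper's exact regularizer) scores a latent trajectory by its mean squared third difference: smooth trajectories score near zero while oscillatory ones are heavily penalized.

```python
import numpy as np

def jerk_penalty(z, dt):
    """Mean squared third finite difference of a sampled trajectory z[t],
    a discrete proxy for the integral of |d^3 z / dt^3|^2."""
    jerk = np.diff(z, n=3, axis=0) / dt**3
    return float(np.mean(jerk ** 2))

t = np.linspace(0.0, 1.0, 101)          # dt = 0.01
smooth = t ** 2                          # quadratic: jerk is identically zero
wiggly = t ** 2 + 0.01 * np.sin(40 * t)  # same trend plus fast oscillation
```

Adding such a term to a reconstruction loss biases the learned latent dynamics toward smooth, sparse trajectories, which is the mechanism the framework exploits.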
arXiv Detail & Related papers (2024-02-23T22:38:45Z) - Triplet Attention Transformer for Spatiotemporal Predictive Learning [9.059462850026216]
We propose an innovative triplet attention transformer designed to capture both inter-frame dynamics and intra-frame static features.
The model incorporates the Triplet Attention Module (TAM), which replaces traditional recurrent units by exploring self-attention mechanisms in temporal, spatial, and channel dimensions.
arXiv Detail & Related papers (2023-10-28T12:49:33Z) - Semi-supervised Learning of Partial Differential Operators and Dynamical Flows [68.77595310155365]
We present a novel method that combines a hyper-network solver with a Fourier Neural Operator architecture.
We test our method on various time evolution PDEs, including nonlinear fluid flows in one, two, and three spatial dimensions.
The results show that the new method improves learning accuracy at the supervised time points and is able to interpolate the solutions to any intermediate time.
arXiv Detail & Related papers (2022-07-28T19:59:14Z) - Accelerated Continuous-Time Approximate Dynamic Programming via Data-Assisted Hybrid Control [0.0]
We introduce an algorithm that incorporates dynamic momentum in actor-critic structures to control continuous-time dynamic plants with an affine structure in the input.
By incorporating dynamic momentum in our algorithm, we are able to accelerate the convergence properties of the closed-loop system.
arXiv Detail & Related papers (2022-04-27T05:36:51Z) - Value Iteration in Continuous Actions, States and Time [99.00362538261972]
We propose a continuous fitted value iteration (cFVI) algorithm for continuous states and actions.
The optimal policy can be derived for non-linear control-affine dynamics.
Videos of the physical system are available at https://sites.google.com/view/value-iteration.
arXiv Detail & Related papers (2021-05-10T21:40:56Z) - Deep Learning of Conjugate Mappings [2.9097303137825046]
Henri Poincaré first made the connection by tracking consecutive iterations of the continuous flow with a lower-dimensional, transverse subspace.
This work proposes a method for obtaining explicit Poincaré mappings by using deep learning to construct an invertible coordinate transformation into a conjugate representation.
arXiv Detail & Related papers (2021-04-01T16:29:41Z) - Stochastically forced ensemble dynamic mode decomposition for forecasting and analysis of near-periodic systems [65.44033635330604]
We introduce a novel load forecasting method in which observed dynamics are modeled as a forced linear system.
We show that its use of intrinsic linear dynamics offers a number of desirable properties in terms of interpretability and parsimony.
Results are presented for a test case using load data from an electrical grid.
arXiv Detail & Related papers (2020-10-08T20:25:52Z) - Liquid Time-constant Networks [117.57116214802504]
We introduce a new class of time-continuous recurrent neural network models.
Instead of declaring a learning system's dynamics by implicit nonlinearities, we construct networks of linear first-order dynamical systems.
These neural networks exhibit stable and bounded behavior and yield superior expressivity within the family of neural ordinary differential equations.
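A single such unit can be sketched with the commonly cited liquid time-constant form $dx/dt = -x/\tau + f(x, I)(A - x)$, where the gate $f$ makes the effective time constant input-dependent; the scalar parameters and sigmoid gate below are illustrative assumptions, not trained values.

```python
import numpy as np

def ltc_step(x, I, dt, tau=1.0, A=1.0, w=1.0, b=0.0):
    """One explicit-Euler step of a scalar liquid time-constant unit:
    dx/dt = -x/tau + f(I) * (A - x), with a sigmoid gate f.
    The effective decay rate 1/tau + f varies with the input I."""
    f = 1.0 / (1.0 + np.exp(-(w * I + b)))   # input-dependent gate
    return x + dt * (-x / tau + f * (A - x))

# Drive the unit with a constant input; the state settles at the
# input-dependent fixed point x* = f / (1/tau + f), which stays in [0, A].
x = 0.0
for _ in range(1000):
    x = ltc_step(x, I=1.0, dt=0.01)
```

The bounded fixed point illustrates the stable, bounded behavior claimed for this model family.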
arXiv Detail & Related papers (2020-06-08T09:53:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.