Continuity-Preserving Convolutional Autoencoders for Learning Continuous Latent Dynamical Models from Images
- URL: http://arxiv.org/abs/2502.00754v1
- Date: Sun, 02 Feb 2025 11:31:58 GMT
- Title: Continuity-Preserving Convolutional Autoencoders for Learning Continuous Latent Dynamical Models from Images
- Authors: Aiqing Zhu, Yuting Pan, Qianxiao Li
- Abstract summary: Continuous dynamical systems are cornerstones of many scientific and engineering disciplines.
We propose continuity-preserving convolutional autoencoders (CpAEs) to learn continuous latent states and their corresponding continuous latent dynamical models from discrete image frames.
- Score: 12.767281330110626
- Abstract: Continuous dynamical systems are cornerstones of many scientific and engineering disciplines. While machine learning offers powerful tools to model these systems from trajectory data, challenges arise when these trajectories are captured as images, resulting in pixel-level observations that are discrete in nature. Consequently, a naive application of a convolutional autoencoder can result in latent coordinates that are discontinuous in time. To resolve this, we propose continuity-preserving convolutional autoencoders (CpAEs) to learn continuous latent states and their corresponding continuous latent dynamical models from discrete image frames. We present a mathematical formulation for learning dynamics from image frames, which illustrates issues with previous approaches and motivates our methodology based on promoting the continuity of convolution filters, thereby preserving the continuity of the latent states. This approach enables CpAEs to produce latent states that evolve continuously with the underlying dynamics, leading to more accurate latent dynamical models. Extensive experiments across various scenarios demonstrate the effectiveness of CpAEs.
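The abstract describes the mechanism only at a high level: continuity of the latent states is promoted by encouraging continuity of the convolution filters. The following is a minimal, hypothetical PyTorch sketch of that general idea, using a finite-difference smoothness penalty on the filters of a small convolutional autoencoder; the architecture, the penalty form, and the weighting are illustrative assumptions, not the paper's exact formulation.

```python
# Hypothetical sketch: a convolutional autoencoder with a filter-smoothness
# penalty intended to encourage latent states that vary continuously in time.
# The penalty form (finite differences between neighboring filter taps) is an
# illustrative assumption, not the paper's exact formulation.
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    def __init__(self, latent_dim=2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, latent_dim),      # assumes 64x64 input frames
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 32 * 16 * 16), nn.Unflatten(1, (32, 16, 16)),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

def filter_smoothness_penalty(model):
    """Penalize jumps between adjacent taps of every convolution filter."""
    penalty = 0.0
    for m in model.modules():
        if isinstance(m, (nn.Conv2d, nn.ConvTranspose2d)):
            w = m.weight
            penalty = penalty + (w[..., 1:, :] - w[..., :-1, :]).pow(2).mean()
            penalty = penalty + (w[..., :, 1:] - w[..., :, :-1]).pow(2).mean()
    return penalty

# Training-step sketch: reconstruction loss plus the smoothness penalty.
model = ConvAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
frames = torch.rand(8, 1, 64, 64)                     # stand-in image frames
recon, z = model(frames)
loss = nn.functional.mse_loss(recon, frames) + 1e-2 * filter_smoothness_penalty(model)
loss.backward()
opt.step()
```

The penalty is simply added to the usual reconstruction loss, so filters are discouraged from developing sharp jumps between neighboring taps while the autoencoder is trained as usual.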
Related papers
- Neural SDEs as a Unified Approach to Continuous-Domain Sequence Modeling [3.8980564330208662]
We propose a novel and intuitive approach to continuous sequence modeling.
Our method interprets time-series data as discrete samples from an underlying continuous dynamical system.
We derive a principled maximum likelihood objective and a simulation-free scheme for efficient training of our Neural SDE model.
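For orientation, a generic Neural SDE parameterizes drift and diffusion with small networks; the sketch below integrates such a model with Euler-Maruyama over irregular time stamps. It only illustrates the general setup; the simulation-free training scheme mentioned above is not implemented here, and all names and sizes are assumptions.

```python
# Generic Neural SDE sketch: dx = f_theta(x, t) dt + g_theta(x, t) dW.
# Integration uses plain Euler-Maruyama for illustration; the paper above
# describes a simulation-free training scheme, which this sketch does not implement.
import torch
import torch.nn as nn

class NeuralSDE(nn.Module):
    def __init__(self, dim=2, hidden=64):
        super().__init__()
        self.drift = nn.Sequential(nn.Linear(dim + 1, hidden), nn.Tanh(), nn.Linear(hidden, dim))
        self.diffusion = nn.Sequential(nn.Linear(dim + 1, hidden), nn.Tanh(),
                                       nn.Linear(hidden, dim), nn.Softplus())

    def step(self, x, t, dt):
        xt = torch.cat([x, t.expand(x.shape[0], 1)], dim=-1)
        dw = torch.randn_like(x) * dt.sqrt()
        return x + self.drift(xt) * dt + self.diffusion(xt) * dw

# Roll the SDE forward over irregular observation times.
model = NeuralSDE()
x = torch.zeros(16, 2)                        # batch of initial states
times = torch.tensor([0.0, 0.1, 0.25, 0.7])   # arbitrary time stamps
for t0, t1 in zip(times[:-1], times[1:]):
    x = model.step(x, t0.view(1, 1), (t1 - t0).view(1, 1))
```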
arXiv Detail & Related papers (2025-01-31T03:47:22Z)
- Community-Aware Temporal Walks: Parameter-Free Representation Learning on Continuous-Time Dynamic Graphs [3.833708891059351]
Community-aware Temporal Walks (CTWalks) is a novel framework for representation learning on continuous-time dynamic graphs.
CTWalks integrates a community-based parameter-free temporal walk sampling mechanism, an anonymization strategy enriched with community labels, and an encoding process.
Experiments on benchmark datasets demonstrate that CTWalks outperforms established methods in temporal link prediction tasks.
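A minimal, hypothetical sketch of the temporal-walk primitive underlying such methods (walks that only traverse edges with non-decreasing timestamps) is given below; the community-based sampling and anonymization described above are omitted, and the edge format is an assumption.

```python
# Minimal time-respecting random-walk sampler on a continuous-time dynamic graph.
# Edges are (u, v, timestamp); a walk only follows edges whose timestamps do not
# decrease. The community-aware sampling and anonymization from the paper are
# omitted; this only sketches the underlying temporal-walk primitive.
import random
from collections import defaultdict

def build_adjacency(edges):
    adj = defaultdict(list)                  # node -> list of (neighbor, time)
    for u, v, t in edges:
        adj[u].append((v, t))
        adj[v].append((u, t))
    return adj

def temporal_walk(adj, start, start_time, length):
    walk, node, t = [start], start, start_time
    for _ in range(length):
        candidates = [(v, ts) for v, ts in adj[node] if ts >= t]
        if not candidates:
            break
        node, t = random.choice(candidates)  # uniform choice, no tuning parameters
        walk.append(node)
    return walk

edges = [(0, 1, 1.0), (1, 2, 2.5), (2, 3, 3.0), (1, 3, 4.2)]
print(temporal_walk(build_adjacency(edges), start=0, start_time=0.0, length=3))
```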
arXiv Detail & Related papers (2025-01-21T04:16:46Z)
- A Poisson-Gamma Dynamic Factor Model with Time-Varying Transition Dynamics [51.147876395589925]
A non-stationary PGDS is proposed to allow the underlying transition matrices to evolve over time.
A fully-conjugate and efficient Gibbs sampler is developed to perform posterior simulation.
Experiments show that, in comparison with related models, the proposed non-stationary PGDS achieves improved predictive performance.
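As a rough illustration of a Poisson-Gamma dynamical system with drifting transition dynamics, one could write the following generative sketch; the gamma parameterization and the way the transition matrix evolves are assumptions for illustration, not the paper's exact model or its Gibbs sampler.

```python
# Illustrative generative sketch of a Poisson-Gamma dynamical system whose
# transition matrix drifts over time. The parameterization (gamma shape/scale,
# how Pi evolves) is an assumption for illustration, not the paper's exact model.
import numpy as np

rng = np.random.default_rng(0)
K, V, T, tau = 5, 20, 30, 10.0             # factors, features, time steps, concentration
Phi = rng.dirichlet(np.ones(K), size=V)    # V x K loading matrix
Pi = rng.dirichlet(np.ones(K), size=K).T   # column-stochastic K x K transition matrix
theta = rng.gamma(1.0, 1.0, size=K)        # initial latent factors

counts = np.zeros((T, V), dtype=int)
for t in range(T):
    # Slowly perturb the transition matrix to mimic time-varying dynamics.
    Pi = np.clip(Pi + 0.01 * rng.normal(size=(K, K)), 1e-6, None)
    Pi = Pi / Pi.sum(axis=0, keepdims=True)
    # Gamma-Markov latent factors and Poisson observations.
    theta = rng.gamma(shape=tau * (Pi @ theta), scale=1.0 / tau)
    counts[t] = rng.poisson(Phi @ theta)
```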
arXiv Detail & Related papers (2024-02-26T04:39:01Z)
- Learning In-between Imagery Dynamics via Physical Latent Spaces [0.7366405857677226]
We present a framework designed to learn the underlying dynamics between two images observed at consecutive time steps.
By incorporating a latent variable that follows a physical model expressed in partial differential equations (PDEs), our approach ensures the interpretability of the learned model.
We demonstrate the robustness and effectiveness of our learning framework through a series of numerical tests using geoscientific imagery data.
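The general pattern (encode an image into a latent field, advance the field with a known PDE, decode back to image space) can be sketched as follows; the choice of a 2-D diffusion equation with periodic boundaries and the tiny encoder/decoder are illustrative assumptions.

```python
# Sketch of the "physical latent space" pattern: an encoder maps an image to a
# latent field, the field is advanced by a known PDE (here simple 2-D diffusion
# with periodic boundaries, chosen only for illustration), and a decoder maps it
# back to image space.
import torch
import torch.nn as nn

encoder = nn.Conv2d(1, 1, kernel_size=3, padding=1)   # image -> latent field
decoder = nn.Conv2d(1, 1, kernel_size=3, padding=1)   # latent field -> image

def diffusion_step(u, nu=0.1, dt=0.01, dx=1.0):
    """One explicit finite-difference step of u_t = nu * (u_xx + u_yy)."""
    lap = (torch.roll(u, 1, dims=-1) + torch.roll(u, -1, dims=-1)
           + torch.roll(u, 1, dims=-2) + torch.roll(u, -1, dims=-2) - 4 * u) / dx**2
    return u + dt * nu * lap

frame_t = torch.rand(1, 1, 32, 32)        # observed image at time t
u = encoder(frame_t)                      # latent physical field
for _ in range(10):                       # advance the PDE between observations
    u = diffusion_step(u)
pred_next = decoder(u)                    # prediction of the next observed image
```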
arXiv Detail & Related papers (2023-10-14T05:14:51Z)
- Neural Continuous-Discrete State Space Models for Irregularly-Sampled Time Series [18.885471782270375]
NCDSSM employs auxiliary variables to disentangle recognition from dynamics, thus requiring amortized inference only for the auxiliary variables.
We propose three flexible parameterizations of the latent dynamics and an efficient training objective that marginalizes the dynamic states during inference.
Empirical results on multiple benchmark datasets show improved imputation and forecasting performance of NCDSSM over existing models.
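A simplified stand-in for continuous-discrete state-space filtering is a linear-Gaussian Kalman filter that predicts the latent state in continuous time and corrects it at irregular observation times, as sketched below; NCDSSM's flexible parameterizations and amortized inference are not reproduced, and all matrices are placeholders.

```python
# Simplified continuous-discrete filtering for irregular observation times:
# a latent linear SDE dz = A z dt + noise is propagated forward in continuous
# time and corrected at each (irregular) observation. This linear-Gaussian
# special case only illustrates the setting; NCDSSM's parameterizations and
# amortized inference are not reproduced here.
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-1.0, -0.1]])   # assumed latent dynamics
H = np.array([[1.0, 0.0]])                  # observe the first latent coordinate
Q, R = 0.01 * np.eye(2), np.array([[0.05]])

def predict(m, P, dt):
    F = expm(A * dt)                        # exact mean propagation for a linear SDE
    return F @ m, F @ P @ F.T + Q * dt      # Q * dt: crude process-noise approximation

def update(m, P, y):
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    return m + K @ (y - H @ m), (np.eye(2) - K @ H) @ P

m, P, t_prev = np.zeros(2), np.eye(2), 0.0
observations = [(0.3, np.array([0.9])), (0.5, np.array([0.7])), (1.4, np.array([-0.2]))]
for t, y in observations:                   # irregular time stamps
    m, P = predict(m, P, t - t_prev)
    m, P = update(m, P, y)
    t_prev = t
```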
arXiv Detail & Related papers (2023-01-26T18:45:04Z)
- Learning continuous models for continuous physics [94.42705784823997]
We develop a test based on numerical analysis theory to validate machine learning models for science and engineering applications.
Our results illustrate how principled numerical analysis methods can be coupled with existing ML training/testing methodologies to validate models for science and engineering applications.
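One simple instance of such a numerical-analysis-style check is to estimate the empirical convergence order of a learned vector field under step-size refinement; the sketch below does this with forward Euler on a toy model and is only an illustration of the idea, not the paper's exact validation procedure.

```python
# If a learned model f(x) really represents a continuous vector field
# dx/dt = f(x), integrating it with a p-th order method and halving the step
# size should shrink the error by roughly 2**p. Here p = 1 (forward Euler).
import numpy as np

def learned_f(x):                 # stand-in for a trained model
    return -x

def integrate(f, x0, T, dt):      # forward Euler
    x, t = x0, 0.0
    while t < T - 1e-12:
        x = x + dt * f(x)
        t += dt
    return x

x0, T = 1.0, 1.0
exact = x0 * np.exp(-T)
errs = [abs(integrate(learned_f, x0, T, dt) - exact) for dt in (0.1, 0.05, 0.025)]
orders = [np.log2(errs[i] / errs[i + 1]) for i in range(len(errs) - 1)]
print(orders)                     # should be close to 1 for forward Euler
```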
arXiv Detail & Related papers (2022-02-17T07:56:46Z)
- Continuous Latent Process Flows [47.267251969492484]
Partial observations of continuous time-series dynamics at arbitrary time stamps arise in many disciplines. Fitting this type of data with statistical models that have continuous dynamics is not only intuitively appealing but also practically beneficial.
We tackle these challenges with continuous latent process flows (CLPF), a principled architecture decoding continuous latent processes into continuous observable processes using a time-dependent normalizing flow driven by a differential equation.
Our ablation studies demonstrate the effectiveness of our contributions in various inference tasks on irregular time grids.
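A compact caricature of this pattern, assuming an Ornstein-Uhlenbeck latent process and a time-conditioned affine flow decoder with an exact log-density, is sketched below; the actual CLPF architecture and training objective differ.

```python
# Caricature of the continuous-latent-process-flow pattern: a latent
# Ornstein-Uhlenbeck process is sampled at arbitrary time stamps and decoded by
# a time-conditioned affine flow, whose change of variables gives an exact
# log-density. The real CLPF architecture and training objective differ.
import torch
import torch.nn as nn

class AffineDecoder(nn.Module):
    def __init__(self, z_dim=1, x_dim=1, hidden=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(z_dim + 1, hidden), nn.Tanh(),
                                 nn.Linear(hidden, 2 * x_dim))

    def log_prob(self, x, z, t):
        shift, log_scale = self.net(torch.cat([z, t], dim=-1)).chunk(2, dim=-1)
        eps = (x - shift) * torch.exp(-log_scale)      # invert x = shift + scale * eps
        base = -0.5 * (eps ** 2 + torch.log(torch.tensor(2 * torch.pi)))
        return (base - log_scale).sum(-1)              # change-of-variables term

decoder = AffineDecoder()
z, t_prev, loglik = torch.zeros(1, 1), 0.0, 0.0
for t, x in [(0.2, torch.tensor([[0.4]])), (0.9, torch.tensor([[0.1]]))]:
    dt = t - t_prev
    # Exact transition of the OU latent process dz = -z dt + dW between time stamps.
    decay = torch.exp(torch.tensor(-dt))
    z = z * decay + torch.randn_like(z) * ((1 - decay ** 2) / 2) ** 0.5
    loglik = loglik + decoder.log_prob(x, z, torch.full((1, 1), t))
    t_prev = t
```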
arXiv Detail & Related papers (2021-06-29T17:16:04Z)
- Causal Navigation by Continuous-time Neural Networks [108.84958284162857]
We propose a theoretical and experimental framework for learning causal representations using continuous-time neural networks.
We evaluate our method in the context of visual-control learning of drones over a series of complex tasks.
arXiv Detail & Related papers (2021-06-15T17:45:32Z)
- Value Iteration in Continuous Actions, States and Time [99.00362538261972]
We propose a continuous fitted value iteration (cFVI) algorithm for continuous states and actions.
The optimal policy can be derived for non-linear control-affine dynamics.
Videos of the physical system are available at https://sites.google.com/view/value-iteration.
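For control-affine dynamics x' = a(x) + B(x) u with a quadratic action cost, continuous-time reasoning gives a closed-form greedy action u*(x) = -1/2 R^{-1} B(x)^T grad V(x), which can be plugged into a fitted value-iteration target; the sketch below illustrates this under placeholder dynamics, costs, and network sizes, and is not the paper's exact algorithm.

```python
# Sketch of fitted value iteration with continuous actions for control-affine
# dynamics. With a quadratic action cost u^T R u, the greedy action has the
# closed form u*(x) = -0.5 * R^{-1} B(x)^T grad V(x). Everything below
# (dynamics, cost, network) is a placeholder, not the paper's exact algorithm.
import torch
import torch.nn as nn

V = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 1))
opt = torch.optim.Adam(V.parameters(), lr=1e-3)
R_inv, dt, gamma = torch.eye(1), 0.02, 0.99

def drift(x):        # placeholder pendulum-like dynamics, control-affine in u
    return torch.stack([x[:, 1], -torch.sin(x[:, 0])], dim=-1)

def B(x):            # actuation matrix g(x)
    return torch.tensor([[0.0], [1.0]]).expand(x.shape[0], 2, 1)

def state_cost(x):
    return (x ** 2).sum(-1, keepdim=True)

x = ((torch.rand(128, 2) - 0.5) * 2.0).requires_grad_(True)
grad_V = torch.autograd.grad(V(x).sum(), x)[0]
u = -0.5 * (R_inv @ B(x).transpose(1, 2) @ grad_V.unsqueeze(-1)).squeeze(-1)
x_next = x + dt * (drift(x) + (B(x) @ u.unsqueeze(-1)).squeeze(-1))
cost = dt * (state_cost(x) + (u ** 2).sum(-1, keepdim=True))
with torch.no_grad():
    target = -cost + gamma ** dt * V(x_next)   # value = accumulated negative cost
loss = nn.functional.mse_loss(V(x), target)
opt.zero_grad(); loss.backward(); opt.step()
```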
arXiv Detail & Related papers (2021-05-10T21:40:56Z)
- Learning Temporal Dynamics from Cycles in Narrated Video [85.89096034281694]
We propose a self-supervised solution to the problem of learning to model how the world changes as time elapses.
Our model learns modality-agnostic functions to predict forward and backward in time, which must undo each other when composed.
We apply the learned dynamics model without further training to various tasks, such as predicting future action and temporally ordering sets of images.
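A minimal sketch of the cycle-consistency idea (forward and backward prediction functions that must undo each other when composed) is given below; the embeddings and architectures are placeholders rather than the paper's modality-agnostic setup over narrated video.

```python
# Minimal sketch of the cycle-consistency idea: a forward model and a backward
# model over feature embeddings that should undo each other when composed.
# The embedding source and architectures are placeholders for illustration.
import torch
import torch.nn as nn

dim = 128
forward_model = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
backward_model = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

def cycle_loss(present, future):
    """Forward then backward (and backward then forward) must return to the start."""
    pred_future = forward_model(present)
    pred_present = backward_model(future)
    return (nn.functional.mse_loss(backward_model(pred_future), present)
            + nn.functional.mse_loss(forward_model(pred_present), future))

present = torch.randn(32, dim)     # placeholder frame embeddings at time t
future = torch.randn(32, dim)      # placeholder embeddings at a later time
loss = cycle_loss(present, future)
loss.backward()
```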
arXiv Detail & Related papers (2021-01-07T02:41:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.