Dynamics is what you need for time-series forecasting!
- URL: http://arxiv.org/abs/2507.15774v1
- Date: Mon, 21 Jul 2025 16:29:29 GMT
- Title: Dynamics is what you need for time-series forecasting!
- Authors: Alexis-Raja Brachet, Pierre-Yves Richard, Céline Hudelot
- Abstract summary: We develop an original $\texttt{PRO-DYN}$ nomenclature to analyze existing models through the lens of dynamics. We conduct extensive experiments to confirm our observations on a set of performance-varying models with diverse backbones.
- Score: 6.775534755081169
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: While boundaries between data modalities are vanishing, the usual successful deep models are still challenged by simple ones in the time-series forecasting task. Our hypothesis is that this task needs models that are able to learn the data's underlying dynamics. We propose to validate it through both systemic and empirical studies. We develop an original $\texttt{PRO-DYN}$ nomenclature to analyze existing models through the lens of dynamics. Two observations thus emerged: $\textbf{1}$. under-performing architectures learn dynamics at most partially, $\textbf{2}$. the location of the dynamics block at the model end is of prime importance. We conduct extensive experiments to confirm our observations on a set of performance-varying models with diverse backbones. Results support the need to incorporate a learnable dynamics block and its use as the final predictor.
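To make the takeaway concrete, here is a minimal sketch of a forecaster whose final block is a learnable dynamics model rolled out over the horizon, as the abstract's conclusion recommends. This is an illustrative reading, not the authors' code; the class, layer choices, and parameter names are assumptions.

```python
# Illustrative sketch (not the paper's code): a forecaster whose *final*
# block is a learnable dynamics model, per the PRO-DYN conclusion.
# All names and sizes here are assumptions for illustration.
import torch
import torch.nn as nn

class DynamicsForecaster(nn.Module):
    def __init__(self, n_series: int, lookback: int, d_hidden: int, horizon: int):
        super().__init__()
        self.horizon = horizon
        # Backbone: any encoder summarising the lookback window into a latent state.
        self.encoder = nn.Sequential(nn.Flatten(),
                                     nn.Linear(lookback * n_series, d_hidden), nn.ReLU())
        # Dynamics block placed at the model end and used as the final predictor:
        # a learned state-transition map rolled out over the forecast horizon.
        self.transition = nn.GRUCell(n_series, d_hidden)
        self.readout = nn.Linear(d_hidden, n_series)

    def forward(self, x):                       # x: (batch, lookback, n_series)
        state = self.encoder(x)                 # latent state from the whole window
        y, preds = x[:, -1, :], []
        for _ in range(self.horizon):           # roll the learned dynamics forward
            state = self.transition(y, state)
            y = self.readout(state)
            preds.append(y)
        return torch.stack(preds, dim=1)        # (batch, horizon, n_series)

model = DynamicsForecaster(n_series=7, lookback=96, d_hidden=64, horizon=24)
forecast = model(torch.randn(32, 96, 7))        # e.g. a 96-step lookback, 24-step horizon
```

Any backbone can replace the encoder; the point of the sketch is only that the last stage is a learned state-transition map rather than a static projection head.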
Related papers
- Intention-Conditioned Flow Occupancy Models [69.79049994662591]
Large-scale pre-training has fundamentally changed how machine learning research is done today. Applying this same framework to reinforcement learning is appealing because it offers compelling avenues for addressing core challenges in RL. Recent advances in generative AI have provided new tools for modeling highly complex distributions.
arXiv Detail & Related papers (2025-06-10T15:27:46Z) - Why Diffusion Models Don't Memorize: The Role of Implicit Dynamical Regularization in Training [8.824077990271503]
We investigate the role of the training dynamics in the transition from generalization to memorization. We find that $\tau_{\mathrm{mem}}$ increases linearly with the training set size $n$, while $\tau_{\mathrm{gen}}$ remains constant. It is only when $n$ becomes larger than a model-dependent threshold that overfitting disappears at infinite training times.
arXiv Detail & Related papers (2025-05-23T08:58:47Z) - Learning to Walk from Three Minutes of Real-World Data with Semi-structured Dynamics Models [9.318262213262866]
We introduce a novel framework for learning semi-structured dynamics models for contact-rich systems.
We make accurate long-horizon predictions with substantially less data than prior methods.
We validate our approach on a real-world Unitree Go1 quadruped robot.
arXiv Detail & Related papers (2024-10-11T18:11:21Z) - Neural Persistence Dynamics [8.197801260302642]
We consider the problem of learning the dynamics in the topology of time-evolving point clouds.
Our proposed model - $\textit{Neural Persistence Dynamics}$ - substantially outperforms the state-of-the-art across a diverse set of parameter regression tasks.
arXiv Detail & Related papers (2024-05-24T17:20:18Z) - Generalization capabilities and robustness of hybrid models grounded in physics compared to purely deep learning models [2.8686437689115363]
This study investigates the generalization capabilities and robustness of purely deep learning (DL) models and hybrid models based on physical principles in fluid dynamics applications. Three autoregressive models were compared: a hybrid model (POD-DL) that combines proper orthogonal decomposition (POD) with a long short-term memory (LSTM) layer, a convolutional autoencoder combined with a convolutional LSTM layer, and a variational autoencoder (VAE) combined with a ConvLSTM layer. While the VAE and ConvLSTM models accurately predicted laminar flow, the hybrid POD-DL model outperformed the others.
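As a rough illustration of the POD-DL idea described above (not the paper's implementation; the data, dimensions, and function names are placeholders), one can project snapshots onto POD modes obtained by SVD and forecast the resulting low-dimensional coefficients with an LSTM:

```python
# Illustrative sketch (assumptions, not the paper's code): a POD-DL style hybrid.
import numpy as np
import torch
import torch.nn as nn

snapshots = np.random.randn(500, 4096)          # (time, flattened flow field), placeholder data
mean = snapshots.mean(axis=0)
U, S, Vt = np.linalg.svd(snapshots - mean, full_matrices=False)
r = 10                                           # number of retained POD modes
modes = Vt[:r]                                   # (r, 4096) spatial POD modes
coeffs = (snapshots - mean) @ modes.T            # (time, r) temporal coefficients

lstm = nn.LSTM(input_size=r, hidden_size=64, batch_first=True)
head = nn.Linear(64, r)

def predict_next(window: np.ndarray) -> np.ndarray:
    """Predict the next POD coefficient vector from a window of past ones."""
    x = torch.tensor(window, dtype=torch.float32).unsqueeze(0)   # (1, T, r)
    out, _ = lstm(x)
    next_coeff = head(out[:, -1])                # use the last hidden state
    return next_coeff.detach().numpy()[0]

next_field = mean + predict_next(coeffs[-32:]) @ modes   # map back to physical space
```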
arXiv Detail & Related papers (2024-04-27T12:43:02Z) - Dynamic Latent Separation for Deep Learning [67.62190501599176]
A core problem in machine learning is to learn expressive latent variables for model prediction on complex data.
Here, we develop an approach that improves expressiveness, provides partial interpretation, and is not restricted to specific applications.
arXiv Detail & Related papers (2022-10-07T17:56:53Z) - Learning Differential Operators for Interpretable Time Series Modeling [34.32259687441212]
We propose a learning framework that can automatically obtain interpretable PDE models from sequential data.
Our model can provide valuable interpretability and achieve comparable performance to state-of-the-art models.
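The paper's framework is its own, but the general idea of recovering an interpretable PDE from sequential data can be sketched with a SINDy-style alternative (named here for contrast): build a library of candidate differential operators by finite differences and fit the time derivative by thresholded regression. Everything below (field shape, operator library, threshold) is an assumption for illustration.

```python
# Illustrative sketch (assumptions, not the paper's method): SINDy-style PDE identification.
import numpy as np

def identify_pde(u: np.ndarray, dx: float, dt: float, threshold: float = 0.05):
    """u: (n_t, n_x) space-time field.  Returns coefficients for [u, u_x, u_xx, u*u_x]."""
    u_t = np.gradient(u, dt, axis=0)             # finite-difference time derivative
    u_x = np.gradient(u, dx, axis=1)
    u_xx = np.gradient(u_x, dx, axis=1)
    library = np.stack([u, u_x, u_xx, u * u_x], axis=-1).reshape(-1, 4)
    target = u_t.reshape(-1)
    coeffs, *_ = np.linalg.lstsq(library, target, rcond=None)
    coeffs[np.abs(coeffs) < threshold] = 0.0     # hard-threshold small terms for interpretability
    return coeffs                                # nonzero entries name the governing operators
```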
arXiv Detail & Related papers (2022-09-03T20:14:31Z) - Learning Multi-Object Dynamics with Compositional Neural Radiance Fields [63.424469458529906]
We present a method to learn compositional predictive models from image observations based on implicit object encoders, Neural Radiance Fields (NeRFs), and graph neural networks.
NeRFs have become a popular choice for representing scenes due to their strong 3D prior.
For planning, we utilize RRTs in the learned latent space, where we can exploit our model and the implicit object encoder to make sampling the latent space informative and more efficient.
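A bare-bones sketch of RRT planning in a learned latent space is given below; `dynamics` and `sample_action` stand in for the paper's graph-network dynamics model and action sampler, and the sampling heuristics are assumptions, not the authors' method.

```python
# Illustrative sketch (assumptions, not the paper's code): RRT growth in latent space.
import numpy as np

def rrt_latent(z_start, z_goal, dynamics, sample_action, n_iters=2000, tol=0.2):
    """Grow a tree in latent space by expanding nodes through a learned dynamics model."""
    rng = np.random.default_rng(0)
    z_start, z_goal = np.asarray(z_start, float), np.asarray(z_goal, float)
    nodes, parents, actions = [z_start], {0: None}, {0: None}
    for _ in range(n_iters):
        # Bias sampling toward the goal; otherwise perturb a random existing node.
        z_rand = z_goal if rng.random() < 0.1 else \
            nodes[rng.integers(len(nodes))] + rng.normal(0.0, 0.5, size=z_start.shape)
        i_near = int(np.argmin([np.linalg.norm(z - z_rand) for z in nodes]))
        # Expand the nearest node with the candidate action whose predicted
        # next latent state lands closest to the sampled target.
        cands = [sample_action() for _ in range(8)]
        nxt = [dynamics(nodes[i_near], a) for a in cands]
        j = int(np.argmin([np.linalg.norm(z - z_rand) for z in nxt]))
        idx = len(nodes)
        parents[idx], actions[idx] = i_near, cands[j]
        nodes.append(np.asarray(nxt[j], float))
        if np.linalg.norm(nodes[idx] - z_goal) < tol:   # goal reached: recover the action plan
            plan, i = [], idx
            while parents[i] is not None:
                plan.append(actions[i]); i = parents[i]
            return plan[::-1]
    return None
```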
arXiv Detail & Related papers (2022-02-24T01:31:29Z) - Learning continuous models for continuous physics [94.42705784823997]
We develop a test based on numerical analysis theory to validate machine learning models for science and engineering applications.
Our results illustrate how principled numerical analysis methods can be coupled with existing ML training/testing methodologies to validate models for science and engineering applications.
arXiv Detail & Related papers (2022-02-17T07:56:46Z) - Which priors matter? Benchmarking models for learning latent dynamics [70.88999063639146]
Several methods have been proposed to integrate priors from classical mechanics into machine learning models.
We take a sober look at the current capabilities of these models.
We find that the use of continuous and time-reversible dynamics benefits models of all classes.
arXiv Detail & Related papers (2021-11-09T23:48:21Z) - Learning Dynamics Models for Model Predictive Agents [28.063080817465934]
Model-Based Reinforcement Learning involves learning a $\textit{dynamics}$ model from data, and then using this model to optimise behaviour.
This paper sets out to disambiguate the role of different design choices for learning dynamics models, by comparing their performance to planning with a ground-truth model.
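As background for the comparison the abstract describes, here is a minimal sketch of the standard model-based loop: fit a dynamics model to transitions, then plan against it with random-shooting MPC. The network sizes, reward interface, and action range are assumptions, not the paper's setup.

```python
# Illustrative sketch (assumptions, not the paper's code): dynamics model + random-shooting MPC.
import torch
import torch.nn as nn

class DynamicsModel(nn.Module):
    """Predicts the next state from the current state and action."""
    def __init__(self, s_dim, a_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(s_dim + a_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, s_dim))
    def forward(self, s, a):
        return s + self.net(torch.cat([s, a], dim=-1))   # predict the state delta

def plan_random_shooting(model, s0, a_dim, reward_fn, horizon=15, n_candidates=256):
    """Pick the first action of the best random action sequence under the model.

    reward_fn(states, actions) is assumed to return one reward per candidate.
    """
    s = torch.as_tensor(s0, dtype=torch.float32).repeat(n_candidates, 1)
    actions = torch.rand(n_candidates, horizon, a_dim) * 2 - 1    # actions in [-1, 1]
    returns = torch.zeros(n_candidates)
    with torch.no_grad():
        for t in range(horizon):
            s = model(s, actions[:, t])
            returns += reward_fn(s, actions[:, t])
    return actions[returns.argmax(), 0]                           # execute only the first action (MPC)
```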
arXiv Detail & Related papers (2021-09-29T09:50:25Z) - TCL: Transformer-based Dynamic Graph Modelling via Contrastive Learning [87.38675639186405]
We propose a novel graph neural network approach, called TCL, which deals with the dynamically-evolving graph in a continuous-time fashion.
To the best of our knowledge, this is the first attempt to apply contrastive learning to representation learning on dynamic graphs.
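For readers unfamiliar with the contrastive objective mentioned above, a generic InfoNCE-style loss over two views of the same temporal node embeddings looks roughly like this (an assumption for illustration, not TCL's exact objective):

```python
# Illustrative sketch (assumptions, not TCL's exact loss): InfoNCE over paired embeddings.
import torch
import torch.nn.functional as F

def info_nce(z_a: torch.Tensor, z_b: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    """z_a, z_b: (batch, dim) embeddings of two views of the same nodes/timestamps."""
    z_a, z_b = F.normalize(z_a, dim=-1), F.normalize(z_b, dim=-1)
    logits = z_a @ z_b.t() / temperature            # (batch, batch) similarity matrix
    targets = torch.arange(z_a.size(0))             # diagonal entries are the positive pairs
    return F.cross_entropy(logits, targets)

loss = info_nce(torch.randn(64, 128), torch.randn(64, 128))
```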
arXiv Detail & Related papers (2021-05-17T15:33:25Z) - Beyond Occam's Razor in System Identification: Double-Descent when Modeling Dynamics [0.0]
System identification aims to build models of dynamical systems from data.
It is typically observed that model validation performance follows a U-shaped curve as the model complexity increases.
Recent developments in machine learning and statistics have observed situations where a "double-descent" curve subsumes this U-shaped model-performance curve.
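A classic way to observe the double-descent shape the abstract refers to is a random-features least-squares fit whose number of features is swept through the interpolation threshold; the sketch below is such a toy experiment (data, feature counts, and noise level are arbitrary choices, not the paper's setup):

```python
# Illustrative sketch (assumptions, not the paper's experiment): validation error
# versus model complexity for a minimum-norm random-features fit; the error peaks
# near n_features ≈ n_train and descends again in the over-parameterised regime.
import numpy as np

rng = np.random.default_rng(0)
n_train, n_val = 100, 500
x_tr, x_va = rng.uniform(-1, 1, n_train), rng.uniform(-1, 1, n_val)
y_tr, y_va = np.sin(3 * x_tr) + 0.1 * rng.normal(size=n_train), np.sin(3 * x_va)

def random_features(x, w, b):
    return np.cos(np.outer(x, w) + b)            # random Fourier features

for n_feat in [5, 20, 50, 90, 100, 110, 200, 1000]:
    w, b = rng.normal(0, 3, n_feat), rng.uniform(0, 2 * np.pi, n_feat)
    phi_tr, phi_va = random_features(x_tr, w, b), random_features(x_va, w, b)
    theta = np.linalg.pinv(phi_tr) @ y_tr        # minimum-norm least-squares solution
    val_mse = np.mean((phi_va @ theta - y_va) ** 2)
    print(f"{n_feat:5d} features  ->  validation MSE {val_mse:.3f}")
```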
arXiv Detail & Related papers (2020-12-11T13:34:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.