Deep Dynamic Factor Models
- URL: http://arxiv.org/abs/2007.11887v2
- Date: Sat, 20 May 2023 14:37:33 GMT
- Title: Deep Dynamic Factor Models
- Authors: Paolo Andreini, Cosimo Izzo and Giovanni Ricco
- Abstract summary: A novel deep neural network framework, the Deep Dynamic Factor Model (D$^2$FM), encodes the information available from hundreds of macroeconomic and financial time series into a handful of unobserved latent states.
By design, the latent states of the model can still be interpreted as in a standard factor model.
- Score: 0.5156484100374059
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A novel deep neural network framework, which we refer to as the Deep
Dynamic Factor Model (D$^2$FM), is able to encode the information available from
hundreds of macroeconomic and financial time series into a handful of
unobserved latent states. While similar in spirit to traditional dynamic factor
models (DFMs), this new class of models, unlike those, allows for
nonlinearities between factors and observables due to the autoencoder neural
network structure. By design, however, the latent states of the model can still
be interpreted as in a standard factor model. Both in a fully real-time
out-of-sample nowcasting and forecasting exercise with US data and in a Monte
Carlo experiment, the D$^2$FM improves on the performance of a
state-of-the-art DFM.
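As a concrete picture of the autoencoder structure described above, here is a minimal PyTorch sketch: a nonlinear encoder compresses the panel into a few latent factors, and a linear decoder maps them back so the decoder weights can be read as factor loadings, as in a standard DFM. The layer sizes, activation, and purely linear decoder are our assumptions for illustration, not the authors' exact specification.

```python
import torch
import torch.nn as nn

class DeepDynamicFactorModel(nn.Module):
    """Illustrative autoencoder-style factor model (not the authors' exact spec).

    A nonlinear encoder compresses many series into a few latent factors;
    a linear decoder maps the factors back to the observables, so its
    weights play the role of factor loadings.
    """

    def __init__(self, n_series: int = 100, n_factors: int = 4):
        super().__init__()
        self.encoder = nn.Sequential(          # nonlinear factor extraction
            nn.Linear(n_series, 64),
            nn.Tanh(),
            nn.Linear(64, n_factors),
        )
        self.decoder = nn.Linear(n_factors, n_series)  # linear "loadings"

    def forward(self, x: torch.Tensor):
        factors = self.encoder(x)              # latent states f_t
        x_hat = self.decoder(factors)          # reconstruction Lambda f_t
        return x_hat, factors

# Toy usage: reconstruct a panel of 100 standardized series.
model = DeepDynamicFactorModel()
x = torch.randn(32, 100)                       # 32 time periods, 100 series
x_hat, factors = model(x)
loss = nn.functional.mse_loss(x_hat, x)        # reconstruction objective
```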
Related papers
- Multi-Head Self-Attending Neural Tucker Factorization [5.734615417239977]
We introduce a neural network-based tensor factorization approach tailored for learning representations of high-dimensional and incomplete (HDI) tensors.
The proposed MSNTucF model demonstrates superior performance compared to state-of-the-art benchmark models in estimating missing observations.
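To make the factorization idea concrete, the sketch below fits a plain Tucker decomposition to the observed entries of an incomplete tensor by gradient descent; the paper's MSNTucF model additionally applies multi-head self-attention, which this illustration omits. The shapes, rank, and optimizer settings are our assumptions.

```python
import torch

# Plain Tucker factorization fit only to the observed entries of an
# incomplete (HDI) tensor; MSNTucF's self-attention machinery is omitted.
I, J, K, r = 20, 30, 40, 4
X = torch.randn(I, J, K)                       # toy data tensor
mask = torch.rand(I, J, K) < 0.1               # only ~10% of entries observed

core = torch.randn(r, r, r, requires_grad=True)
A = torch.randn(I, r, requires_grad=True)
B = torch.randn(J, r, requires_grad=True)
C = torch.randn(K, r, requires_grad=True)
opt = torch.optim.Adam([core, A, B, C], lr=0.01)

for step in range(200):
    # Tucker reconstruction: mode-1/2/3 products of the core with A, B, C
    X_hat = torch.einsum("pqs,ip,jq,ks->ijk", core, A, B, C)
    loss = ((X_hat - X)[mask] ** 2).mean()     # fit observed entries only
    opt.zero_grad()
    loss.backward()
    opt.step()
```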
arXiv Detail & Related papers (2025-01-16T13:04:15Z)
- Deep Learning for Koopman Operator Estimation in Idealized Atmospheric Dynamics [2.2489531925874013]
Deep learning is revolutionizing weather forecasting, with new data-driven models achieving accuracy on par with operational physical models for medium-term predictions.
These models often lack interpretability, making their underlying dynamics difficult to understand and explain.
This paper proposes methodologies to estimate the Koopman operator, providing a linear representation of complex nonlinear dynamics to enhance the transparency of data-driven models.
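A minimal flavor of Koopman operator estimation: lift the state with a dictionary of observables and fit a linear operator in the lifted space by least squares (the classical EDMD recipe; the paper learns the dictionary with deep networks). The toy dynamics and the hand-picked dictionary below are our assumptions.

```python
import numpy as np

# EDMD-style sketch: approximate the Koopman operator on a fixed dictionary
# of observables, giving a linear view of nonlinear dynamics.
def dictionary(x):
    # lift the scalar state to observables: [x, x^2, sin(x)]
    return np.stack([x, x**2, np.sin(x)], axis=1)

# toy nonlinear system: x_{t+1} = 0.9 x_t - 0.1 x_t^2
x = np.zeros(500)
x[0] = 0.5
for t in range(499):
    x[t + 1] = 0.9 * x[t] - 0.1 * x[t] ** 2

Psi_now, Psi_next = dictionary(x[:-1]), dictionary(x[1:])
# least-squares fit: Psi_next ~= Psi_now @ K  (linear in the lifted space)
K, *_ = np.linalg.lstsq(Psi_now, Psi_next, rcond=None)
eigvals = np.linalg.eigvals(K)                 # approximate Koopman spectrum
```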
arXiv Detail & Related papers (2024-09-10T13:56:54Z)
- Latent Space Energy-based Neural ODEs [73.01344439786524]
This paper introduces novel deep dynamical models designed to represent continuous-time sequences.
We train the model using maximum likelihood estimation with Markov chain Monte Carlo.
Experimental results on oscillating systems, videos and real-world state sequences (MuJoCo) demonstrate that our model with the learnable energy-based prior outperforms existing counterparts.
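One MCMC ingredient of such maximum-likelihood training is drawing from the learnable energy-based prior, for instance with Langevin dynamics. The sketch below shows that step in isolation; the energy network, step size, and step count are our assumptions.

```python
import torch
import torch.nn as nn

# Minimal sketch of sampling an energy-based latent prior with Langevin
# dynamics, one MCMC building block of maximum-likelihood training here.
energy = nn.Sequential(nn.Linear(4, 32), nn.SiLU(), nn.Linear(32, 1))

def langevin_sample(n: int, steps: int = 30, eps: float = 0.1):
    z = torch.randn(n, 4)                      # initialize from a Gaussian
    for _ in range(steps):
        z = z.detach().requires_grad_(True)
        grad = torch.autograd.grad(energy(z).sum(), z)[0]
        # gradient step on the energy plus injected noise
        z = z - 0.5 * eps**2 * grad + eps * torch.randn_like(z)
    return z.detach()

z0 = langevin_sample(16)                       # prior draws for the latent model
```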
arXiv Detail & Related papers (2024-09-05T18:14:22Z) - Learning Differential Operators for Interpretable Time Series Modeling [34.32259687441212]
We propose a learning framework that can automatically obtain interpretable PDE models from sequential data.
Our model can provide valuable interpretability and achieve comparable performance to state-of-the-art models.
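In the same spirit, a sparse-regression sketch (SINDy-style, not the paper's exact framework) shows how an interpretable differential operator can be read off sequential data: regress the time derivative on a library of candidate terms and threshold to a sparse subset. The toy data and library are our assumptions.

```python
import numpy as np

# SINDy-style sketch: recover an interpretable operator from a trajectory.
t = np.linspace(0, 10, 1000)
u = np.exp(-0.5 * t)                           # toy data solving du/dt = -0.5 u
dudt = np.gradient(u, t)                       # numerical time derivative

library = np.stack([np.ones_like(u), u, u**2], axis=1)  # candidates [1, u, u^2]
coefs, *_ = np.linalg.lstsq(library, dudt, rcond=None)
coefs[np.abs(coefs) < 0.05] = 0.0              # hard-threshold for sparsity
# coefs ~ [0, -0.5, 0]: the recovered operator reads du/dt = -0.5 u
```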
arXiv Detail & Related papers (2022-09-03T20:14:31Z) - EINNs: Epidemiologically-Informed Neural Networks [75.34199997857341]
We introduce a new class of physics-informed neural networks, called EINNs, crafted for epidemic forecasting.
We investigate how to leverage both the theoretical flexibility provided by mechanistic models and the data-driven expressibility afforded by AI models.
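A hedged sketch of the physics-informed ingredient: a network maps time to epidemic states, and the loss mixes a data-fit term with the residual of a mechanistic SIR ODE. The architecture, SIR constants, and placeholder data below are our assumptions, not the paper's exact construction.

```python
import torch
import torch.nn as nn

# Physics-informed sketch: fit data while penalizing the SIR ODE residual.
net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 3))
beta, gamma = 0.3, 0.1                         # assumed transmission / recovery

def sir_residual(t: torch.Tensor) -> torch.Tensor:
    t = t.requires_grad_(True)
    S, I, R = net(t).unbind(dim=1)             # predicted epidemic states
    dS = torch.autograd.grad(S.sum(), t, create_graph=True)[0].squeeze(1)
    dI = torch.autograd.grad(I.sum(), t, create_graph=True)[0].squeeze(1)
    dR = torch.autograd.grad(R.sum(), t, create_graph=True)[0].squeeze(1)
    rS = dS + beta * S * I                     # dS/dt = -beta S I
    rI = dI - beta * S * I + gamma * I         # dI/dt =  beta S I - gamma I
    rR = dR - gamma * I                        # dR/dt =  gamma I
    return (rS**2 + rI**2 + rR**2).mean()

t_obs = torch.rand(64, 1)
I_obs = torch.rand(64)                         # placeholder case counts
data_loss = ((net(t_obs)[:, 1] - I_obs) ** 2).mean()
loss = data_loss + 1.0 * sir_residual(torch.rand(128, 1))
loss.backward()
```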
arXiv Detail & Related papers (2022-02-21T18:59:03Z) - Closed-form Continuous-Depth Models [99.40335716948101]
Continuous-depth neural models rely on advanced numerical differential equation solvers.
We present a new family of models, termed Closed-form Continuous-depth (CfC) networks, that are simple to describe and at least one order of magnitude faster.
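The gist, in a hedged paraphrase of the paper's gating: the hidden state at elapsed time t is computed in closed form, with a time-dependent gate blending two learned branches instead of invoking an ODE solver. The layer sizes and activations below are our assumptions, not the exact CfC parameterization.

```python
import torch
import torch.nn as nn

class CfCCell(nn.Module):
    """Sketch of a closed-form continuous-depth update: no ODE solver calls."""

    def __init__(self, in_dim: int, hidden: int):
        super().__init__()
        self.f = nn.Linear(in_dim + hidden, hidden)   # controls the time gate
        self.g = nn.Linear(in_dim + hidden, hidden)   # "early" branch
        self.h = nn.Linear(in_dim + hidden, hidden)   # "late" branch

    def forward(self, x, state, t: float):
        z = torch.cat([x, state], dim=1)
        gate = torch.sigmoid(-self.f(z) * t)          # closes as t grows
        return gate * torch.tanh(self.g(z)) + (1 - gate) * torch.tanh(self.h(z))

cell = CfCCell(in_dim=8, hidden=16)
state = torch.zeros(4, 16)
for dt in [0.1, 0.5, 1.0]:                            # irregular time gaps
    x = torch.randn(4, 8)
    state = cell(x, state, dt)                        # closed-form update
```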
arXiv Detail & Related papers (2021-06-25T22:08:51Z) - Sparse Flows: Pruning Continuous-depth Models [107.98191032466544]
We show that pruning improves generalization for neural ODEs in generative modeling.
We also show that pruning finds minimal and efficient neural ODE representations with up to 98% fewer parameters than the original network, without loss of accuracy.
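A generic magnitude-pruning sketch in this spirit (the authors' exact procedure and schedule differ): prune the weights of a neural ODE's vector-field network by global L1 magnitude and measure the resulting sparsity. The network shape and pruning amount are our assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# The vector field f(x) of a neural ODE dx/dt = f(x), as a small MLP.
vector_field = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 2))

# Global L1 magnitude pruning across all linear layers.
params = [(m, "weight") for m in vector_field if isinstance(m, nn.Linear)]
prune.global_unstructured(params, pruning_method=prune.L1Unstructured, amount=0.9)

total = sum(m.weight.nelement() for m, _ in params)
zeros = sum(int((m.weight == 0).sum()) for m, _ in params)
print(f"sparsity: {zeros / total:.0%}")        # ~90% of weights removed
```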
arXiv Detail & Related papers (2021-06-24T01:40:17Z) - Anomaly Detection of Time Series with Smoothness-Inducing Sequential
Variational Auto-Encoder [59.69303945834122]
We present a Smoothness-Inducing Sequential Variational Auto-Encoder (SISVAE) model for robust estimation and anomaly detection of time series.
Our model parameterizes mean and variance for each time-stamp with flexible neural networks.
We show the effectiveness of our model on both synthetic datasets and public real-world benchmarks.
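The per-time-stamp parameterization can be sketched in a simplified, non-variational form: a recurrent network emits a mean and variance for every step, trained with a Gaussian likelihood plus a penalty discouraging abrupt changes in the inferred mean. The layer sizes and penalty weight are our assumptions, and the full SISVAE adds the variational machinery this sketch omits.

```python
import torch
import torch.nn as nn

# Per-time-stamp Gaussian outputs with a smoothness-inducing penalty.
rnn = nn.GRU(input_size=1, hidden_size=32, batch_first=True)
head = nn.Linear(32, 2)                        # -> (mean, log-variance)

x = torch.randn(8, 100, 1)                     # 8 series, 100 time stamps
h, _ = rnn(x)
mean, log_var = head(h).unbind(dim=2)          # one Gaussian per time stamp

nll = 0.5 * (log_var + (x.squeeze(2) - mean) ** 2 / log_var.exp()).mean()
smooth = ((mean[:, 1:] - mean[:, :-1]) ** 2).mean()   # smoothness-inducing term
loss = nll + 0.1 * smooth

# Anomaly score per step: deviation scaled by the predicted uncertainty.
score = (x.squeeze(2) - mean).abs() / log_var.exp().sqrt()
```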
arXiv Detail & Related papers (2021-02-02T06:15:15Z)
- Generative Temporal Difference Learning for Infinite-Horizon Prediction [101.59882753763888]
We introduce the $\gamma$-model, a predictive model of environment dynamics with an infinite probabilistic horizon.
We discuss how its training reflects an inescapable tradeoff between training-time and testing-time compounding errors.
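Conceptually, a sample from the $\gamma$-model's bootstrapped target is the next state with probability $1-\gamma$, and otherwise a sample drawn from the model itself at that next state. The toy random-walk sketch below illustrates this; the dynamics and the stand-in "learned" model are our assumptions.

```python
import numpy as np

# Conceptual sketch of the gamma-model's bootstrapped TD target on a toy
# random walk; gamma_model_sample stands in for the learned generative model.
gamma = 0.9
rng = np.random.default_rng(0)

def step(s: float) -> float:                   # single-step dynamics model
    return s + rng.normal(0.0, 1.0)

def gamma_model_sample(s: float) -> float:
    # exact for this toy chain: roll the walk a Geometric(1 - gamma) number
    # of steps, matching the discounted state-occupancy distribution
    for _ in range(rng.geometric(1 - gamma)):
        s = step(s)
    return s

def td_target_sample(s: float) -> float:
    s_next = step(s)
    if rng.random() < 1 - gamma:               # terminate: return next state
        return s_next
    return gamma_model_sample(s_next)          # bootstrap from the model itself

samples = [td_target_sample(0.0) for _ in range(5)]
```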
arXiv Detail & Related papers (2020-10-27T17:54:12Z)