Deep Explicit Duration Switching Models for Time Series
- URL: http://arxiv.org/abs/2110.13878v1
- Date: Tue, 26 Oct 2021 17:35:21 GMT
- Title: Deep Explicit Duration Switching Models for Time Series
- Authors: Abdul Fatir Ansari, Konstantinos Benidis, Richard Kurle, Ali Caner Turkmen, Harold Soh, Alexander J. Smola, Yuyang Wang, Tim Januschowski
- Abstract summary: We propose a flexible model that is capable of identifying both state- and time-dependent switching dynamics.
State-dependent switching is enabled by a recurrent state-to-switch connection.
An explicit duration count variable is used to improve the time-dependent switching behavior.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many complex time series can be effectively subdivided into distinct regimes
that exhibit persistent dynamics. Discovering the switching behavior and the
statistical patterns in these regimes is important for understanding the
underlying dynamical system. We propose the Recurrent Explicit Duration
Switching Dynamical System (RED-SDS), a flexible model that is capable of
identifying both state- and time-dependent switching dynamics. State-dependent
switching is enabled by a recurrent state-to-switch connection and an explicit
duration count variable is used to improve the time-dependent switching
behavior. We demonstrate how to perform efficient inference using a hybrid
algorithm that approximates the posterior of the continuous states via an
inference network and performs exact inference for the discrete switches and
counts. The model is trained by maximizing a Monte Carlo lower bound of the
marginal log-likelihood that can be computed efficiently as a byproduct of the
inference routine. Empirical results on multiple datasets demonstrate that
RED-SDS achieves considerable improvement in time series segmentation and
competitive forecasting performance against the state of the art.
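To make the two switching mechanisms concrete, here is a minimal NumPy sketch of one generative step of an explicit-duration switching dynamical system with a recurrent state-to-switch connection. This is a sketch under assumed simplifications: the per-regime linear dynamics, the linear state-to-switch readout, and all names and shapes (A, b, P_switch, P_dur, step) are illustrative, not the exact RED-SDS parameterization.
```python
# Minimal sketch (NumPy) of one generative step of an explicit-duration
# switching dynamical system with a recurrent state-to-switch connection.
# All parameter choices below are illustrative assumptions, not the
# paper's exact RED-SDS parameterization.
import numpy as np

rng = np.random.default_rng(0)

K = 3        # number of discrete switches (regimes)
D_MAX = 20   # maximum explicit duration
DIM = 2      # continuous state dimension

# Assumed per-regime linear dynamics: x_t = A[k] x_{t-1} + b[k] + noise.
A = np.eye(DIM) + rng.normal(scale=0.1, size=(K, DIM, DIM))
b = rng.normal(scale=0.1, size=(K, DIM))

# Switch transition matrix and per-regime duration distribution (assumed).
P_switch = rng.dirichlet(np.ones(K), size=K)   # (K, K)
P_dur = rng.dirichlet(np.ones(D_MAX), size=K)  # (K, D_MAX)

def state_to_switch_logits(x):
    """Recurrent state-to-switch connection: the continuous state biases
    which regime comes next (a hypothetical linear readout)."""
    W = 0.05 * np.arange(K * DIM).reshape(K, DIM)  # toy readout weights
    return W @ x

def step(x, k, c):
    """Advance (state x, switch k, remaining-duration count c) by one step."""
    if c > 0:
        # Duration still running: stay in regime k and decrement the counter
        # (this is what enforces persistent, time-dependent regimes).
        k_next, c_next = k, c - 1
    else:
        # Counter expired: sample a new switch conditioned on the previous
        # switch AND the continuous state (state-dependent switching),
        # then draw a fresh duration count for the new regime.
        logits = np.log(P_switch[k] + 1e-8) + state_to_switch_logits(x)
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        k_next = rng.choice(K, p=probs)
        c_next = rng.choice(D_MAX, p=P_dur[k_next])
    x_next = A[k_next] @ x + b[k_next] + rng.normal(scale=0.05, size=DIM)
    return x_next, k_next, c_next

# Roll out a short trajectory.
x, k, c = np.zeros(DIM), 0, 5
for t in range(50):
    x, k, c = step(x, k, c)
```
The sketch covers only the generative direction. Per the abstract, inference in RED-SDS pairs an amortized inference network for the continuous states with exact marginalization over the discrete switch and count variables, and training maximizes a Monte Carlo lower bound obtained as a byproduct of that routine.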
Related papers
- Amortized Control of Continuous State Space Feynman-Kac Model for Irregular Time Series
Many real-world datasets, such as those in healthcare, climate, and economics, are often collected as irregular time series.
We propose the Amortized Control of Continuous State Space Model (ACSSM) for continuous dynamical modeling of time series.
arXiv Detail & Related papers (2024-10-08T01:27:46Z)
- When to Sense and Control? A Time-adaptive Approach for Continuous-Time RL
Reinforcement learning (RL) excels in optimizing policies for discrete-time Markov decision processes (MDPs).
We formalize an RL framework, Time-adaptive Control & Sensing (TaCoS), that tackles this challenge.
We demonstrate that state-of-the-art RL algorithms trained on TaCoS drastically reduce the number of interactions compared to their discrete-time counterparts.
arXiv Detail & Related papers (2024-06-03T09:57:18Z)
- A Poisson-Gamma Dynamic Factor Model with Time-Varying Transition Dynamics
A non-stationary PGDS is proposed to allow the underlying transition matrices to evolve over time.
A fully-conjugate and efficient Gibbs sampler is developed to perform posterior simulation.
Experiments show that, in comparison with related models, the proposed non-stationary PGDS achieves improved predictive performance.
arXiv Detail & Related papers (2024-02-26T04:39:01Z)
- Causal Temporal Regime Structure Learning
We introduce a new optimization-based method that concurrently learns the Directed Acyclic Graph (DAG) for each regime.
We conduct extensive experiments and show that our method consistently outperforms causal discovery models across various settings.
arXiv Detail & Related papers (2023-11-02T17:26:49Z)
- Gated Recurrent Neural Networks with Weighted Time-Delay Feedback
We introduce a novel gated recurrent unit (GRU) with a weighted time-delay feedback mechanism.
We show that $\tau$-GRU can converge faster and generalize better than state-of-the-art recurrent units and gated recurrent architectures.
arXiv Detail & Related papers (2022-12-01T02:26:34Z)
- Multi-scale Attention Flow for Probabilistic Time Series Forecasting
We propose a novel non-autoregressive deep learning model, called Multi-scale Attention Normalizing Flow (MANF).
Our model avoids the influence of cumulative error and does not increase the time complexity.
Our model achieves state-of-the-art performance on many popular multivariate datasets.
arXiv Detail & Related papers (2022-05-16T07:53:42Z)
- Deep Switching State Space Model (DS$^3$M) for Nonlinear Time Series Forecasting with Regime Switching
We propose a deep switching state space model (DS$^3$M) for efficient inference and forecasting of nonlinear time series.
The switching among regimes is captured by both discrete and continuous latent variables with recurrent neural networks.
arXiv Detail & Related papers (2021-06-04T08:25:47Z)
- Learning Continuous-Time Dynamics by Stochastic Differential Networks
We propose a flexible continuous-time recurrent neural network named Variational Stochastic Differential Networks (VSDN).
VSDN embeds the complicated dynamics of sporadic time series using neural Stochastic Differential Equations (SDEs).
We show that VSDNs outperform state-of-the-art continuous-time deep learning models and achieve remarkable performance on prediction and interpolation tasks for sporadic time series.
arXiv Detail & Related papers (2020-06-11T01:40:34Z)
- Liquid Time-constant Networks
We introduce a new class of time-continuous recurrent neural network models.
Instead of declaring a learning system's dynamics by implicit nonlinearities, we construct networks of linear first-order dynamical systems.
These neural networks exhibit stable and bounded behavior and yield superior expressivity within the family of neural ordinary differential equations.
arXiv Detail & Related papers (2020-06-08T09:53:35Z)