Multivariate Temporal Autoencoder for Predictive Reconstruction of Deep Sequences
- URL: http://arxiv.org/abs/2010.03661v1
- Date: Wed, 7 Oct 2020 21:25:35 GMT
- Title: Multivariate Temporal Autoencoder for Predictive Reconstruction of Deep Sequences
- Authors: Jakob Aungiers
- Abstract summary: Time series sequence prediction and modelling have proven to be a challenging endeavor on real-world datasets.
Two key issues are the multi-dimensionality of the data and the interaction of independent dimensions in forming a latent output signal.
This paper proposes a multi-branch deep neural network approach that tackles these problems by modelling a latent state vector representation of data windows.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Time series sequence prediction and modelling have proven to be a
challenging endeavor on real-world datasets. Key issues include the
multi-dimensionality of the data, the interaction of independent dimensions in
forming a latent output signal, and the representation of multi-dimensional
temporal data inside a predictive model. This paper proposes a multi-branch
deep neural network approach that tackles these problems by modelling a latent
state vector representation of data windows through a recurrent autoencoder
branch and subsequently feeding the trained latent vector representation into
a predictor branch of the model. This model is henceforth referred to as the
Multivariate Temporal Autoencoder (MvTAe). The framework in this paper uses a
synthetic multivariate temporal dataset whose dimensions combine to create a
hidden output target.
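As a concrete illustration of the two-branch structure described in the abstract, here is a minimal PyTorch sketch: an LSTM autoencoder compresses a multivariate window into a latent vector, and a small predictor head maps that vector to the hidden output target. All layer types, sizes, and names are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class MvTAe(nn.Module):
    """Illustrative two-branch model: a recurrent autoencoder branch learns a
    latent vector for each data window, and a predictor branch maps that
    latent vector to the hidden output target."""

    def __init__(self, n_dims: int, window: int, latent: int = 32):
        super().__init__()
        self.window = window
        self.encoder = nn.LSTM(n_dims, latent, batch_first=True)
        self.decoder = nn.LSTM(latent, n_dims, batch_first=True)
        self.predictor = nn.Sequential(
            nn.Linear(latent, latent), nn.ReLU(), nn.Linear(latent, 1)
        )

    def forward(self, x):                # x: (batch, window, n_dims)
        _, (h, _) = self.encoder(x)      # final hidden state summarizes the window
        z = h.squeeze(0)                 # latent window representation
        # Repeat the latent vector at every step so the decoder can
        # reconstruct the full input window from it alone.
        recon, _ = self.decoder(z.unsqueeze(1).repeat(1, self.window, 1))
        y_hat = self.predictor(z)        # predictor branch output
        return recon, y_hat

model = MvTAe(n_dims=3, window=50)
x = torch.randn(8, 50, 3)                # a batch of synthetic data windows
recon, y_hat = model(x)
print(recon.shape, y_hat.shape)          # (8, 50, 3) and (8, 1)
```

Training would combine a reconstruction loss on `recon` with a prediction loss on `y_hat`; whether the branches are trained jointly or sequentially (the abstract suggests the autoencoder is trained first) is a design choice left open here.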
Related papers
- PRformer: Pyramidal Recurrent Transformer for Multivariate Time Series Forecasting [82.03373838627606]
Self-attention mechanism in Transformer architecture requires positional embeddings to encode temporal order in time series prediction.
We argue that this reliance on positional embeddings restricts the Transformer's ability to effectively represent temporal sequences.
We present a model integrating PRE (pyramidal recurrent embeddings) with a standard Transformer encoder, demonstrating state-of-the-art performance on various real-world datasets; a toy version of this substitution is sketched after this entry.
arXiv Detail & Related papers (2024-08-20T01:56:07Z)
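The sketch below assumes the simplest possible recurrent embedding: a GRU whose hidden state at each step encodes temporal order, feeding a standard Transformer encoder with no positional embeddings. The pyramidal multi-scale structure of the actual PRE is omitted, and all sizes are placeholders.

```python
import torch
import torch.nn as nn

class RecurrentEmbedding(nn.Module):
    """Toy stand-in for PRE: temporal order is carried by a GRU's hidden
    states rather than by additive positional embeddings."""

    def __init__(self, n_dims: int, d_model: int):
        super().__init__()
        self.rnn = nn.GRU(n_dims, d_model, batch_first=True)

    def forward(self, x):            # x: (batch, time, n_dims)
        out, _ = self.rnn(x)         # each step's state reflects its position
        return out                   # (batch, time, d_model)

embed = RecurrentEmbedding(n_dims=7, d_model=64)
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True),
    num_layers=2,
)
x = torch.randn(8, 96, 7)            # (batch, time, variables)
h = encoder(embed(x))                # no positional embeddings added
print(h.shape)                       # torch.Size([8, 96, 64])
```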
- ARFA: An Asymmetric Receptive Field Autoencoder Model for Spatiotemporal Prediction [55.30913411696375]
We propose an Asymmetric Receptive Field Autoencoder (ARFA) model, which pairs the encoder and decoder with receptive-field modules of correspondingly different sizes.
In the encoder, we present a large-kernel module for global spatiotemporal feature extraction. In the decoder, we develop a small-kernel module for local spatiotemporal reconstruction (see the sketch after this entry).
We construct RainBench, a large-scale radar echo dataset for precipitation prediction, to address the scarcity of meteorological data in the domain.
arXiv Detail & Related papers (2023-09-01T07:55:53Z)
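A minimal sketch of the asymmetric-receptive-field idea from the ARFA entry above: a large-kernel convolutional encoder for global feature extraction paired with a small-kernel convolutional decoder for local reconstruction. Kernel sizes, channel widths, and the single-frame input are illustrative assumptions, not the paper's modules.

```python
import torch
import torch.nn as nn

# Large kernels in the encoder give a wide (global) receptive field;
# small kernels in the decoder keep the reconstruction local.
encoder = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=7, padding=3),
    nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=7, padding=3),
    nn.ReLU(),
)
decoder = nn.Sequential(
    nn.Conv2d(32, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(16, 1, kernel_size=3, padding=1),
)

frame = torch.randn(4, 1, 64, 64)    # e.g. a batch of radar echo frames
recon = decoder(encoder(frame))
print(recon.shape)                   # torch.Size([4, 1, 64, 64])
```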
- VTAE: Variational Transformer Autoencoder with Manifolds Learning [144.0546653941249]
Deep generative models have demonstrated successful applications in learning non-linear data distributions through a number of latent variables.
The nonlinearity of the generator implies that the latent space shows an unsatisfactory projection of the data space, which results in poor representation learning.
We show that geodesics and accurate computation can substantially improve the performance of deep generative models.
arXiv Detail & Related papers (2023-04-03T13:13:19Z)
- Unsupervised Multiple-Object Tracking with a Dynamical Variational Autoencoder [25.293475313066967]
We present an unsupervised probabilistic model and associated estimation algorithm for multi-object tracking (MOT) based on a dynamical variational autoencoder (DVAE).
DVAE is a latent-variable deep generative model that can be seen as an extension of the variational autoencoder for the modeling of temporal sequences.
It is included in DVAE-UMOT to model the objects' dynamics, after being pre-trained on an unlabeled synthetic dataset of single-object trajectories.
arXiv Detail & Related papers (2022-02-18T17:27:27Z)
- Contrastive predictive coding for Anomaly Detection in Multi-variate Time Series Data [6.463941665276371]
We propose Time-series Representational Learning through Contrastive Predictive Coding (TRL-CPC) for anomaly detection in multivariate time series (MVTS) data; a minimal sketch of the idea follows this entry.
First, we jointly optimize an encoder, an auto-regressor and a non-linear transformation function to effectively learn the representations of the MVTS data sets.
arXiv Detail & Related papers (2022-02-08T04:25:29Z)
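The following sketch shows the contrastive-predictive-coding recipe named above: an encoder maps each time step to a latent, a GRU autoregressor summarizes the past, and a non-linear transform predicts the next latent, scored against in-batch negatives with an InfoNCE-style loss. All component shapes are placeholder assumptions, not TRL-CPC's actual configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

enc = nn.Sequential(nn.Linear(5, 32), nn.ReLU(), nn.Linear(32, 16))    # encoder
ar = nn.GRU(16, 16, batch_first=True)                                  # autoregressor
proj = nn.Sequential(nn.Linear(16, 16), nn.ReLU(), nn.Linear(16, 16))  # non-linear transform

x = torch.randn(8, 20, 5)              # (batch, time, variables)
z = enc(x)                             # per-step latents: (8, 20, 16)
c, _ = ar(z[:, :-1])                   # context summarizing the past
pred = proj(c[:, -1])                  # predicted latent for the final step
target = z[:, -1]                      # true latent for the final step

# InfoNCE-style objective: each sequence's own future is the positive,
# the other sequences in the batch serve as negatives.
logits = pred @ target.t()             # (8, 8), positives on the diagonal
loss = F.cross_entropy(logits, torch.arange(8))
print(loss.item())
```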
- Robust Audio Anomaly Detection [10.75127981612396]
The presented approach does not assume the presence of labeled anomalies in the training dataset.
The temporal dynamics are modeled using recurrent layers augmented with an attention mechanism.
The output of the network is an outlier-robust probability density function.
arXiv Detail & Related papers (2022-02-03T17:19:42Z)
- Anomaly Detection of Time Series with Smoothness-Inducing Sequential Variational Auto-Encoder [59.69303945834122]
We present a Smoothness-Inducing Sequential Variational Auto-Encoder (SISVAE) model for robust estimation and anomaly detection of time series.
Our model parameterizes the mean and variance for each time-stamp with flexible neural networks; a sketch of this per-time-stamp output with a smoothness penalty follows this entry.
We show the effectiveness of our model on both synthetic datasets and public real-world benchmarks.
arXiv Detail & Related papers (2021-02-02T06:15:15Z)
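A rough sketch of the per-time-stamp parameterization with a smoothness penalty, as promised in the SISVAE entry above. A linear head emits a mean and log-variance at every step, and a KL divergence between the Gaussians at adjacent steps discourages abrupt changes; the head, the penalty weight, and the plain Gaussian likelihood are all illustrative assumptions.

```python
import torch
import torch.nn as nn

head = nn.Linear(16, 2)                   # -> (mean, log-variance) per time-stamp

h = torch.randn(8, 50, 16)                # hidden states: (batch, time, features)
x = torch.randn(8, 50, 1)                 # observed series (placeholder data)
mu, logvar = head(h).chunk(2, dim=-1)     # each (8, 50, 1)

def gauss_kl(mu1, lv1, mu2, lv2):
    """KL( N(mu1, e^lv1) || N(mu2, e^lv2) ), element-wise."""
    return 0.5 * (lv2 - lv1 + (lv1.exp() + (mu1 - mu2) ** 2) / lv2.exp() - 1)

nll = 0.5 * (logvar + (x - mu) ** 2 / logvar.exp()).mean()   # Gaussian NLL
smooth = gauss_kl(mu[:, 1:], logvar[:, 1:],                  # adjacent steps
                  mu[:, :-1], logvar[:, :-1]).mean()
loss = nll + 0.1 * smooth                 # 0.1 is an arbitrary penalty weight
print(loss.item())
```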
- Deep Cellular Recurrent Network for Efficient Analysis of Time-Series Data with Spatial Information [52.635997570873194]
This work proposes a novel deep cellular recurrent neural network (DCRNN) architecture to process complex multi-dimensional time series data with spatial information.
The proposed architecture achieves state-of-the-art performance while using substantially fewer trainable parameters than comparable methods in the literature.
arXiv Detail & Related papers (2021-01-12T20:08:18Z)
- Dynamical Variational Autoencoders: A Comprehensive Review [23.25573952809074]
We introduce and discuss a general class of models, called dynamical variational autoencoders (DVAEs); a schematic one-step DVAE cell is sketched after this entry.
We present in detail seven recently proposed DVAE models, aiming to homogenize their notation and presentation.
We have reimplemented those seven DVAE models and present the results of an experimental benchmark conducted on the speech analysis-resynthesis task.
arXiv Detail & Related papers (2020-08-28T11:49:33Z)
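To make the DVAE class concrete, here is a schematic one-step cell assuming the simplest temporal dependency: both the approximate posterior and the prior over z_t condition on the previous latent z_{t-1}. The seven reviewed models differ precisely in which such dependencies they keep; everything below (layer choices, sizes) is an illustrative assumption.

```python
import torch
import torch.nn as nn

class DVAECell(nn.Module):
    """Schematic DVAE step: unlike a static VAE, the latent variable has its
    own dynamics, with posterior q(z_t | x_t, z_{t-1}) and prior
    p(z_t | z_{t-1})."""

    def __init__(self, x_dim: int = 5, z_dim: int = 8):
        super().__init__()
        self.post = nn.Linear(x_dim + z_dim, 2 * z_dim)  # q(z_t | x_t, z_{t-1})
        self.prior = nn.Linear(z_dim, 2 * z_dim)         # p(z_t | z_{t-1})
        self.dec = nn.Linear(z_dim, x_dim)               # p(x_t | z_t)

    def forward(self, x_t, z_prev):
        mu_q, logvar_q = self.post(torch.cat([x_t, z_prev], -1)).chunk(2, -1)
        z_t = mu_q + torch.randn_like(mu_q) * (0.5 * logvar_q).exp()  # reparameterize
        mu_p, logvar_p = self.prior(z_prev).chunk(2, -1)
        return z_t, self.dec(z_t), (mu_q, logvar_q), (mu_p, logvar_p)

cell = DVAECell()
x = torch.randn(8, 30, 5)                 # (batch, time, variables)
z = torch.zeros(8, 8)
recons = []
for t in range(x.size(1)):                # unroll the latent dynamics over time
    z, x_hat, q, p = cell(x[:, t], z)
    recons.append(x_hat)
print(torch.stack(recons, 1).shape)       # torch.Size([8, 30, 5])
```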
- Interpretable Deep Representation Learning from Temporal Multi-view Data [4.2179426073904995]
We propose a generative model based on a variational autoencoder and a recurrent neural network to infer the latent dynamics of multi-view temporal data.
We apply the proposed model to three datasets, demonstrating its effectiveness and interpretability.
arXiv Detail & Related papers (2020-05-11T15:59:06Z)
- Variational Hyper RNN for Sequence Modeling [69.0659591456772]
We propose a novel probabilistic sequence model that excels at capturing high variability in time series data.
Our method uses temporal latent variables to capture information about the underlying data pattern.
The efficacy of the proposed method is demonstrated on a range of synthetic and real-world sequential data.
arXiv Detail & Related papers (2020-02-24T19:30:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.