HyperTime: Implicit Neural Representation for Time Series
- URL: http://arxiv.org/abs/2208.05836v1
- Date: Thu, 11 Aug 2022 14:05:51 GMT
- Title: HyperTime: Implicit Neural Representation for Time Series
- Authors: Elizabeth Fons and Alejandro Sztrajman and Yousef El-laham and
Alexandros Iosifidis and Svitlana Vyetrenko
- Abstract summary: Implicit neural representations (INRs) have recently emerged as a powerful tool that provides an accurate and resolution-independent encoding of data.
In this paper, we analyze the representation of time series using INRs, comparing different activation functions in terms of reconstruction accuracy and training convergence speed.
We propose a hypernetwork architecture that leverages INRs to learn a compressed latent representation of an entire time series dataset.
- Score: 131.57172578210256
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Implicit neural representations (INRs) have recently emerged as a powerful
tool that provides an accurate and resolution-independent encoding of data.
Their robustness as general approximators has been shown in a wide variety of
data sources, with applications on image, sound, and 3D scene representation.
However, little attention has been given to leveraging these architectures for
the representation and analysis of time series data. In this paper, we analyze
the representation of time series using INRs, comparing different activation
functions in terms of reconstruction accuracy and training convergence speed.
We show how these networks can be leveraged for the imputation of time series,
with applications on both univariate and multivariate data. Finally, we propose
a hypernetwork architecture that leverages INRs to learn a compressed latent
representation of an entire time series dataset. We introduce an FFT-based loss
to guide training so that all frequencies are preserved in the time series. We
show that this network can be used to encode time series as INRs, and their
embeddings can be interpolated to generate new time series from existing ones.
We evaluate our generative method by using it for data augmentation, and show
that it is competitive against current state-of-the-art approaches for
augmentation of time series.
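To make the INR setup concrete: an INR for a time series is a small network that maps a (normalized) timestamp t to the series value y(t), trained to reconstruct one specific series. Below is a minimal PyTorch sketch assuming a SIREN-style sinusoidal MLP, one of the activation choices such a comparison would naturally include; the width, depth, learning rate, and frequency scale w0 are illustrative assumptions, not the paper's settings.

```python
import torch
import torch.nn as nn

class SineLayer(nn.Module):
    """Linear layer followed by a sin activation (SIREN-style)."""
    def __init__(self, in_f, out_f, w0=30.0):
        super().__init__()
        self.w0 = w0
        self.linear = nn.Linear(in_f, out_f)

    def forward(self, x):
        return torch.sin(self.w0 * self.linear(x))

class TimeSeriesINR(nn.Module):
    """Maps a scalar timestamp t in [-1, 1] to the series value y(t)."""
    def __init__(self, hidden=64, depth=3):
        super().__init__()
        layers = [SineLayer(1, hidden)]
        layers += [SineLayer(hidden, hidden) for _ in range(depth - 1)]
        self.net = nn.Sequential(*layers, nn.Linear(hidden, 1))

    def forward(self, t):
        return self.net(t)

# Fit the INR to one series: timestamps are the inputs, values the targets.
t = torch.linspace(-1, 1, 200).unsqueeze(-1)      # normalized time grid
y = torch.sin(8 * t) + 0.5 * torch.cos(3 * t)     # toy target series
model = TimeSeriesINR()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
for step in range(2000):
    opt.zero_grad()
    loss = ((model(t) - y) ** 2).mean()           # plain MSE reconstruction
    loss.backward()
    opt.step()

# Resolution independence: query the fitted INR at any timestamps,
# e.g. a 10x finer grid than the training samples.
t_fine = torch.linspace(-1, 1, 2000).unsqueeze(-1)
y_fine = model(t_fine)
```

Because the fitted network is a continuous function of t, it can be queried at arbitrary resolutions; training only on the observed timestamps and then querying the missing ones is one natural way to realize the imputation use case the abstract describes.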
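The FFT-based loss is described only at a high level in the abstract. One plausible reading, sketched below under that assumption, adds a penalty on the discrepancy between the magnitude spectra of reconstruction and target, so that high-frequency content is not sacrificed for a low time-domain error; the weight `lam` is a hypothetical hyperparameter.

```python
import torch

def fft_loss(pred, target, lam=1.0):
    """Time-domain MSE plus a spectral term comparing rFFT magnitudes.

    pred, target: tensors of shape (batch, length). The spectral term
    discourages the model from fitting only the low frequencies.
    """
    mse = ((pred - target) ** 2).mean()
    spec_pred = torch.fft.rfft(pred, dim=-1).abs()
    spec_tgt = torch.fft.rfft(target, dim=-1).abs()
    spectral = ((spec_pred - spec_tgt) ** 2).mean()
    return mse + lam * spectral
```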
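Generation by embedding interpolation can likewise be sketched abstractly. Here `hyper` (mapping a latent embedding to INR weights) and `inr` (evaluating an INR given those weights) are hypothetical callables standing in for the paper's hypernetwork components:

```python
import torch

def interpolate_series(hyper, inr, z_a, z_b, t, alpha=0.5):
    # Hypothetical sketch: z_a and z_b are the latent embeddings of two
    # real series produced by the hypernetwork's encoder.
    z_mix = (1 - alpha) * z_a + alpha * z_b  # convex combination of embeddings
    weights = hyper(z_mix)                   # decode embedding to INR weights
    return inr(t, weights)                   # evaluate the generated series
```

Sweeping `alpha` over (0, 1) yields a family of new series between the two originals, which is how the augmentation experiments described above can be driven.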
Related papers
- Unified Training of Universal Time Series Forecasting Transformers [104.56318980466742]
We present Moirai, a masked encoder-based universal time series forecasting transformer.
Moirai is trained on our newly introduced Large-scale Open Time Series Archive (LOTSA) featuring over 27B observations across nine domains.
Moirai achieves competitive or superior performance as a zero-shot forecaster when compared to full-shot models.
arXiv Detail & Related papers (2024-02-04T20:00:45Z)
- Representation Learning of Multivariate Time Series using Attention and Adversarial Training [2.0577627277681887]
A Transformer-based autoencoder, regularized with an adversarial training scheme, is proposed to generate artificial time series signals.
Our results indicate that the generated signals exhibit higher similarity to an exemplary dataset than using a convolutional network approach.
arXiv Detail & Related papers (2024-01-03T21:32:46Z)
- TimeMAE: Self-Supervised Representations of Time Series with Decoupled Masked Autoencoders [55.00904795497786]
We propose TimeMAE, a novel self-supervised paradigm for learning transferable time series representations based on transformer networks.
TimeMAE learns enriched contextual representations of time series with a bidirectional encoding scheme.
To solve the discrepancy issue incurred by newly injected masked embeddings, we design a decoupled autoencoder architecture.
arXiv Detail & Related papers (2023-03-01T08:33:16Z)
- STING: Self-attention based Time-series Imputation Networks using GAN [4.052758394413726]
STING (Self-attention based Time-series Imputation Networks using GAN) is proposed.
We take advantage of generative adversarial networks and bidirectional recurrent neural networks to learn latent representations of the time series.
Experimental results on three real-world datasets demonstrate that STING outperforms the existing state-of-the-art methods in terms of imputation accuracy.
arXiv Detail & Related papers (2022-09-22T06:06:56Z)
- Towards Generating Real-World Time Series Data [52.51620668470388]
We propose a novel generative framework for time series data generation - RTSGAN.
RTSGAN learns an encoder-decoder module which provides a mapping between a time series instance and a fixed-dimension latent vector.
To generate time series with missing values, we further equip RTSGAN with an observation embedding layer and a decide-and-generate decoder.
arXiv Detail & Related papers (2021-11-16T11:31:37Z)
- Novel Features for Time Series Analysis: A Complex Networks Approach [62.997667081978825]
Time series data are ubiquitous in domains such as climate, economics, and health care.
A recent conceptual approach relies on mapping time series to complex networks.
Network analysis can be used to characterize different types of time series.
arXiv Detail & Related papers (2021-10-11T13:46:28Z)
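As a concrete instance of the time-series-to-network mapping summarized above, the sketch below builds a natural visibility graph, one widely used such mapping (chosen here purely as an example; the paper may consider others): indices i and j are linked when the straight line between (i, y[i]) and (j, y[j]) passes above every intermediate sample.

```python
import numpy as np

def natural_visibility_graph(y):
    """Adjacency matrix of the natural visibility graph of series y.

    Nodes are time indices; i and j (i < j) are linked when every
    intermediate point k lies strictly below the line from (i, y[i])
    to (j, y[j]).
    """
    n = len(y)
    adj = np.zeros((n, n), dtype=bool)
    for i in range(n):
        for j in range(i + 1, n):
            visible = all(
                y[k] < y[j] + (y[i] - y[j]) * (j - k) / (j - i)
                for k in range(i + 1, j)
            )
            if visible:
                adj[i, j] = adj[j, i] = True
    return adj

# Graph statistics (e.g. the degree sequence) then serve as series features.
y = np.random.randn(100)
A = natural_visibility_graph(y)
degrees = A.sum(axis=1)
```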
- Time Series is a Special Sequence: Forecasting with Sample Convolution and Interaction [9.449017120452675]
Time series is a special type of sequence data, a set of observations collected at even intervals of time and ordered chronologically.
Existing deep learning techniques use generic sequence models for time series analysis, which ignore some of its unique properties.
We propose a novel neural network architecture and apply it for the time series forecasting problem, wherein we conduct sample convolution and interaction at multiple resolutions for temporal modeling.
arXiv Detail & Related papers (2021-06-17T08:15:04Z)
- Graph Attention Recurrent Neural Networks for Correlated Time Series Forecasting -- Full version [16.22449727526222]
We consider a setting where multiple entities interact with each other over time and the time-varying statuses of the entities are represented as correlated time series.
To enable accurate forecasting on correlated time series, we propose graph attention recurrent neural networks.
Experiments on a large real-world speed time series data set suggest that the proposed method is effective and outperforms the state-of-the-art in most settings.
arXiv Detail & Related papers (2021-03-19T12:15:37Z)
- Deep Cellular Recurrent Network for Efficient Analysis of Time-Series Data with Spatial Information [52.635997570873194]
This work proposes a novel deep cellular recurrent neural network (DCRNN) architecture to process complex multi-dimensional time series data with spatial information.
The proposed architecture achieves state-of-the-art performance while using substantially fewer trainable parameters than comparable methods in the literature.
arXiv Detail & Related papers (2021-01-12T20:08:18Z)
- Time Series Data Augmentation for Neural Networks by Time Warping with a Discriminative Teacher [17.20906062729132]
We propose a novel time series data augmentation called guided warping.
Guided warping exploits the element alignment properties of Dynamic Time Warping (DTW) and shapeDTW.
We evaluate the method on all 85 datasets in the 2015 UCR Time Series Archive with a deep convolutional neural network (CNN) and a recurrent neural network (RNN).
arXiv Detail & Related papers (2020-04-19T06:33:44Z)
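To illustrate the DTW-based mechanism behind guided warping, the toy sketch below warps a sample series onto a reference ("teacher") series via the DTW alignment path. This is a simplification, not the authors' full method, which additionally uses shapeDTW and a discriminatively chosen teacher.

```python
import numpy as np

def dtw_path(a, b):
    """Classic O(len(a)*len(b)) DTW; returns the optimal alignment path."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    # Backtrack from the end of both series to recover the path.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = int(np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]]))
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]

def guided_warp(sample, teacher):
    """Warp `sample` onto the time axis of `teacher` using the DTW path.

    Sample values aligned to the same teacher index are averaged, so the
    output has the teacher's length but keeps the sample's local shape.
    """
    warped = np.zeros(len(teacher))
    counts = np.zeros(len(teacher))
    for ti, si in dtw_path(teacher, sample):
        warped[ti] += sample[si]
        counts[ti] += 1
    return warped / np.maximum(counts, 1)
```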