Large Scale Time-Series Representation Learning via Simultaneous Low and
High Frequency Feature Bootstrapping
- URL: http://arxiv.org/abs/2204.11291v2
- Date: Thu, 23 Nov 2023 10:16:54 GMT
- Title: Large Scale Time-Series Representation Learning via Simultaneous Low and
High Frequency Feature Bootstrapping
- Authors: Vandan Gorade, Azad Singh and Deepak Mishra
- Abstract summary: We propose a non-contrastive self-supervised learning approach that efficiently captures low and high-frequency time-varying features.
Our method takes raw time series data as input and creates two different augmented views for two branches of the model.
To demonstrate the robustness of our model, we performed extensive experiments and ablation studies on five real-world time-series datasets.
- Score: 7.0064929761691745
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Learning representation from unlabeled time series data is a challenging
problem. Most existing self-supervised and unsupervised approaches in the
time-series domain do not capture low and high-frequency features at the same
time. Further, some of these methods employ large scale models like
transformers or rely on computationally expensive techniques such as
contrastive learning. To tackle these problems, we propose a non-contrastive
self-supervised learning approach that efficiently captures low and high-frequency
time-varying features in a cost-effective manner. Our method takes raw time
series data as input and creates two different augmented views for two branches
of the model by randomly sampling the augmentations from the same family.
Following the terminology of BYOL, the two branches are called the online and
target networks, which allows bootstrapping of the latent representation. In
contrast to BYOL, where a backbone encoder is followed by multilayer perceptron
(MLP) heads, the proposed model contains additional temporal convolutional
network (TCN) heads. As the augmented views are passed through large kernel
convolution blocks of the encoder, the subsequent combination of MLP and TCN
enables an effective representation of low as well as high-frequency
time-varying features due to the varying receptive fields. The two modules (MLP
and TCN) act in a complementary manner. We train an online network where each
module learns to predict the outcome of the respective module of the target network
branch. To demonstrate the robustness of our model, we performed extensive
experiments and ablation studies on five real-world time-series datasets. Our
method achieved state-of-the-art performance on all five real-world datasets.
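For concreteness, the following is a minimal PyTorch sketch of the setup described above. The kernel sizes, feature dimensions, and the `augment` function are hypothetical stand-ins for the paper's actual choices; this illustrates the idea, not the authors' implementation.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class Branch(nn.Module):
    """One branch: a large-kernel convolutional encoder feeding parallel MLP and TCN heads."""
    def __init__(self, in_ch=1, dim=128):
        super().__init__()
        # Large-kernel convolutions give the encoder a wide receptive field.
        self.encoder = nn.Sequential(
            nn.Conv1d(in_ch, 64, kernel_size=25, padding=12), nn.ReLU(),
            nn.Conv1d(64, dim, kernel_size=25, padding=12), nn.ReLU(),
            nn.AdaptiveAvgPool1d(32),
        )
        # MLP head: summarizes the whole sequence (slow, low-frequency structure).
        self.mlp = nn.Sequential(nn.Flatten(), nn.Linear(dim * 32, dim))
        # TCN head: dilated convolutions keep temporal detail (fast, high-frequency structure).
        self.tcn = nn.Sequential(
            nn.Conv1d(dim, dim, kernel_size=3, dilation=2, padding=2), nn.ReLU(),
            nn.Conv1d(dim, dim, kernel_size=3, dilation=4, padding=4),
        )

    def forward(self, x):                       # x: (batch, channels, time)
        h = self.encoder(x)
        return self.mlp(h), self.tcn(h).mean(dim=-1)

def regression_loss(p, z):
    """BYOL-style loss: negative cosine similarity; no gradient flows into the target."""
    return 2 - 2 * F.cosine_similarity(p, z.detach(), dim=-1).mean()

online = Branch()
target = copy.deepcopy(online)                  # target starts as a frozen copy
for p in target.parameters():
    p.requires_grad_(False)
pred_mlp, pred_tcn = nn.Linear(128, 128), nn.Linear(128, 128)  # online-side predictors

def training_step(x, augment):
    v1, v2 = augment(x), augment(x)             # two views from the same augmentation family
    m_on, t_on = online(v1)
    m_tg, t_tg = target(v2)
    # Each online module predicts the output of the matching target module.
    return regression_loss(pred_mlp(m_on), m_tg) + regression_loss(pred_tcn(t_on), t_tg)

@torch.no_grad()
def ema_update(tau=0.99):                       # slow-moving target, as in BYOL
    for po, pt in zip(online.parameters(), target.parameters()):
        pt.mul_(tau).add_((1 - tau) * po)
```

The EMA update keeps the target network a slow-moving average of the online network, which is what stabilizes bootstrapping in BYOL-style training without negative pairs.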
Related papers
- PMT: Progressive Mean Teacher via Exploring Temporal Consistency for Semi-Supervised Medical Image Segmentation [51.509573838103854]
We propose a semi-supervised learning framework, termed Progressive Mean Teachers (PMT), for medical image segmentation.
Our PMT generates high-fidelity pseudo labels by learning robust and diverse features in the training process.
Experimental results on two datasets with different modalities, i.e., CT and MRI, demonstrate that our method outperforms the state-of-the-art medical image segmentation approaches.
arXiv Detail & Related papers (2024-09-08T15:02:25Z)
- TSLANet: Rethinking Transformers for Time Series Representation Learning [19.795353886621715]
Time series data is characterized by its intrinsic long and short-range dependencies.
We introduce a novel Time Series Lightweight Network (TSLANet) as a universal convolutional model for diverse time series tasks.
Our experiments demonstrate that TSLANet outperforms state-of-the-art models in various tasks spanning classification, forecasting, and anomaly detection.
arXiv Detail & Related papers (2024-04-12T13:41:29Z)
- Improving Efficiency of Diffusion Models via Multi-Stage Framework and Tailored Multi-Decoder Architectures [12.703947839247693]
Diffusion models, emerging as powerful deep generative tools, excel in various applications.
However, their remarkable generative performance is hindered by slow training and sampling.
This is due to the necessity of tracking extensive forward and reverse diffusion trajectories.
We present a multi-stage framework inspired by our empirical findings to tackle these challenges.
arXiv Detail & Related papers (2023-12-14T17:48:09Z)
- Parallel Learning by Multitasking Neural Networks [1.6799377888527685]
A modern challenge of Artificial Intelligence is learning multiple patterns at once.
We show how the Multitasking Hebbian Network is naturally able to perform this complex task.
arXiv Detail & Related papers (2023-08-08T07:43:31Z)
- MTS2Graph: Interpretable Multivariate Time Series Classification with Temporal Evolving Graphs [1.1756822700775666]
We introduce a new framework for interpreting time series data by extracting and clustering the input representative patterns.
We run experiments on eight datasets of the UCR/UEA archive, along with HAR and PAM datasets.
arXiv Detail & Related papers (2023-06-06T16:24:27Z)
- TimeMAE: Self-Supervised Representations of Time Series with Decoupled Masked Autoencoders [55.00904795497786]
We propose TimeMAE, a novel self-supervised paradigm for learning transferable time series representations based on transformer networks.
TimeMAE learns enriched contextual representations of time series with a bidirectional encoding scheme.
To resolve the discrepancy introduced by the newly injected masked embeddings, we design a decoupled autoencoder architecture.
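The window-masking idea can be illustrated with a generic sketch. The window size, dimensions, and layer counts below are placeholders, and the actual TimeMAE handles visible and masked positions in a more carefully decoupled way than this simplified version.

```python
import torch
import torch.nn as nn

class MaskedTSModel(nn.Module):
    """Generic masked time-series modeling: mask windows, encode, reconstruct."""
    def __init__(self, win=16, dim=64, depth=2):
        super().__init__()
        self.embed = nn.Linear(win, dim)                         # one token per window
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))
        enc_layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, depth)   # bidirectional self-attention
        dec_layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.decoder = nn.TransformerEncoder(dec_layer, 1)       # separate decoding stage
        self.head = nn.Linear(dim, win)                          # reconstruct raw windows

    def forward(self, x, mask):          # x: (B, N, win) windows, mask: (B, N) bool
        tok = self.embed(x)
        # Replace masked windows with a shared learnable mask token.
        tok = torch.where(mask.unsqueeze(-1), self.mask_token.expand_as(tok), tok)
        rec = self.head(self.decoder(self.encoder(tok)))
        return ((rec - x) ** 2)[mask].mean()    # loss computed only on masked windows
```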
arXiv Detail & Related papers (2023-03-01T08:33:16Z)
- FormerTime: Hierarchical Multi-Scale Representations for Multivariate Time Series Classification [53.55504611255664]
FormerTime is a hierarchical representation model for improving the classification capacity for the multivariate time series classification task.
It exhibits three merits: (1) learning hierarchical multi-scale representations from time series data, (2) inheriting the strengths of both transformers and convolutional networks, and (3) tackling the efficiency challenges incurred by the self-attention mechanism.
arXiv Detail & Related papers (2023-02-20T07:46:14Z)
- Gait Recognition in the Wild with Multi-hop Temporal Switch [81.35245014397759]
Gait recognition in the wild is a more practical problem that has attracted the attention of the multimedia and computer vision communities.
This paper presents a novel multi-hop temporal switch method to achieve effective temporal modeling of gait patterns in real-world scenes.
arXiv Detail & Related papers (2022-09-01T10:46:09Z)
- Global Filter Networks for Image Classification [90.81352483076323]
We present a conceptually simple yet computationally efficient architecture that learns long-term spatial dependencies in the frequency domain with log-linear complexity.
Our results demonstrate that GFNet can be a very competitive alternative to transformer-style models and CNNs in efficiency, generalization ability and robustness.
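The core layer is simple enough to sketch: tokens are mixed by an element-wise learnable filter in the Fourier domain, which costs O(N log N) rather than the O(N^2) of self-attention. A minimal version, with placeholder dimensions:

```python
import torch
import torch.nn as nn

class GlobalFilter(nn.Module):
    """Token mixing in the frequency domain: FFT -> learnable filter -> inverse FFT."""
    def __init__(self, h=14, w=14, dim=64):
        super().__init__()
        # One learnable complex weight per retained frequency (rfft keeps w//2+1 columns).
        self.weight = nn.Parameter(torch.randn(h, w // 2 + 1, dim, 2) * 0.02)

    def forward(self, x):                       # x: (B, h, w, dim) feature map
        f = torch.fft.rfft2(x, dim=(1, 2), norm='ortho')
        f = f * torch.view_as_complex(self.weight)
        return torch.fft.irfft2(f, s=x.shape[1:3], dim=(1, 2), norm='ortho')
```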
arXiv Detail & Related papers (2021-07-01T17:58:16Z)
- Convolutional Tensor-Train LSTM for Spatio-temporal Learning [116.24172387469994]
We propose a higher-order LSTM model that can efficiently learn long-term correlations in the video sequence.
This is accomplished through a novel tensor train module that performs prediction by combining convolutional features across time.
Our model achieves state-of-the-art performance across a wide range of applications and datasets.
arXiv Detail & Related papers (2020-02-21T05:00:01Z)
- Conditional Mutual Information-based Contrastive Loss for Financial Time Series Forecasting [12.0855096102517]
We present a representation learning framework for financial time series forecasting.
In this paper, we propose to first learn compact representations from time series data, then use the learned representations to train a simpler model for predicting time series movements.
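In outline, that two-stage recipe looks like the sketch below, with a hypothetical pretrained `encoder` and logistic regression standing in for the "simpler model"; the paper's contrastive pretraining objective itself is not shown.

```python
from sklearn.linear_model import LogisticRegression

def predict_movements(encoder, X_train, y_train, X_test):
    """Stage 1: a frozen, pretrained encoder maps raw series windows to
    compact embeddings. Stage 2: a simple classifier is fit on them."""
    Z_train, Z_test = encoder(X_train), encoder(X_test)
    clf = LogisticRegression(max_iter=1000).fit(Z_train, y_train)
    return clf.predict(Z_test)            # e.g. up/down movement labels
```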
arXiv Detail & Related papers (2020-02-18T15:24:33Z)