Series2Vec: Similarity-based Self-supervised Representation Learning for
Time Series Classification
- URL: http://arxiv.org/abs/2312.03998v2
- Date: Tue, 12 Dec 2023 07:48:11 GMT
- Title: Series2Vec: Similarity-based Self-supervised Representation Learning for
Time Series Classification
- Authors: Navid Mohammadi Foumani, Chang Wei Tan, Geoffrey I. Webb, Hamid
Rezatofighi, Mahsa Salehi
- Abstract summary: We introduce a novel approach called Series2Vec for self-supervised representation learning.
Series2Vec is trained to predict the similarity between two series in both temporal and spectral domains.
We show that Series2Vec performs comparably with fully supervised training and offers high efficiency in datasets with limited-labeled data.
- Score: 13.775977945756415
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We argue that time series analysis is fundamentally different in nature to
either vision or natural language processing with respect to the forms of
meaningful self-supervised learning tasks that can be defined. Motivated by
this insight, we introduce a novel approach called \textit{Series2Vec} for
self-supervised representation learning. Unlike other self-supervised methods
in time series, which carry the risk of positive sample variants being less
similar to the anchor sample than series in the negative set, Series2Vec is
trained to predict the similarity between two series in both temporal and
spectral domains through a self-supervised task. Series2Vec relies primarily on
the consistency of the unsupervised similarity step, rather than the intrinsic
quality of the similarity measurement, without the need for hand-crafted data
augmentation. To further enforce the network to learn similar representations
for similar time series, we propose a novel approach that applies
order-invariant attention to each representation within the batch during
training. Our evaluation of Series2Vec on nine large real-world datasets, along
with the UCR/UEA archive, shows enhanced performance compared to current
state-of-the-art self-supervised techniques for time series. Additionally, our
extensive experiments show that Series2Vec performs comparably with fully
supervised training and offers high efficiency in datasets with limited-labeled
data. Finally, we show that the fusion of Series2Vec with other representation
learning models leads to enhanced performance for time series classification.
Code and models are open-source at
\url{https://github.com/Navidfoumani/Series2Vec}.
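The abstract describes the pretext task only at a high level. Below is a minimal, PyTorch-style sketch of what such a similarity-prediction objective could look like: soft similarity targets are computed without labels in the temporal domain and in the spectral domain (via FFT magnitudes), and an attention step over the batch of representations stands in for the order-invariant attention described above. The encoder, the Euclidean similarity measure, the multi-head attention, and the KL-divergence loss are illustrative assumptions rather than the authors' exact implementation; see the linked repository for the actual code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SimilarityPretrainer(nn.Module):
    """Illustrative sketch of a similarity-prediction pretext task.

    `encoder` is any network mapping (batch, length, channels) -> (batch, dim).
    The batch-level self-attention stands in for the order-invariant attention
    over representations; all details here are assumptions, not Series2Vec's
    exact architecture.
    """

    def __init__(self, encoder: nn.Module, dim: int, heads: int = 4):
        super().__init__()
        self.encoder = encoder
        # dim must be divisible by heads for nn.MultiheadAttention.
        self.batch_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    @staticmethod
    def _pairwise_similarity(x: torch.Tensor) -> torch.Tensor:
        # Negative Euclidean distance turned into a row-wise soft assignment;
        # any consistent unsupervised similarity measure could be used here.
        flat = x.flatten(1)
        dist = torch.cdist(flat, flat)
        return F.softmax(-dist, dim=-1)

    def forward(self, series: torch.Tensor) -> torch.Tensor:
        # Self-supervised targets: pairwise similarity of the raw series
        # (temporal domain) and of their FFT magnitudes (spectral domain),
        # computed without any labels.
        with torch.no_grad():
            temporal_target = self._pairwise_similarity(series)
            spectrum = torch.fft.rfft(series, dim=1).abs()
            spectral_target = self._pairwise_similarity(spectrum)

        z = self.encoder(series)                 # (batch, dim)
        z_batch = z.unsqueeze(0)                 # treat the batch as one set
        z_ctx, _ = self.batch_attn(z_batch, z_batch, z_batch)
        z_ctx = z_ctx.squeeze(0)

        # Predicted pairwise similarities from the learned representations.
        pred = F.softmax(z_ctx @ z_ctx.t() / z_ctx.shape[-1] ** 0.5, dim=-1)

        # Match predicted similarities to both unsupervised targets.
        loss = F.kl_div(pred.log(), temporal_target, reduction="batchmean") \
             + F.kl_div(pred.log(), spectral_target, reduction="batchmean")
        return loss
```

Any encoder that maps a (batch, length, channels) series to a fixed-size embedding can be dropped in; a training step is then simply loss = model(batch) followed by loss.backward().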
Related papers
- TimeSiam: A Pre-Training Framework for Siamese Time-Series Modeling [67.02157180089573]
Time series pre-training has recently garnered wide attention for its potential to reduce labeling expenses and benefit various downstream tasks.
This paper proposes TimeSiam as a simple but effective self-supervised pre-training framework for time series based on Siamese networks.
arXiv Detail & Related papers (2024-02-04T13:10:51Z)
- A Co-training Approach for Noisy Time Series Learning [35.61140756248812]
We conduct co-training-based contrastive learning iteratively to learn the encoders.
Our experiments demonstrate that this co-training approach leads to a significant improvement in performance.
Empirical evaluations on four time series benchmarks in unsupervised and semi-supervised settings reveal that TS-CoT outperforms existing methods.
arXiv Detail & Related papers (2023-08-24T04:33:30Z)
- TimeMAE: Self-Supervised Representations of Time Series with Decoupled Masked Autoencoders [55.00904795497786]
We propose TimeMAE, a novel self-supervised paradigm for learning transferable time series representations based on transformer networks.
The TimeMAE learns enriched contextual representations of time series with a bidirectional encoding scheme.
To solve the discrepancy issue incurred by newly injected masked embeddings, we design a decoupled autoencoder architecture.
arXiv Detail & Related papers (2023-03-01T08:33:16Z)
- Expressing Multivariate Time Series as Graphs with Time Series Attention Transformer [14.172091921813065]
We propose the Time Series Attention Transformer (TSAT) for multivariate time series representation learning.
Using TSAT, we represent both temporal information and inter-dependencies of time series in terms of edge-enhanced dynamic graphs.
We show that TSAT clearly outperforms six state-of-the-art baseline methods across various forecasting horizons.
arXiv Detail & Related papers (2022-08-19T12:25:56Z)
- HyperTime: Implicit Neural Representation for Time Series [131.57172578210256]
Implicit neural representations (INRs) have recently emerged as a powerful tool that provides an accurate and resolution-independent encoding of data.
In this paper, we analyze the representation of time series using INRs, comparing different activation functions in terms of reconstruction accuracy and training convergence speed.
We propose a hypernetwork architecture that leverages INRs to learn a compressed latent representation of an entire time series dataset.
arXiv Detail & Related papers (2022-08-11T14:05:51Z)
- Towards Similarity-Aware Time-Series Classification [51.2400839966489]
We study time-series classification (TSC), a fundamental task of time-series data mining.
We propose Similarity-Aware Time-Series Classification (SimTSC), a framework that models similarity information with graph neural networks (GNNs).
arXiv Detail & Related papers (2022-01-05T02:14:57Z)
- Cluster-and-Conquer: A Framework For Time-Series Forecasting [94.63501563413725]
We propose a three-stage framework for forecasting high-dimensional time-series data.
Our framework is highly general, allowing for any time-series forecasting and clustering method to be used in each step.
When instantiated with simple linear autoregressive models, we are able to achieve state-of-the-art results on several benchmark datasets.
arXiv Detail & Related papers (2021-10-26T20:41:19Z)
- Time-Series Representation Learning via Temporal and Contextual Contrasting [14.688033556422337]
We propose an unsupervised time-series representation learning framework via Temporal and Contextual Contrasting (TS-TCC).
First, the raw time-series data are transformed into two different yet correlated views by using weak and strong augmentations.
Second, we propose a novel temporal contrasting module to learn robust temporal representations by designing a tough cross-view prediction task.
Third, to further learn discriminative representations, we propose a contextual contrasting module built upon the contexts from the temporal contrasting module.
arXiv Detail & Related papers (2021-06-26T23:56:31Z)
- Voice2Series: Reprogramming Acoustic Models for Time Series Classification [65.94154001167608]
Voice2Series is a novel end-to-end approach that reprograms acoustic models for time series classification.
We show that V2S either outperforms or is tied with state-of-the-art methods on 20 tasks, and improves their average accuracy by 1.84%.
arXiv Detail & Related papers (2021-06-17T07:59:15Z)
- A Deep Structural Model for Analyzing Correlated Multivariate Time Series [11.009809732645888]
We present a deep learning structural time series model which can handle correlated multivariate time series input.
The model explicitly learns/extracts the trend, seasonality, and event components.
We compare our model with several state-of-the-art methods through a comprehensive set of experiments on a variety of time series data sets.
arXiv Detail & Related papers (2020-01-02T18:48:29Z)
This list is automatically generated from the titles and abstracts of the papers on this site.