Time-Series Representation Learning via Temporal and Contextual Contrasting
- URL: http://arxiv.org/abs/2106.14112v1
- Date: Sat, 26 Jun 2021 23:56:31 GMT
- Title: Time-Series Representation Learning via Temporal and Contextual Contrasting
- Authors: Emadeldeen Eldele, Mohamed Ragab, Zhenghua Chen, Min Wu, Chee Keong Kwoh, Xiaoli Li and Cuntai Guan
- Abstract summary: We propose an unsupervised Time-Series representation learning framework via Temporal and Contextual Contrasting (TS-TCC).
First, the raw time-series data are transformed into two different yet correlated views by using weak and strong augmentations.
Second, we propose a novel temporal contrasting module to learn robust temporal representations by designing a tough cross-view prediction task.
Third, to further learn discriminative representations, we propose a contextual contrasting module built upon the contexts from the temporal contrasting module.
- Score: 14.688033556422337
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Learning decent representations from unlabeled time-series data with temporal
dynamics is a very challenging task. In this paper, we propose an unsupervised
Time-Series representation learning framework via Temporal and Contextual
Contrasting (TS-TCC), to learn time-series representation from unlabeled data.
First, the raw time-series data are transformed into two different yet
correlated views by using weak and strong augmentations. Second, we propose a
novel temporal contrasting module to learn robust temporal representations by
designing a tough cross-view prediction task. Last, to further learn
discriminative representations, we propose a contextual contrasting module
built upon the contexts from the temporal contrasting module. It attempts to
maximize the similarity among different contexts of the same sample while
minimizing the similarity among contexts of different samples. Experiments have
been carried out on three real-world time-series datasets. The results show
that a linear classifier trained on top of the features learned by our
proposed TS-TCC performs comparably with supervised training. Additionally,
TS-TCC remains effective in few-labeled-data and transfer learning
scenarios. The code is publicly available at
https://github.com/emadeldeen24/TS-TCC.
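
As a rough illustration of the first step: in the paper, the weak augmentation is jitter-and-scale and the strong augmentation is permutation-and-jitter. The NumPy sketch below reproduces that idea; the noise scales and segment counts are illustrative assumptions, not the repository's tuned values.

```python
import numpy as np

def weak_augment(x, scale_sigma=0.1, jitter_sigma=0.05):
    """Jitter-and-scale: rescale each channel, then add small noise.
    x: array of shape (channels, time)."""
    scale = np.random.normal(1.0, scale_sigma, size=(x.shape[0], 1))
    return x * scale + np.random.normal(0.0, jitter_sigma, size=x.shape)

def strong_augment(x, max_segments=5, jitter_sigma=0.05):
    """Permutation-and-jitter: split the series into random segments,
    shuffle their order, then add small noise."""
    t = x.shape[1]
    n_seg = np.random.randint(2, max_segments + 1)
    cuts = np.sort(np.random.choice(np.arange(1, t), n_seg - 1, replace=False))
    segments = np.split(x, cuts, axis=1)
    np.random.shuffle(segments)  # in-place shuffle of the segment list
    return np.concatenate(segments, axis=1) + np.random.normal(
        0.0, jitter_sigma, size=x.shape)
```

Each training sample is passed through both functions, yielding the two correlated views that feed the two contrasting modules.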
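The temporal contrasting module's cross-view prediction task can be sketched as a CPC-style InfoNCE loss: a context vector summarizing the past of one view must identify the other view's true future latents among in-batch negatives. The per-horizon linear predictors and the name `cross_view_nce` below are illustrative assumptions; the paper's autoregressive context model (a Transformer) is omitted here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def cross_view_nce(context, future, predictors):
    """context: (B, d_c) summary of one view's past timesteps.
    future: (K, B, d_z) the *other* view's latents at the next K steps.
    predictors: K linear maps d_c -> d_z, one per prediction horizon."""
    loss = 0.0
    for k, z_k in enumerate(future):                           # z_k: (B, d_z)
        pred = predictors[k](context)                          # (B, d_z)
        logits = pred @ z_k.t()                                # (B, B) similarities
        labels = torch.arange(z_k.size(0), device=z_k.device)  # diagonal = positives
        loss = loss + F.cross_entropy(logits, labels)
    return loss / len(future)

# Toy usage: the strong view's context predicts the weak view's future,
# and (symmetrically, not shown) vice versa.
B, K, d_c, d_z = 8, 4, 64, 32
predictors = nn.ModuleList([nn.Linear(d_c, d_z) for _ in range(K)])
loss = cross_view_nce(torch.randn(B, d_c), torch.randn(K, B, d_z), predictors)
```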
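The contextual contrasting module then treats the two contexts of the same sample as a positive pair and the contexts of other samples in the batch as negatives. Written in the standard NT-Xent form, it might look like the sketch below; the temperature value is an assumption.

```python
import torch
import torch.nn.functional as F

def contextual_contrast(c_weak, c_strong, temperature=0.2):
    """c_weak, c_strong: (B, d) context vectors from the temporal module."""
    z = F.normalize(torch.cat([c_weak, c_strong], dim=0), dim=1)  # (2B, d)
    sim = z @ z.t() / temperature                                 # (2B, 2B)
    sim.fill_diagonal_(float('-inf'))            # exclude self-similarity
    B = c_weak.size(0)
    # Row i's positive is the same sample's context from the other view.
    targets = torch.cat([torch.arange(B, 2 * B), torch.arange(0, B)])
    return F.cross_entropy(sim, targets)
```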
Related papers
- TimeSiam: A Pre-Training Framework for Siamese Time-Series Modeling [67.02157180089573]
Time series pre-training has recently garnered wide attention for its potential to reduce labeling expenses and benefit various downstream tasks.
This paper proposes TimeSiam, a simple but effective self-supervised pre-training framework for time series based on Siamese networks.
arXiv Detail & Related papers (2024-02-04T13:10:51Z)
- Soft Contrastive Learning for Time Series [5.752266579415516]
We propose SoftCLT, a simple yet effective soft contrastive learning strategy for time series.
Specifically, we define soft assignments for 1) the instance-wise contrastive loss by the distance between time series in the data space, and 2) the temporal contrastive loss by the difference between timestamps.
In experiments, we demonstrate that SoftCLT consistently improves the performance in various downstream tasks including classification, semi-supervised learning, transfer learning, and anomaly detection.
arXiv Detail & Related papers (2023-12-27T06:15:00Z)
- Contrastive Difference Predictive Coding [79.74052624853303]
We introduce a temporal difference version of contrastive predictive coding that stitches together pieces of different time series data to decrease the amount of data required to learn predictions of future events.
We apply this representation learning method to derive an off-policy algorithm for goal-conditioned RL.
arXiv Detail & Related papers (2023-10-31T03:16:32Z)
- Graph-Aware Contrasting for Multivariate Time-Series Classification [50.84488941336865]
Existing contrastive learning methods mainly focus on achieving temporal consistency with temporal augmentation and contrasting techniques.
We propose Graph-Aware Contrasting for spatial consistency across MTS data.
Our proposed method achieves state-of-the-art performance on various MTS classification tasks.
arXiv Detail & Related papers (2023-09-11T02:35:22Z)
- Multi-Task Self-Supervised Time-Series Representation Learning [3.31490164885582]
Time-series representation learning can extract representations from data with temporal dynamics and sparse labels.
We propose a new time-series representation learning method by combining the advantages of self-supervised tasks.
We evaluate the proposed framework on three downstream tasks: time-series classification, forecasting, and anomaly detection.
arXiv Detail & Related papers (2023-03-02T07:44:06Z)
- Self-supervised Contrastive Representation Learning for Semi-supervised Time-Series Classification [25.37700142906292]
We propose a novel Time-Series representation learning framework via Temporal and Contextual Contrasting (TS-TCC).
Specifically, we propose time-series-specific weak and strong augmentations and use their views to learn robust temporal relations.
We also extend TS-TCC to the semi-supervised setting and propose a Class-Aware TS-TCC (CA-TCC) that benefits from the few available labeled data.
arXiv Detail & Related papers (2022-08-13T10:22:12Z)
- Towards Similarity-Aware Time-Series Classification [51.2400839966489]
We study time-series classification (TSC), a fundamental task of time-series data mining.
We propose Similarity-Aware Time-Series Classification (SimTSC), a framework that models similarity information with graph neural networks (GNNs).
arXiv Detail & Related papers (2022-01-05T02:14:57Z)
- Contrastive Spatio-Temporal Pretext Learning for Self-supervised Video Representation [16.643709221279764]
We propose a novel pretext task: spatio-temporal overlap rate (STOR) prediction.
It stems from the observation that humans are capable of discriminating the overlap rates of videos in space and time.
We employ a joint task combining contrastive learning to further enhance spatio-temporal representation learning.
arXiv Detail & Related papers (2021-12-16T14:31:22Z)
- Interpretable Time-series Representation Learning With Multi-Level Disentanglement [56.38489708031278]
Disentangle Time Series (DTS) is a novel disentanglement enhancement framework for sequential data.
DTS generates hierarchical semantic concepts as the interpretable and disentangled representation of time-series.
DTS achieves superior performance in downstream applications, with high interpretability of semantic concepts.
arXiv Detail & Related papers (2021-05-17T22:02:24Z)
- A Closer Look at Temporal Sentence Grounding in Videos: Datasets and Metrics [70.45937234489044]
We re-organize two widely-used TSGV datasets (Charades-STA and ActivityNet Captions) so that the test splits differ from the training splits.
We introduce a new evaluation metric "dR@$n$,IoU@$m$" to calibrate the basic IoU scores.
All the results demonstrate that the re-organized datasets and new metric can better monitor the progress in TSGV.
arXiv Detail & Related papers (2021-01-22T09:59:30Z)