Self-supervised Contrastive Representation Learning for Semi-supervised
Time-Series Classification
- URL: http://arxiv.org/abs/2208.06616v3
- Date: Sun, 3 Sep 2023 00:45:01 GMT
- Title: Self-supervised Contrastive Representation Learning for Semi-supervised
Time-Series Classification
- Authors: Emadeldeen Eldele, Mohamed Ragab, Zhenghua Chen, Min Wu, Chee-Keong
Kwoh, Xiaoli Li and Cuntai Guan
- Abstract summary: We propose a novel Time-Series representation learning framework via Temporal and Contextual Contrasting (TS-TCC).
Specifically, we propose time-series-specific weak and strong augmentations and use their views to learn robust temporal relations.
We also extend TS-TCC to the semi-supervised setting and propose a Class-Aware TS-TCC (CA-TCC) that benefits from the few available labeled samples.
- Score: 25.37700142906292
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Learning time-series representations when only unlabeled data or few labeled
samples are available can be a challenging task. Recently, contrastive
self-supervised learning has shown great improvement in extracting useful
representations from unlabeled data via contrasting different augmented views
of data. In this work, we propose a novel Time-Series representation learning
framework via Temporal and Contextual Contrasting (TS-TCC) that learns
representations from unlabeled data with contrastive learning. Specifically, we
propose time-series-specific weak and strong augmentations and use their views
to learn robust temporal relations in the proposed temporal contrasting module,
while learning discriminative representations with our proposed contextual
contrasting module. Additionally, we conduct a systematic study of time-series
data augmentation selection, which is a key part of contrastive learning. We
also extend TS-TCC to the semi-supervised setting and propose a Class-Aware
TS-TCC (CA-TCC) that benefits from the few available labeled samples to
further improve the representations learned by TS-TCC. Specifically, we leverage
the robust pseudo labels produced by TS-TCC to realize a class-aware
contrastive loss. Extensive experiments show that the linear evaluation of the
features learned by our proposed framework performs comparably to fully
supervised training. Additionally, our framework performs strongly in
few-labeled-data and transfer learning scenarios. The code is publicly
available at https://github.com/emadeldeen24/CA-TCC.
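As a concrete illustration of the weak/strong augmentation pair described above, here is a minimal NumPy sketch in the jitter-and-scale (weak) and permutation-and-jitter (strong) style used in this line of work; the parameter values are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def weak_augment(x, scale_sigma=0.1, jitter_sigma=0.05):
    """Weak view (jitter-and-scale): multiply each channel by a random
    factor close to 1 and add small Gaussian noise. x: (channels, length)."""
    scale = np.random.normal(1.0, scale_sigma, size=(x.shape[0], 1))
    noise = np.random.normal(0.0, jitter_sigma, size=x.shape)
    return x * scale + noise

def strong_augment(x, max_segments=5, jitter_sigma=0.05):
    """Strong view (permutation-and-jitter): split the series into random
    segments, shuffle their order, then add Gaussian noise."""
    n_seg = np.random.randint(2, max_segments + 1)
    segments = np.array_split(np.arange(x.shape[1]), n_seg)
    order = np.random.permutation(n_seg)
    permuted = np.concatenate([segments[i] for i in order])
    noise = np.random.normal(0.0, jitter_sigma, size=x.shape)
    return x[:, permuted] + noise

x = np.random.randn(3, 128)                       # one 3-channel series
weak_view, strong_view = weak_augment(x), strong_augment(x)
```

The weak view preserves temporal order (easy to predict across), while the strong view scrambles it, which is what makes the cross-view prediction task in the temporal contrasting module non-trivial.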
Related papers
- Parametric Augmentation for Time Series Contrastive Learning [33.47157775532995]
In contrastive learning, augmented positive examples assist the model in learning robust and discriminative representations.
Usually, preset human intuition directs the selection of relevant data augmentations.
We propose a contrastive learning framework with parametric augmentation, AutoTCL, which can be adaptively employed to support time series representation learning.
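As an illustration of the general idea of parametric augmentation (not AutoTCL's actual mechanism), the toy module below learns a per-timestep soft mask jointly with the rest of the model, so the augmentation itself is optimized rather than hand-picked; the class name and layer choices are hypothetical.

```python
import torch
import torch.nn as nn

class ParametricAugmenter(nn.Module):
    """Toy learnable augmentation: a small 1-D conv net predicts a
    per-timestep gate in (0, 1) that softly masks the input, so the
    augmentation is optimized end-to-end instead of hand-designed."""
    def __init__(self, channels):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(channels, 16, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.Conv1d(16, channels, kernel_size=7, padding=3),
            nn.Sigmoid(),                      # per-timestep keep-probability
        )

    def forward(self, x):                      # x: (batch, channels, length)
        return x * self.net(x)                 # differentiable augmented view

aug = ParametricAugmenter(channels=3)
view = aug(torch.randn(8, 3, 128))             # trained jointly with the encoder
```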
arXiv Detail & Related papers (2024-02-16T03:51:14Z)
- Distillation Enhanced Time Series Forecasting Network with Momentum Contrastive Learning [7.4106801792345705]
We propose DE-TSMCL, an innovative distillation enhanced framework for long sequence time series forecasting.
Specifically, we design a learnable data augmentation mechanism which adaptively learns whether to mask a timestamp.
Then, we propose a contrastive learning task with momentum update to explore inter-sample and intra-temporal correlations of time series.
By combining the losses of these multiple tasks, we learn effective representations for the downstream forecasting task.
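A generic sketch of two of the ingredients named above, as assumptions rather than the authors' exact design: a momentum (EMA) encoder update, and a learnable timestamp mask drawn from a relaxed Bernoulli so that "whether to mask a timestamp" stays differentiable.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def momentum_update(online: nn.Module, target: nn.Module, m: float = 0.99):
    """The target (momentum) encoder tracks an exponential moving average
    of the online encoder's weights."""
    for p_o, p_t in zip(online.parameters(), target.parameters()):
        p_t.mul_(m).add_(p_o, alpha=1.0 - m)

class LearnableMask(nn.Module):
    """One logit per timestep; a relaxed-Bernoulli sample decides how much
    of each timestep survives, so masking probabilities are learned."""
    def __init__(self, length):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(length))

    def forward(self, x, tau=0.5):             # x: (batch, channels, length)
        probs = torch.sigmoid(self.logits)
        mask = torch.distributions.RelaxedBernoulli(
            torch.tensor(tau), probs=probs).rsample()
        return x * mask                        # broadcasts over batch/channels

masker = LearnableMask(length=128)
masked = masker(torch.randn(8, 3, 128))
```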
arXiv Detail & Related papers (2024-01-31T12:52:10Z)
- Soft Contrastive Learning for Time Series [5.752266579415516]
We propose SoftCLT, a simple yet effective soft contrastive learning strategy for time series.
Specifically, we define soft assignments for 1) the instance-wise contrastive loss, based on the distance between time series in the data space, and 2) the temporal contrastive loss, based on the difference between timestamps.
In experiments, we demonstrate that SoftCLT consistently improves the performance in various downstream tasks including classification, semi-supervised learning, transfer learning, and anomaly detection.
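A minimal sketch of what such soft assignments could look like: the instance-wise weight decays with data-space distance and the temporal weight decays with the timestamp gap. The sigmoid forms and constants are illustrative assumptions, not SoftCLT's exact definitions.

```python
import numpy as np

def instance_soft_assignment(x_i, x_j, tau=0.5):
    """Soft positive weight: series close in data space get a weight near 1,
    distant ones near 0, instead of a hard 0/1 positive/negative label.
    Scaled so two identical series get weight exactly 1."""
    dist = np.linalg.norm(x_i - x_j)
    return 2.0 / (1.0 + np.exp(tau * dist))

def temporal_soft_assignment(t, t_prime, tau=0.2):
    """Soft weight for two timestamps of the same series: nearby timestamps
    act as stronger positives than distant ones."""
    return 2.0 / (1.0 + np.exp(tau * abs(t - t_prime)))

x_i, x_j = np.random.randn(128), np.random.randn(128)
w_inst = instance_soft_assignment(x_i, x_j)
w_temp = temporal_soft_assignment(10, 14)
```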
arXiv Detail & Related papers (2023-12-27T06:15:00Z)
- SMC-NCA: Semantic-guided Multi-level Contrast for Semi-supervised Temporal Action Segmentation [53.010417880335424]
Semi-supervised temporal action segmentation (SS-TAS) aims to perform frame-wise classification in long untrimmed videos.
Recent studies have shown the potential of contrastive learning in unsupervised representation learning using unlabelled data.
We propose a novel Semantic-guided Multi-level Contrast scheme with a Neighbourhood-Consistency-Aware unit (SMC-NCA) to extract strong frame-wise representations.
arXiv Detail & Related papers (2023-12-19T17:26:44Z)
- Graph-Aware Contrasting for Multivariate Time-Series Classification [50.84488941336865]
Existing contrastive learning methods mainly focus on achieving temporal consistency with temporal augmentation and contrasting techniques.
We propose Graph-Aware Contrasting for spatial consistency across multivariate time-series (MTS) data.
Our proposed method achieves state-of-the-art performance on various MTS classification tasks.
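One plausible reading of spatial consistency, sketched below under stated assumptions: treat the channels of a multivariate series as graph nodes, build edges from channel correlations, and create two stochastic graph views by edge dropping. This is an illustration, not the paper's exact construction.

```python
import numpy as np

def channel_graph(x, threshold=0.3):
    """Adjacency over the channels of one series x (channels, length),
    keeping edges whose |Pearson correlation| exceeds a threshold."""
    corr = np.corrcoef(x)
    adj = (np.abs(corr) > threshold).astype(float)
    np.fill_diagonal(adj, 0.0)
    return adj

def drop_edges(adj, p=0.2, rng=np.random.default_rng()):
    """Graph augmentation: independently drop each edge with probability p,
    yielding one stochastic 'view' of the sensor graph."""
    keep = rng.random(adj.shape) >= p
    keep = np.triu(keep, 1)
    keep = keep + keep.T                       # keep the graph symmetric
    return adj * keep

adj = channel_graph(np.random.randn(6, 256))
view_a, view_b = drop_edges(adj), drop_edges(adj)   # two views to contrast
```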
arXiv Detail & Related papers (2023-09-11T02:35:22Z)
- TACO: Temporal Latent Action-Driven Contrastive Loss for Visual Reinforcement Learning [73.53576440536682]
We introduce TACO: Temporal Action-driven Contrastive Learning, a powerful temporal contrastive learning approach.
TACO simultaneously learns a state and an action representation by optimizing the mutual information between representations of current states paired with action sequences and representations of the corresponding future states.
For online RL, TACO achieves a 40% performance boost after one million environment interaction steps.
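As a rough sketch of that mutual-information objective, the InfoNCE-style loss below scores a projection of (current state, action sequence) against the representation of the state k steps ahead, using the other batch elements as negatives; the shapes and the projection head are assumptions, not TACO's actual architecture.

```python
import torch
import torch.nn.functional as F

def taco_style_infonce(z_t, a_seq, z_tk, proj, temperature=0.1):
    """z_t: (B, D) current-state reps; a_seq: (B, A) action-sequence reps;
    z_tk: (B, D) reps of the states k steps ahead. proj maps the
    concatenated (state, actions) query into the state space."""
    query = proj(torch.cat([z_t, a_seq], dim=-1))          # (B, D)
    logits = query @ z_tk.t() / temperature                # (B, B) similarities
    labels = torch.arange(z_t.size(0))                     # positives on diagonal
    return F.cross_entropy(logits, labels)

proj = torch.nn.Linear(64 + 16, 64)
loss = taco_style_infonce(torch.randn(32, 64), torch.randn(32, 16),
                          torch.randn(32, 64), proj)
```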
arXiv Detail & Related papers (2023-06-22T22:21:53Z)
- Time Series Contrastive Learning with Information-Aware Augmentations [57.45139904366001]
A key component of contrastive learning is to select appropriate augmentations imposing some priors to construct feasible positive samples.
How to find the desired augmentations of time series data that are meaningful for given contrastive learning tasks and datasets remains an open question.
We propose a new contrastive learning approach with information-aware augmentations, InfoTS, that adaptively selects optimal augmentations for time series representation learning.
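A toy version of adaptive augmentation selection, assuming a simple score rather than InfoTS's information-theoretic criteria: each candidate augmentation is scored by a variety term (the two views should differ) and a fidelity proxy (a view's embedding should stay close to the original's), then sampled in proportion to its score.

```python
import numpy as np

def score_augmentation(encode, augment, batch, w_fid=1.0, w_var=1.0):
    """Score one candidate augmentation on a batch: reward views that
    differ from each other (variety) while staying close to the original
    in embedding space (a crude fidelity proxy)."""
    z = encode(batch)
    v1, v2 = encode(augment(batch)), encode(augment(batch))
    variety = np.linalg.norm(v1 - v2, axis=-1).mean()
    fidelity = -np.linalg.norm(v1 - z, axis=-1).mean()
    return w_fid * fidelity + w_var * variety

def select_augmentation(encode, candidates, batch, temperature=1.0):
    """Turn candidate scores into a sampling distribution over augmentations."""
    scores = np.array([score_augmentation(encode, a, batch) for a in candidates])
    probs = np.exp((scores - scores.max()) / temperature)
    probs /= probs.sum()
    return candidates[np.random.choice(len(candidates), p=probs)]

# Example: pick between jittering and scaling for a batch of 8 series.
encode = lambda b: b.reshape(len(b), -1)
candidates = [lambda b: b + 0.1 * np.random.randn(*b.shape),
              lambda b: b * np.random.normal(1.0, 0.2)]
chosen = select_augmentation(encode, candidates, np.random.randn(8, 3, 64))
```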
arXiv Detail & Related papers (2023-03-21T15:02:50Z)
- Mixing Up Contrastive Learning: Self-Supervised Representation Learning for Time Series [22.376529167056376]
We propose an unsupervised contrastive learning framework motivated by the perspective of label smoothing.
The proposed approach uses a novel contrastive loss that naturally exploits a data augmentation scheme.
Experiments demonstrate the framework's superior performance compared to other representation learning approaches.
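The mixup flavor of this idea can be sketched in a few lines: mix two series with proportion lambda and train the encoder so that the mixed embedding's similarities to the two sources recover (lambda, 1 - lambda), which is where the label-smoothing reading comes from. The loss below is an illustrative soft cross-entropy, not necessarily the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def mixup_contrastive_loss(encoder, x_i, x_j, temperature=0.5):
    """Mix two batches with proportion lam; the soft label for the mixed
    embedding is (lam, 1 - lam) over its two source batches."""
    lam = float(torch.distributions.Beta(0.2, 0.2).sample())
    x_mix = lam * x_i + (1.0 - lam) * x_j
    z_i, z_j, z_mix = (F.normalize(encoder(v), dim=-1) for v in (x_i, x_j, x_mix))
    sim_i = (z_mix * z_i).sum(-1) / temperature            # (B,)
    sim_j = (z_mix * z_j).sum(-1) / temperature
    logits = torch.stack([sim_i, sim_j], dim=-1)           # (B, 2)
    target = torch.tensor([lam, 1.0 - lam]).expand_as(logits)
    return -(target * F.log_softmax(logits, dim=-1)).sum(-1).mean()

encoder = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 128, 64))
loss = mixup_contrastive_loss(encoder,
                              torch.randn(16, 3, 128), torch.randn(16, 3, 128))
```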
arXiv Detail & Related papers (2022-03-17T11:49:21Z)
- Towards Similarity-Aware Time-Series Classification [51.2400839966489]
We study time-series classification (TSC), a fundamental task of time-series data mining.
We propose Similarity-Aware Time-Series Classification (SimTSC), a framework that models similarity information with graph neural networks (GNNs).
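A condensed sketch of the idea, with plain Euclidean distance standing in for the DTW distances SimTSC actually uses: build a kNN similarity graph over the batch and smooth per-series backbone features with one GCN-style propagation step; the real pipeline learns the GNN end-to-end.

```python
import numpy as np

def knn_graph(dist, k=3):
    """Row-normalized adjacency linking each series to its k nearest
    neighbours, with edge weights shrinking as distance grows."""
    n = dist.shape[0]
    adj = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(dist[i])[1:k + 1]    # index 0 is the series itself
        adj[i, nbrs] = 1.0 / (1.0 + dist[i, nbrs])
    return adj / adj.sum(axis=1, keepdims=True)

series = np.random.randn(10, 64)               # 10 flattened series
feats = np.random.randn(10, 32)                # backbone features per series
dist = np.linalg.norm(series[:, None] - series[None, :], axis=-1)
smoothed = knn_graph(dist) @ feats             # one GCN-style propagation step
```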
arXiv Detail & Related papers (2022-01-05T02:14:57Z)
- Time-Series Representation Learning via Temporal and Contextual Contrasting [14.688033556422337]
We propose an unsupervised Time-Series representation learning framework via Temporal and Contextual Contrasting (TS-TCC).
First, the raw time-series data are transformed into two different yet correlated views by using weak and strong augmentations.
Second, we propose a novel temporal contrasting module to learn robust temporal representations by designing a tough cross-view prediction task.
Third, to further learn discriminative representations, we propose a contextual contrasting module built upon the contexts from the temporal contrasting module.
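For the contextual contrasting module, a standard NT-Xent loss over the two views' context vectors is a reasonable mental model; the PyTorch sketch below treats the two contexts of the same series as positives and all other contexts in the batch as negatives (the paper's projection head and temperature settings are not reproduced here).

```python
import torch
import torch.nn.functional as F

def ntxent(c_weak, c_strong, temperature=0.2):
    """c_weak, c_strong: (B, D) context vectors from the two augmented views.
    Row i of each view forms a positive pair; the remaining 2B - 2 contexts
    in the batch serve as negatives."""
    b = c_weak.size(0)
    z = F.normalize(torch.cat([c_weak, c_strong]), dim=-1)   # (2B, D)
    sim = z @ z.t() / temperature                            # (2B, 2B)
    sim = sim.masked_fill(torch.eye(2 * b, dtype=torch.bool), float("-inf"))
    targets = torch.cat([torch.arange(b, 2 * b), torch.arange(0, b)])
    return F.cross_entropy(sim, targets)

loss = ntxent(torch.randn(32, 128), torch.randn(32, 128))
```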
arXiv Detail & Related papers (2021-06-26T23:56:31Z)
- Temporal Contrastive Graph Learning for Video Action Recognition and Retrieval [83.56444443849679]
This work takes advantage of the temporal dependencies within videos and proposes a novel self-supervised method named Temporal Contrastive Graph Learning (TCGL).
Our TCGL is rooted in a hybrid graph contrastive learning strategy that jointly regards the inter-snippet and intra-snippet temporal dependencies as self-supervision signals for temporal representation learning.
Experimental results demonstrate the superiority of our TCGL over the state-of-the-art methods on large-scale action recognition and video retrieval benchmarks.
arXiv Detail & Related papers (2021-01-04T08:11:39Z)