A Co-training Approach for Noisy Time Series Learning
- URL: http://arxiv.org/abs/2308.12551v1
- Date: Thu, 24 Aug 2023 04:33:30 GMT
- Title: A Co-training Approach for Noisy Time Series Learning
- Authors: Weiqi Zhang, Jianfeng Zhang, Jia Li, Fugee Tsung
- Abstract summary: We conduct co-training-based contrastive learning iteratively to learn the encoders.
Our experiments demonstrate that this co-training approach leads to a significant improvement in performance.
Empirical evaluations on four time series benchmarks in unsupervised and semi-supervised settings reveal that TS-CoT outperforms existing methods.
- Score: 35.61140756248812
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work, we focus on robust time series representation learning. Our
assumption is that real-world time series are noisy and that complementary
information from different views of the same time series plays an important
role in analyzing noisy input. Based on this, we create two views of the
input time series through two different encoders. We conduct co-training-based
contrastive learning iteratively to learn the encoders. Our experiments
demonstrate that this co-training approach leads to a significant improvement
in performance. In particular, by leveraging the complementary information from
different views, our proposed TS-CoT method can mitigate the impact of data
noise and corruption. Empirical evaluations on four time series benchmarks in
unsupervised and semi-supervised settings reveal that TS-CoT outperforms
existing methods. Furthermore, the representations learned by TS-CoT
transfer well to downstream tasks through fine-tuning.
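To make the two-view idea above concrete, here is a minimal sketch of one co-training-style contrastive update with two encoders over the same input. This is not the authors' implementation: the encoder architectures (a temporal CNN and a frequency-domain view), the NT-Xent loss, and all hyperparameters are illustrative assumptions, and the full TS-CoT method refines the two views iteratively rather than with a single joint loss.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.1):
    """NT-Xent contrastive loss: matching rows of z1 and z2 are positives."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature      # (B, B) cross-view similarities
    labels = torch.arange(z1.size(0))       # positives lie on the diagonal
    return F.cross_entropy(logits, labels)

class TemporalEncoder(nn.Module):
    """View 1: small 1-D convolutional encoder over the raw series (an assumption)."""
    def __init__(self, in_ch, dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_ch, dim, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(dim, dim, kernel_size=3, padding=1), nn.AdaptiveAvgPool1d(1),
        )

    def forward(self, x):                    # x: (B, C, T)
        return self.net(x).squeeze(-1)       # (B, dim)

class SpectralEncoder(nn.Module):
    """View 2: encodes the magnitude spectrum of the series (also an assumption)."""
    def __init__(self, in_ch, seq_len, dim=64):
        super().__init__()
        self.proj = nn.Linear(in_ch * (seq_len // 2 + 1), dim)

    def forward(self, x):                    # x: (B, C, T)
        spec = torch.fft.rfft(x, dim=-1).abs()   # frequency-domain view
        return self.proj(spec.flatten(1))        # (B, dim)

def co_training_step(x, enc1, enc2, opt):
    """One update: each view's embedding is pulled toward the other view's,
    so complementary views can correct each other on noisy inputs."""
    z1, z2 = enc1(x), enc2(x)
    loss = nt_xent(z1, z2) + nt_xent(z2, z1)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Usage on a toy batch of noisy univariate series.
enc1, enc2 = TemporalEncoder(in_ch=1), SpectralEncoder(in_ch=1, seq_len=128)
opt = torch.optim.Adam(list(enc1.parameters()) + list(enc2.parameters()), lr=1e-3)
x = torch.randn(32, 1, 128) + 0.3 * torch.randn(32, 1, 128)   # signal + noise
print(co_training_step(x, enc1, enc2, opt))
```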
Related papers
- TimeSiam: A Pre-Training Framework for Siamese Time-Series Modeling [67.02157180089573]
Time series pre-training has recently garnered wide attention for its potential to reduce labeling expenses and benefit various downstream tasks.
This paper proposes TimeSiam as a simple but effective self-supervised pre-training framework for time series, based on Siamese networks.
arXiv Detail & Related papers (2024-02-04T13:10:51Z)
- Distillation Enhanced Time Series Forecasting Network with Momentum Contrastive Learning [7.4106801792345705]
We propose DE-TSMCL, an innovative distillation enhanced framework for long sequence time series forecasting.
Specifically, we design a learnable data augmentation mechanism which adaptively learns whether to mask a timestamp.
Then, we propose a contrastive learning task with momentum update to explore inter-sample and intra-temporal correlations of time series.
By combining losses from multiple tasks, we can learn effective representations for the downstream forecasting task.
arXiv Detail & Related papers (2024-01-31T12:52:10Z)
- Soft Contrastive Learning for Time Series [5.752266579415516]
We propose SoftCLT, a simple yet effective soft contrastive learning strategy for time series.
Specifically, we define soft assignments for 1) the instance-wise contrastive loss by the distance between time series in the data space, and 2) the temporal contrastive loss by the difference of timestamps (a rough sketch of this soft-assignment idea is given after this list).
In experiments, we demonstrate that SoftCLT consistently improves the performance in various downstream tasks including classification, semi-supervised learning, transfer learning, and anomaly detection.
arXiv Detail & Related papers (2023-12-27T06:15:00Z)
- Series2Vec: Similarity-based Self-supervised Representation Learning for Time Series Classification [13.775977945756415]
We introduce a novel approach called Series2Vec for self-supervised representation learning.
Series2Vec is trained to predict the similarity between two series in both temporal and spectral domains.
We show that Series2Vec performs comparably with fully supervised training and offers high efficiency in datasets with limited-labeled data.
arXiv Detail & Related papers (2023-12-07T02:30:40Z)
- Improving Time Series Encoding with Noise-Aware Self-Supervised Learning and an Efficient Encoder [15.39384259348351]
We propose an innovative training strategy that promotes consistent representation learning, accounting for the presence of noise-prone signals in natural time series.
We also propose an encoder architecture that incorporates dilated convolution within the Inception block, resulting in a scalable and robust network with a wide receptive field.
arXiv Detail & Related papers (2023-06-11T04:00:11Z)
- Towards Similarity-Aware Time-Series Classification [51.2400839966489]
We study time-series classification (TSC), a fundamental task of time-series data mining.
We propose Similarity-Aware Time-Series Classification (SimTSC), a framework that models similarity information with graph neural networks (GNNs).
arXiv Detail & Related papers (2022-01-05T02:14:57Z)
- Time-Series Representation Learning via Temporal and Contextual Contrasting [14.688033556422337]
We propose an unsupervised Time-Series representation learning framework via Temporal and Contextual Contrasting (TS-TCC).
First, the raw time-series data are transformed into two different yet correlated views by using weak and strong augmentations.
Second, we propose a novel temporal contrasting module to learn robust temporal representations by designing a tough cross-view prediction task.
Third, to further learn discriminative representations, we propose a contextual contrasting module built upon the contexts from the temporal contrasting module.
arXiv Detail & Related papers (2021-06-26T23:56:31Z)
- Voice2Series: Reprogramming Acoustic Models for Time Series Classification [65.94154001167608]
Voice2Series is a novel end-to-end approach that reprograms acoustic models for time series classification.
We show that V2S either outperforms or is tied with state-of-the-art methods on 20 tasks, and improves their average accuracy by 1.84%.
arXiv Detail & Related papers (2021-06-17T07:59:15Z)
- Self-supervised Co-training for Video Representation Learning [103.69904379356413]
We investigate the benefit of adding semantic-class positives to instance-based Info Noise Contrastive Estimation training.
We propose a novel self-supervised co-training scheme to improve the popular InfoNCE loss.
We evaluate the quality of the learnt representation on two different downstream tasks: action recognition and video retrieval.
arXiv Detail & Related papers (2020-10-19T17:59:01Z)
- SeCo: Exploring Sequence Supervision for Unsupervised Representation Learning [114.58986229852489]
In this paper, we explore the basic and generic supervision in the sequence from spatial, sequential and temporal perspectives.
We derive a particular form of contrastive learning, named SeCo.
SeCo shows superior results under the linear protocol on action recognition, untrimmed activity recognition and object tracking.
arXiv Detail & Related papers (2020-08-03T15:51:35Z)
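To illustrate the soft-assignment idea flagged in the Soft Contrastive Learning for Time Series (SoftCLT) entry above, here is a minimal sketch under stated assumptions: instance-wise weights decay with data-space L2 distance, temporal weights decay with timestamp gaps, and either can replace the hard 0/1 targets in an InfoNCE-style loss. The function names, the sigmoid form, and the tau sharpness values are illustrative choices, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def soft_instance_weights(x, tau=1.0):
    """Soft assignments between instances: series that are close in data space
    get weights near 1, distant ones near 0 (illustrative sigmoid form)."""
    flat = x.flatten(1)                                 # (B, C*T)
    dist = torch.cdist(flat, flat)                      # pairwise L2 distances
    return 2.0 * torch.sigmoid(-dist / tau)             # in (0, 1], 1 on the diagonal

def soft_temporal_weights(seq_len, tau=4.0):
    """Soft assignments between timestamps: nearby time steps are 'softer'
    positives than distant ones."""
    t = torch.arange(seq_len, dtype=torch.float)
    gap = (t[:, None] - t[None, :]).abs()               # |t_i - t_j|
    return 2.0 * torch.sigmoid(-gap / tau)

def soft_contrastive_loss(z, weights, temperature=0.1):
    """InfoNCE-style loss with hard 0/1 targets replaced by soft weights.
    Self-pairs are dropped from the targets (kept in the denominator for brevity)."""
    z = F.normalize(z, dim=1)
    logits = z @ z.t() / temperature
    w = weights * (1.0 - torch.eye(z.size(0)))          # remove self-similarity
    targets = w / w.sum(dim=1, keepdim=True)             # row-normalised soft labels
    return -(targets * F.log_softmax(logits, dim=1)).sum(dim=1).mean()

# Usage: instance-wise soft contrastive loss on toy data; the temporal weights
# would be applied analogously along the time axis of per-timestep embeddings.
x = torch.randn(16, 1, 50)          # raw series, used only to measure distances
z = torch.randn(16, 64)             # embeddings from any encoder
print(soft_contrastive_loss(z, soft_instance_weights(x)).item())
```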
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.