Parametric Augmentation for Time Series Contrastive Learning
- URL: http://arxiv.org/abs/2402.10434v1
- Date: Fri, 16 Feb 2024 03:51:14 GMT
- Title: Parametric Augmentation for Time Series Contrastive Learning
- Authors: Xu Zheng, Tianchun Wang, Wei Cheng, Aitian Ma, Haifeng Chen, Mo Sha,
Dongsheng Luo
- Abstract summary: We create positive examples that assist the model in learning robust and discriminative representations.
Usually, preset human intuition directs the selection of relevant data augmentations.
We propose a contrastive learning framework with parametric augmentation, AutoTCL, which can be adaptively employed to support time series representation learning.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Modern techniques like contrastive learning have been effectively used in
many areas, including computer vision, natural language processing, and
graph-structured data. Creating positive examples that assist the model in
learning robust and discriminative representations is a crucial stage in
contrastive learning approaches. Usually, preset human intuition directs the
selection of relevant data augmentations. Due to patterns that are easily
recognized by humans, this rule of thumb works well in the vision and language
domains. However, it is impractical to visually inspect the temporal structures
in time series. The diversity of time series augmentations at both the dataset
and instance levels makes it difficult to choose meaningful augmentations on
the fly. In this study, we address this gap by analyzing time series data
augmentation using information theory and summarizing the most commonly adopted
augmentations in a unified format. We then propose a contrastive learning
framework with parametric augmentation, AutoTCL, which can be adaptively
employed to support time series representation learning. The proposed approach
is encoder-agnostic, allowing it to be seamlessly integrated with different
backbone encoders. Experiments on univariate forecasting tasks demonstrate the
highly competitive results of our method, with an average 6.5% reduction in
MSE and a 4.7% reduction in MAE over the leading baselines. In classification
tasks, AutoTCL achieves a 1.2% increase in average accuracy.
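The recipe the abstract describes, augment each series into a positive pair, encode both views, and pull the pair together with a contrastive loss, can be sketched as follows. This is a minimal illustration: the augmentation family, the random-projection "encoder", and all names are assumptions for the sketch, not AutoTCL's actual parametric augmentation network or training objective.

```python
import numpy as np

rng = np.random.default_rng(0)

def parametric_augment(x, scale_sigma=0.1, jitter_sigma=0.05):
    """Hypothetical parametric augmentation for a 1-D time series:
    random amplitude scaling plus additive jitter. The two sigmas
    stand in for the factors a framework like AutoTCL would learn."""
    scale = 1.0 + scale_sigma * rng.standard_normal()
    jitter = jitter_sigma * rng.standard_normal(x.shape)
    return scale * x + jitter

def info_nce_loss(z1, z2, temperature=0.5):
    """InfoNCE loss over a batch of embeddings: row i of z1 and z2
    form a positive pair; all other rows act as negatives."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature             # pairwise similarities
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))           # positives on the diagonal

# Toy batch: 4 series of length 32, embedded by a stand-in "encoder"
# (a fixed random projection) after two independent augmentations.
batch = rng.standard_normal((4, 32))
proj = rng.standard_normal((32, 8))              # stand-in encoder weights
z1 = np.stack([parametric_augment(x) for x in batch]) @ proj
z2 = np.stack([parametric_augment(x) for x in batch]) @ proj
loss = info_nce_loss(z1, z2)
print(round(float(loss), 4))
```

Because the approach is encoder-agnostic, the random projection above could be swapped for any backbone that maps a series to a fixed-length embedding.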
Related papers
- Contrastive Difference Predictive Coding [79.74052624853303]
We introduce a temporal difference version of contrastive predictive coding that stitches together pieces of different time series data to decrease the amount of data required to learn predictions of future events.
We apply this representation learning method to derive an off-policy algorithm for goal-conditioned RL.
arXiv Detail & Related papers (2023-10-31T03:16:32Z)
- Time Series Contrastive Learning with Information-Aware Augmentations [57.45139904366001]
A key component of contrastive learning is to select appropriate augmentations imposing some priors to construct feasible positive samples.
How to find the desired augmentations of time series data that are meaningful for given contrastive learning tasks and datasets remains an open question.
We propose a new contrastive learning approach with information-aware augmentations, InfoTS, that adaptively selects optimal augmentations for time series representation learning.
arXiv Detail & Related papers (2023-03-21T15:02:50Z)
- The Trade-off between Universality and Label Efficiency of Representations from Contrastive Learning [32.15608637930748]
We show that there exists a trade-off between the two desiderata so that one may not be able to achieve both simultaneously.
We provide analysis using a theoretical data model and show that, while more diverse pre-training data result in more diverse features for different tasks, they put less emphasis on task-specific features.
arXiv Detail & Related papers (2023-02-28T22:14:33Z)
- LEAVES: Learning Views for Time-Series Data in Contrastive Learning [16.84326709739788]
We propose a module for automating view generation for time-series data in contrastive learning, named learning views for time-series data (LEAVES).
The proposed method is more effective in finding reasonable views and performs downstream tasks better than the baselines.
arXiv Detail & Related papers (2022-10-13T20:18:22Z)
- Automatic Data Augmentation via Invariance-Constrained Learning [94.27081585149836]
Underlying data structures are often exploited to improve the solution of learning tasks.
Data augmentation induces these symmetries during training by applying multiple transformations to the input data.
This work tackles these issues by automatically adapting the data augmentation while solving the learning task.
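Adapting the augmentation while solving the learning task can be illustrated, very loosely, with a pool of candidate transformations whose sampling weights are updated from the observed loss. The pool, the surrogate loss, and the down-weighting rule below are hypothetical simplifications, not the paper's invariance-constrained optimization.

```python
import numpy as np

rng = np.random.default_rng(1)

# A small pool of candidate transformations for 1-D signals.
transforms = {
    "identity": lambda x: x,
    "flip":     lambda x: x[::-1],
    "noise":    lambda x: x + 0.1 * rng.standard_normal(x.shape),
}

# One adaptable weight (logit) per transformation.
logits = {name: 0.0 for name in transforms}

def sample_transform():
    """Sample a transformation with probability softmax(logits)."""
    names = list(transforms)
    w = np.exp([logits[n] for n in names])
    w = w / w.sum()
    name = str(rng.choice(names, p=w))
    return name, transforms[name]

def update_weight(name, loss, lr=0.5):
    """Down-weight transformations that produced high loss."""
    logits[name] -= lr * loss

# Toy adaptation loop: the "loss" is a stand-in surrogate that
# penalizes distortion, so the identity transform should win out.
x = np.linspace(0.0, 1.0, 16)
for _ in range(200):
    name, t = sample_transform()
    loss = float(np.mean((t(x) - x) ** 2))
    update_weight(name, loss)

best = max(logits, key=logits.get)
print(best)  # -> identity
```

In a real training loop the surrogate would be the task loss itself, so the distribution over augmentations shifts toward transformations the model should be invariant to.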
arXiv Detail & Related papers (2022-09-29T18:11:01Z)
- Revisiting Contrastive Methods for Unsupervised Learning of Visual Representations [78.12377360145078]
Contrastive self-supervised learning has outperformed supervised pretraining on many downstream tasks like segmentation and object detection.
In this paper, we first study how biases in the dataset affect existing methods.
We show that current contrastive approaches work surprisingly well across: (i) object- versus scene-centric, (ii) uniform versus long-tailed and (iii) general versus domain-specific datasets.
arXiv Detail & Related papers (2021-06-10T17:59:13Z)
- PSEUDo: Interactive Pattern Search in Multivariate Time Series with Locality-Sensitive Hashing and Relevance Feedback [3.347485580830609]
PSEUDo is an adaptive feature learning technique for exploring visual patterns in multi-track sequential data.
Our algorithm features sub-linear training and inference time.
We demonstrate the superiority of PSEUDo in terms of efficiency, accuracy, and steerability.
arXiv Detail & Related papers (2021-04-30T13:00:44Z)
- i-Mix: A Domain-Agnostic Strategy for Contrastive Representation Learning [117.63815437385321]
We propose i-Mix, a simple yet effective domain-agnostic regularization strategy for improving contrastive representation learning.
In experiments, we demonstrate that i-Mix consistently improves the quality of learned representations across domains.
arXiv Detail & Related papers (2020-10-17T23:32:26Z)
- Omni-supervised Facial Expression Recognition via Distilled Data [120.11782405714234]
We propose omni-supervised learning to exploit reliable samples in a large amount of unlabeled data for network training.
To tackle the scale of this data, we propose to apply a dataset distillation strategy to compress the created dataset into several informative class-wise images.
We experimentally verify that the new dataset can significantly improve the ability of the learned FER model.
arXiv Detail & Related papers (2020-02-18T15:24:33Z)
- Conditional Mutual information-based Contrastive Loss for Financial Time Series Forecasting [12.0855096102517]
We present a representation learning framework for financial time series forecasting.
In this paper, we propose to first learn compact representations from time series data, then use the learned representations to train a simpler model for predicting time series movements.
arXiv Detail & Related papers (2020-02-18T15:24:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.