Time Series Contrastive Learning with Information-Aware Augmentations
- URL: http://arxiv.org/abs/2303.11911v1
- Date: Tue, 21 Mar 2023 15:02:50 GMT
- Title: Time Series Contrastive Learning with Information-Aware Augmentations
- Authors: Dongsheng Luo, Wei Cheng, Yingheng Wang, Dongkuan Xu, Jingchao Ni,
Wenchao Yu, Xuchao Zhang, Yanchi Liu, Yuncong Chen, Haifeng Chen, Xiang Zhang
- Abstract summary: A key component of contrastive learning is to select appropriate augmentations imposing some priors to construct feasible positive samples.
How to find the desired augmentations of time series data that are meaningful for given contrastive learning tasks and datasets remains an open question.
We propose a new contrastive learning approach with information-aware augmentations, InfoTS, that adaptively selects optimal augmentations for time series representation learning.
- Score: 57.45139904366001
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Various contrastive learning approaches have been proposed in recent years
and achieve significant empirical success. While effective and prevalent,
contrastive learning has been less explored for time series data. A key
component of contrastive learning is to select appropriate augmentations
imposing some priors to construct feasible positive samples, such that an
encoder can be trained to learn robust and discriminative representations.
Unlike image and language domains, where "desired" augmented samples can be
generated with rules of thumb guided by prefabricated human priors, the ad-hoc
manual selection of time series augmentations is hindered by their diverse and
human-unrecognizable temporal structures. How to find the desired
augmentations of time series data that are meaningful for given contrastive
learning tasks and datasets remains an open question. In this work, we address
the problem by encouraging both high fidelity and variety
based upon information theory. A theoretical analysis leads to the criteria for
selecting feasible data augmentations. On top of that, we propose a new
contrastive learning approach with information-aware augmentations, InfoTS,
that adaptively selects optimal augmentations for time series representation
learning. Experiments on various datasets show highly competitive performance
with up to 12.0% reduction in MSE on forecasting tasks and up to 3.7%
relative improvement in accuracy on classification tasks over the leading
baselines.
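
To make the idea of adaptively selecting augmentations more concrete, the sketch below is a minimal, hypothetical PyTorch illustration, not the authors' InfoTS implementation. The candidate augmentations (jitter, scaling, time masking), the cosine-similarity proxy for fidelity, the entropy proxy for variety, and the REINFORCE-style selector update are all illustrative assumptions; InfoTS derives its selection criteria from an information-theoretic analysis, whereas this sketch only mimics the overall structure of weighting augmentations adaptively during contrastive training.

```python
# Hypothetical sketch of information-aware augmentation selection for
# time-series contrastive learning. This is NOT the authors' InfoTS code:
# the candidate augmentations, the fidelity/variety proxies, and the
# REINFORCE-style selector are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


def jitter(x, sigma=0.1):
    """Additive Gaussian noise."""
    return x + sigma * torch.randn_like(x)


def scaling(x, sigma=0.2):
    """Random per-series amplitude scaling."""
    factor = 1.0 + sigma * torch.randn(x.size(0), 1, 1, device=x.device)
    return x * factor


def time_mask(x, ratio=0.2):
    """Zero out a random contiguous window along the time axis."""
    T = x.size(1)
    w = max(1, int(ratio * T))
    start = torch.randint(0, T - w + 1, (1,)).item()
    out = x.clone()
    out[:, start:start + w, :] = 0.0
    return out


CANDIDATES = [jitter, scaling, time_mask]


class Encoder(nn.Module):
    """Tiny 1D-CNN encoder mapping (batch, time, channels) to embeddings."""
    def __init__(self, in_ch=1, dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_ch, 32, 7, padding=3), nn.ReLU(),
            nn.Conv1d(32, dim, 7, padding=3), nn.AdaptiveAvgPool1d(1),
        )

    def forward(self, x):
        z = self.net(x.transpose(1, 2)).squeeze(-1)
        return F.normalize(z, dim=-1)


def info_nce(z1, z2, tau=0.1):
    """Standard InfoNCE loss between two augmented views."""
    logits = z1 @ z2.t() / tau
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)


encoder = Encoder()
aug_logits = nn.Parameter(torch.zeros(len(CANDIDATES)))  # selector weights
opt = torch.optim.Adam(list(encoder.parameters()) + [aug_logits], lr=1e-3)


def train_step(x, variety_weight=0.1):
    probs = F.softmax(aug_logits, dim=0)
    idx = torch.multinomial(probs.detach(), 1).item()  # sample an augmentation
    v1, v2 = CANDIDATES[idx](x), CANDIDATES[idx](x)
    z0, z1, z2 = encoder(x), encoder(v1), encoder(v2)

    loss_contrast = info_nce(z1, z2)
    # Fidelity proxy: augmented views should stay close to the original.
    fidelity = F.cosine_similarity(z0, z1, dim=-1).mean()
    # Variety proxy: keep the selector distribution from collapsing.
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum()
    # REINFORCE-style update: augmentations with low contrastive loss and
    # high fidelity are sampled more often in later steps.
    reward = fidelity.detach() - loss_contrast.detach()
    loss_select = -reward * probs[idx].clamp_min(1e-8).log()

    loss = loss_contrast + loss_select - variety_weight * entropy
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss_contrast.item()


# Usage: x = torch.randn(16, 100, 1)   # (batch, time, channels)
# for _ in range(50): train_step(x)
```

In this toy setup, augmentations whose views yield low contrastive loss while staying close to the original embedding are sampled more often, while the entropy term keeps the selector from collapsing onto a single augmentation.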
Related papers
- Granularity Matters in Long-Tail Learning [62.30734737735273]
We offer a novel perspective on long-tail learning, inspired by an observation: datasets with finer granularity tend to be less affected by data imbalance.
We introduce open-set auxiliary classes that are visually similar to existing ones, aiming to enhance representation learning for both head and tail classes.
To prevent the overwhelming presence of auxiliary classes from disrupting training, we introduce a neighbor-silencing loss.
arXiv Detail & Related papers (2024-10-21T13:06:21Z) - Guidelines for Augmentation Selection in Contrastive Learning for Time Series Classification [7.712601563682029]
We establish a principled framework for selecting augmentations based on dataset characteristics such as trend and seasonality.
We then evaluate the effectiveness of 8 different augmentations across 12 synthetic datasets and 6 real-world datasets.
Our proposed trend-seasonality-based augmentation recommendation algorithm can accurately identify the effective augmentations for a given time series dataset.
arXiv Detail & Related papers (2024-07-12T15:13:16Z) - Parametric Augmentation for Time Series Contrastive Learning [33.47157775532995]
We create positive examples that assist the model in learning robust and discriminative representations.
Usually, preset human intuition directs the selection of relevant data augmentations.
We propose a contrastive learning framework with parametric augmentation, AutoTCL, which can be adaptively employed to support time series representation learning.
arXiv Detail & Related papers (2024-02-16T03:51:14Z) - Distillation Enhanced Time Series Forecasting Network with Momentum Contrastive Learning [7.4106801792345705]
We propose DE-TSMCL, an innovative distillation enhanced framework for long sequence time series forecasting.
Specifically, we design a learnable data augmentation mechanism which adaptively learns whether to mask a timestamp.
Then, we propose a contrastive learning task with momentum update to explore inter-sample and intra-temporal correlations of time series.
By deriving the model loss from multiple tasks, we can learn effective representations for the downstream forecasting task.
arXiv Detail & Related papers (2024-01-31T12:52:10Z) - One-Shot Learning as Instruction Data Prospector for Large Language Models [108.81681547472138]
Nuggets uses one-shot learning to select high-quality instruction data from extensive datasets.
We show that instruction tuning with the top 1% of examples curated by Nuggets substantially outperforms conventional methods employing the entire dataset.
arXiv Detail & Related papers (2023-12-16T03:33:12Z) - Contrastive Difference Predictive Coding [79.74052624853303]
We introduce a temporal difference version of contrastive predictive coding that stitches together pieces of different time series data to decrease the amount of data required to learn predictions of future events.
We apply this representation learning method to derive an off-policy algorithm for goal-conditioned RL.
arXiv Detail & Related papers (2023-10-31T03:16:32Z) - ASPEST: Bridging the Gap Between Active Learning and Selective
Prediction [56.001808843574395]
Selective prediction aims to learn a reliable model that abstains from making predictions when uncertain.
Active learning aims to lower the overall labeling effort, and hence human dependence, by querying the most informative examples.
In this work, we introduce a new learning paradigm, active selective prediction, which aims to query more informative samples from the shifted target domain.
arXiv Detail & Related papers (2023-04-07T23:51:07Z) - Mixing Up Contrastive Learning: Self-Supervised Representation Learning
for Time Series [22.376529167056376]
We propose an unsupervised contrastive learning framework motivated from the perspective of label smoothing.
The proposed approach uses a novel contrastive loss that naturally exploits a data augmentation scheme.
Experiments demonstrate the framework's superior performance compared to other representation learning approaches.
arXiv Detail & Related papers (2022-03-17T11:49:21Z) - Omni-supervised Facial Expression Recognition via Distilled Data [120.11782405714234]
We propose omni-supervised learning to exploit reliable samples in a large amount of unlabeled data for network training.
We experimentally verify that the dataset constructed from these reliable samples can significantly improve the ability of the learned FER model.
Since the created dataset can be large, we further propose to apply a dataset distillation strategy to compress it into several informative class-wise images.
arXiv Detail & Related papers (2020-05-18T09:36:51Z) - Time Series Data Augmentation for Deep Learning: A Survey [35.2161833151567]
We systematically review different data augmentation methods for time series data.
We empirically compare different data augmentation methods for different tasks including time series classification, anomaly detection, and forecasting.
arXiv Detail & Related papers (2020-02-27T23:38:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.