Mixing Up Contrastive Learning: Self-Supervised Representation Learning for Time Series
- URL: http://arxiv.org/abs/2203.09270v1
- Date: Thu, 17 Mar 2022 11:49:21 GMT
- Title: Mixing Up Contrastive Learning: Self-Supervised Representation Learning for Time Series
- Authors: Kristoffer Wickstrøm, Michael Kampffmeyer, Karl Øyvind Mikalsen, and Robert Jenssen
- Abstract summary: We propose an unsupervised contrastive learning framework motivated from the perspective of label smoothing.
The proposed approach uses a novel contrastive loss that naturally exploits a data augmentation scheme.
Experiments demonstrate the framework's superior performance compared to other representation learning approaches.
- Score: 22.376529167056376
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The lack of labeled data is a key challenge for learning useful
representations from time series data. However, an unsupervised representation
framework that is capable of producing high quality representations could be of
great value. It is key to enabling transfer learning, which is especially
beneficial for medical applications, where there is an abundance of data but
labeling is costly and time consuming. We propose an unsupervised contrastive
learning framework that is motivated from the perspective of label smoothing.
The proposed approach uses a novel contrastive loss that naturally exploits a
data augmentation scheme in which new samples are generated by mixing two data
samples with a mixing component. The task in the proposed framework is to
predict the mixing component, which is utilized as soft targets in the loss
function. Experiments demonstrate the framework's superior performance compared
to other representation learning approaches on both univariate and multivariate
time series and illustrate its benefits for transfer learning for clinical time
series.
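To make the mix-and-predict idea concrete, below is a minimal, illustrative sketch of such a loss in PyTorch. It assumes a generic encoder module and a Beta-distributed mixing coefficient; the names (mixup_contrastive_loss, alpha, tau) are placeholders chosen for this sketch and are not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def mixup_contrastive_loss(encoder, x1, x2, alpha=0.2, tau=0.5):
    """Mixup-style contrastive loss with the mixing coefficient as a soft target.

    x1, x2: two batches of time series with shape (batch, channels, length).
    encoder: any module mapping (batch, channels, length) -> (batch, dim).
    """
    B = x1.size(0)
    # Draw one mixing coefficient per pair and build the mixed samples.
    lam = torch.distributions.Beta(alpha, alpha).sample((B,)).to(x1.device)
    x_mix = lam.view(B, 1, 1) * x1 + (1 - lam).view(B, 1, 1) * x2

    # Encode and L2-normalise the mixed samples and both sources.
    h_mix = F.normalize(encoder(x_mix), dim=1)  # (B, D)
    h1 = F.normalize(encoder(x1), dim=1)        # (B, D)
    h2 = F.normalize(encoder(x2), dim=1)        # (B, D)

    # Cosine similarities of every mixed sample to every candidate source.
    logits = torch.cat([h_mix @ h1.t(), h_mix @ h2.t()], dim=1) / tau  # (B, 2B)
    log_prob = F.log_softmax(logits, dim=1)

    # Soft targets: weight lam on the first source, (1 - lam) on the second.
    idx = torch.arange(B, device=x1.device)
    loss = -(lam * log_prob[idx, idx] + (1 - lam) * log_prob[idx, B + idx]).mean()
    return loss
```

In this sketch the mixing coefficient acts as the soft target: the representation of the mixed sample should be lambda-similar to its first source and (1 - lambda)-similar to its second, which mirrors the label-smoothing view described in the abstract.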
Related papers
- Distillation Enhanced Time Series Forecasting Network with Momentum Contrastive Learning [7.4106801792345705]
We propose DE-TSMCL, an innovative distillation enhanced framework for long sequence time series forecasting.
Specifically, we design a learnable data augmentation mechanism which adaptively learns whether to mask a timestamp.
Then, we propose a contrastive learning task with momentum update to explore inter-sample and intra-temporal correlations of time series.
By developing model loss from multiple tasks, we can learn effective representations for downstream forecasting task.
arXiv Detail & Related papers (2024-01-31T12:52:10Z)
- Dual-View Data Hallucination with Semantic Relation Guidance for Few-Shot Image Recognition [49.26065739704278]
We propose a framework that exploits semantic relations to guide dual-view data hallucination for few-shot image recognition.
An instance-view data hallucination module hallucinates each sample of a novel class to generate new data.
A prototype-view data hallucination module exploits semantic-aware measure to estimate the prototype of a novel class.
arXiv Detail & Related papers (2024-01-13T12:32:29Z)
- Time Series Contrastive Learning with Information-Aware Augmentations [57.45139904366001]
A key component of contrastive learning is to select appropriate augmentations imposing some priors to construct feasible positive samples.
How to find the desired augmentations of time series data that are meaningful for given contrastive learning tasks and datasets remains an open question.
We propose a new contrastive learning approach with information-aware augmentations, InfoTS, that adaptively selects optimal augmentations for time series representation learning.
arXiv Detail & Related papers (2023-03-21T15:02:50Z)
- Multi-Task Self-Supervised Time-Series Representation Learning [3.31490164885582]
Time-series representation learning can extract representations from data with temporal dynamics and sparse labels.
We propose a new time-series representation learning method by combining the advantages of self-supervised tasks.
We evaluate the proposed framework on three downstream tasks: time-series classification, forecasting, and anomaly detection.
arXiv Detail & Related papers (2023-03-02T07:44:06Z)
- The Trade-off between Universality and Label Efficiency of Representations from Contrastive Learning [32.15608637930748]
We show that there exists a trade-off between the two desiderata so that one may not be able to achieve both simultaneously.
We provide analysis using a theoretical data model and show that, while more diverse pre-training data result in more diverse features for different tasks, it puts less emphasis on task-specific features.
arXiv Detail & Related papers (2023-02-28T22:14:33Z)
- DyTed: Disentangled Representation Learning for Discrete-time Dynamic Graph [59.583555454424]
We propose a novel disenTangled representation learning framework for discrete-time Dynamic graphs, namely DyTed.
We specially design a temporal-clips contrastive learning task together with a structure contrastive learning to effectively identify the time-invariant and time-varying representations respectively.
arXiv Detail & Related papers (2022-10-19T14:34:12Z)
- Dense Contrastive Visual-Linguistic Pretraining [53.61233531733243]
Several multimodal representation learning approaches have been proposed that jointly represent image and text.
These approaches achieve superior performance by capturing high-level semantic information from large-scale multimodal pretraining.
We propose unbiased Dense Contrastive Visual-Linguistic Pretraining to replace the region regression and classification with cross-modality region contrastive learning.
arXiv Detail & Related papers (2021-09-24T07:20:13Z)
- A Framework using Contrastive Learning for Classification with Noisy Labels [1.2891210250935146]
We propose a framework using contrastive learning as a pre-training task to perform image classification in the presence of noisy labels.
Recent strategies such as pseudo-labeling, sample selection with Gaussian Mixture models, and weighted supervised contrastive learning have been combined into a fine-tuning phase following the pre-training.
arXiv Detail & Related papers (2021-04-19T18:51:22Z)
- Contrastive learning of strong-mixing continuous-time stochastic processes [53.82893653745542]
Contrastive learning is a family of self-supervised methods where a model is trained to solve a classification task constructed from unlabeled data.
We show that a properly constructed contrastive learning task can be used to estimate the transition kernel for small-to-mid-range intervals in the diffusion case.
arXiv Detail & Related papers (2021-03-03T23:06:47Z)
- Omni-supervised Facial Expression Recognition via Distilled Data [120.11782405714234]
We propose omni-supervised learning to exploit reliable samples in a large amount of unlabeled data for network training.
To make training on the large created dataset efficient, we propose to apply a dataset distillation strategy to compress it into several informative class-wise images.
We experimentally verify that the new dataset can significantly improve the ability of the learned FER model.
arXiv Detail & Related papers (2020-05-18T09:36:51Z)