TimesURL: Self-supervised Contrastive Learning for Universal Time Series
Representation Learning
- URL: http://arxiv.org/abs/2312.15709v1
- Date: Mon, 25 Dec 2023 12:23:26 GMT
- Title: TimesURL: Self-supervised Contrastive Learning for Universal Time Series
Representation Learning
- Authors: Jiexi Liu, Songcan Chen
- Abstract summary: We propose a novel self-supervised framework named TimesURL to tackle time series representation.
Specifically, we first introduce a frequency-temporal-based augmentation to keep the temporal property unchanged.
We also construct double Universums as a special kind of hard negative to guide better contrastive learning.
- Score: 31.458689807334228
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning universal time series representations applicable to various types of
downstream tasks is challenging but valuable in real applications. Recently,
researchers have attempted to leverage the success of self-supervised
contrastive learning (SSCL) in Computer Vision (CV) and Natural Language
Processing (NLP) to tackle time series representation. Nevertheless, due to the
special temporal characteristics, relying solely on empirical guidance from
other domains may be ineffective for time series and difficult to adapt to
multiple downstream tasks. To this end, we review three parts involved in SSCL
including 1) designing augmentation methods for positive pairs, 2) constructing
(hard) negative pairs, and 3) designing SSCL loss. For 1) and 2), we find that
unsuitable positive and negative pair construction may introduce inappropriate
inductive biases, which neither preserve temporal properties nor provide
sufficient discriminative features. For 3), just exploring segment- or
instance-level semantics information is not enough for learning universal
representation. To remedy the above issues, we propose a novel self-supervised
framework named TimesURL. Specifically, we first introduce a
frequency-temporal-based augmentation to keep the temporal property unchanged.
And then, we construct double Universums as a special kind of hard negative to
guide better contrastive learning. Additionally, we introduce time
reconstruction as a joint optimization objective with contrastive learning to
capture both segment-level and instance-level information. As a result,
TimesURL can learn high-quality universal representations and achieve
state-of-the-art performance in 6 different downstream tasks, including short-
and long-term forecasting, imputation, classification, anomaly detection and
transfer learning.
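The three ingredients described in the abstract (mixup-style "Universum" hard negatives, a contrastive term, and a time-reconstruction term optimized jointly) can be sketched in a few lines of NumPy. This is an illustrative toy under simplified assumptions, not the authors' implementation; the function name `joint_loss` and the hyperparameters `lam`, `tau`, and `alpha` are hypothetical.

```python
import numpy as np

def l2norm(z, axis=-1):
    """Row-normalize embeddings to unit length."""
    return z / (np.linalg.norm(z, axis=axis, keepdims=True) + 1e-8)

def joint_loss(z1, z2, x, x_hat, lam=0.5, tau=0.5, alpha=1.0):
    """Toy joint objective: an InfoNCE-style contrastive loss with
    mixup-based 'Universum' hard negatives, plus time reconstruction.

    z1, z2 : (B, D) embeddings of two augmented views of a batch.
    x, x_hat : (B, T) original series and its reconstruction.
    """
    z1, z2 = l2norm(z1), l2norm(z2)
    B = z1.shape[0]
    # Universum negatives: mix each anchor with a *different* instance,
    # yielding hard negatives that lie close to the decision boundary.
    perm = np.roll(np.arange(B), 1)
    uni = l2norm(lam * z1 + (1 - lam) * z2[perm])
    # Similarity matrix: positives on the diagonal of z1 @ z2.T,
    # with the Universum similarities appended as extra negative columns.
    sim = np.concatenate(
        [z1 @ z2.T, np.sum(z1 * uni, axis=1, keepdims=True)], axis=1
    ) / tau
    sim = sim - sim.max(axis=1, keepdims=True)  # numerical stability
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    contrastive = -log_prob[np.arange(B), np.arange(B)].mean()
    reconstruction = np.mean((x - x_hat) ** 2)  # time-reconstruction term
    return contrastive + alpha * reconstruction
```

In the full method the reconstruction term operates on masked time steps and the contrastive term is applied at both instance and segment level; the sketch collapses both to a single batch-level loss to keep the interaction of the two objectives visible.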
Related papers
- Dynamic Contrastive Learning for Time Series Representation [6.086030037869592]
We propose DynaCL, an unsupervised contrastive representation learning framework for time series.
We demonstrate that DynaCL embeds instances from time series into semantically meaningful clusters.
Our findings also reveal that high scores on unsupervised clustering metrics do not guarantee that the representations are useful in downstream tasks.
arXiv Detail & Related papers (2024-10-20T15:20:24Z)
- Deeply-Coupled Convolution-Transformer with Spatial-temporal Complementary Learning for Video-based Person Re-identification [91.56939957189505]
We propose a novel spatial-temporal complementary learning framework named Deeply-Coupled Convolution-Transformer (DCCT) for high-performance video-based person Re-ID.
Our framework could attain better performances than most state-of-the-art methods.
arXiv Detail & Related papers (2023-04-27T12:16:44Z)
- Time Series Contrastive Learning with Information-Aware Augmentations [57.45139904366001]
A key component of contrastive learning is to select appropriate augmentations imposing some priors to construct feasible positive samples.
How to find the desired augmentations of time series data that are meaningful for given contrastive learning tasks and datasets remains an open question.
We propose a new contrastive learning approach with information-aware augmentations, InfoTS, that adaptively selects optimal augmentations for time series representation learning.
arXiv Detail & Related papers (2023-03-21T15:02:50Z)
- Multi-Task Self-Supervised Time-Series Representation Learning [3.31490164885582]
Time-series representation learning can extract representations from data with temporal dynamics and sparse labels.
We propose a new time-series representation learning method by combining the advantages of self-supervised tasks.
We evaluate the proposed framework on three downstream tasks: time-series classification, forecasting, and anomaly detection.
arXiv Detail & Related papers (2023-03-02T07:44:06Z)
- Self-Promoted Supervision for Few-Shot Transformer [178.52948452353834]
Self-promoted sUpervisioN (SUN) is a few-shot learning framework for vision transformers (ViTs)
SUN pretrains the ViT on the few-shot learning dataset and then uses it to generate individual location-specific supervision for guiding each patch token.
Experiments show that SUN using ViTs significantly surpasses other few-shot learning frameworks with ViTs and is the first one that achieves higher performance than those CNN state-of-the-arts.
arXiv Detail & Related papers (2022-03-14T12:53:27Z)
- Unsupervised Time-Series Representation Learning with Iterative Bilinear Temporal-Spectral Fusion [6.154427471704388]
We propose a unified framework, namely Bilinear Temporal-Spectral Fusion (BTSF)
Specifically, we utilize the instance-level augmentation with a simple dropout on the entire time series for maximally capturing long-term dependencies.
We devise a novel iterative bilinear temporal-spectral fusion to explicitly encode the affinities of abundant time-frequency pairs.
arXiv Detail & Related papers (2022-02-08T14:04:08Z)
- Exploring Temporal Granularity in Self-Supervised Video Representation Learning [99.02421058335533]
This work presents a self-supervised learning framework named TeG to explore Temporal Granularity in learning video representations.
The flexibility of TeG gives rise to state-of-the-art results on 8 video benchmarks, outperforming supervised pre-training in most cases.
arXiv Detail & Related papers (2021-12-08T18:58:42Z)
- Interpretable Time-series Representation Learning With Multi-Level Disentanglement [56.38489708031278]
Disentangle Time Series (DTS) is a novel disentanglement enhancement framework for sequential data.
DTS generates hierarchical semantic concepts as the interpretable and disentangled representation of time-series.
DTS achieves superior performance in downstream applications, with high interpretability of semantic concepts.
arXiv Detail & Related papers (2021-05-17T22:02:24Z)
- SeCo: Exploring Sequence Supervision for Unsupervised Representation Learning [114.58986229852489]
In this paper, we explore the basic and generic supervision in the sequence from spatial, sequential and temporal perspectives.
We derive a particular form named Sequence Contrastive Learning (SeCo).
SeCo shows superior results under the linear protocol on action recognition, untrimmed activity recognition and object tracking.
arXiv Detail & Related papers (2020-08-03T15:51:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.