Multi-Granularity Framework for Unsupervised Representation Learning of
Time Series
- URL: http://arxiv.org/abs/2312.07248v1
- Date: Tue, 12 Dec 2023 13:25:32 GMT
- Title: Multi-Granularity Framework for Unsupervised Representation Learning of
Time Series
- Authors: Chengyang Ye and Qiang Ma
- Abstract summary: This paper proposes an unsupervised framework to realize multi-granularity representation learning for time series.
Specifically, we employed a cross-granularity transformer to develop an association between fine- and coarse-grained representations.
In addition, we introduced a retrieval task as an unsupervised training task to learn the multi-granularity representation of time series.
- Score: 1.003058966910087
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Representation learning plays a critical role in the analysis of time series
data and has high practical value across a wide range of applications,
including trend analysis, time series data retrieval, and forecasting. In
practice, data confusion is a significant issue as it can considerably impact
the effectiveness and accuracy of data analysis, machine learning models and
decision-making processes. In general, previous studies did not consider
variability across different levels of granularity, resulting in inadequate
information utilization and further exacerbating the issue of data confusion.
This paper proposes an unsupervised framework to realize multi-granularity
representation learning for time series. Specifically, we employed a
cross-granularity transformer to develop an association between fine- and
coarse-grained representations. In addition, we introduced a retrieval task as
an unsupervised training task to learn the multi-granularity representation of
time series. Moreover, a novel loss function was designed to obtain the
comprehensive multi-granularity representation of the time series via
unsupervised learning. The experimental results revealed that the proposed
framework demonstrates significant advantages over alternative representation
learning models.
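The abstract names three ingredients: a cross-granularity transformer, a retrieval pretext task, and a novel loss. No code accompanies the abstract, so the sketch below is only a minimal illustration of how such a block and a retrieval-style objective could be wired up in PyTorch; the module name, the average-pooling coarse view, and the InfoNCE-style loss are all assumptions, not the authors' design (the paper describes its loss as novel).
```python
# Minimal sketch (assumptions, not the authors' code): fine-grained tokens
# attend to a coarse-grained (downsampled) view of the same series.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossGranularityBlock(nn.Module):
    def __init__(self, dim=64, num_heads=4, pool=8):
        super().__init__()
        self.pool = nn.AvgPool1d(kernel_size=pool, stride=pool)  # coarse view
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, fine):
        # fine: (batch, length, dim) fine-grained representations
        coarse = self.pool(fine.transpose(1, 2)).transpose(1, 2)
        # queries from the fine view; keys/values from the coarse view
        out, _ = self.attn(query=fine, key=coarse, value=coarse)
        return self.norm(fine + out)  # residual connection

def retrieval_loss(queries, keys, temperature=0.1):
    # InfoNCE-style retrieval objective: each query representation should
    # retrieve its own key among all keys in the batch. The paper's actual
    # loss is described as novel and is not reproduced here.
    q, k = F.normalize(queries, dim=-1), F.normalize(keys, dim=-1)
    logits = q @ k.t() / temperature
    return F.cross_entropy(logits, torch.arange(len(q)))

x = torch.randn(2, 128, 64)        # two series, 128 steps, 64-dim tokens
y = CrossGranularityBlock()(x)     # fused multi-granularity representation
print(y.shape)                     # torch.Size([2, 128, 64])
```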
Related papers
- Graph Spatiotemporal Process for Multivariate Time Series Anomaly Detection with Missing Values [67.76168547245237]
We introduce a novel framework called GST-Pro, which utilizes a graph spatiotemporal process and an anomaly scorer to detect anomalies.
Our experimental results show that the GST-Pro method can effectively detect anomalies in time series data and outperforms state-of-the-art methods.
arXiv Detail & Related papers (2024-01-11T10:10:16Z)
- Unsupervised Representation Learning for Time Series: A Review [20.00853543048447]
Unsupervised representation learning approaches aim to learn discriminative feature representations from unlabeled data, without the requirement of annotating every sample.
We conduct a literature review of existing rapidly evolving unsupervised representation learning approaches for time series.
We empirically evaluate state-of-the-art approaches, especially the rapidly evolving contrastive learning methods, on 9 diverse real-world datasets.
arXiv Detail & Related papers (2023-08-03T07:28:06Z)
- Continual Vision-Language Representation Learning with Off-Diagonal Information [112.39419069447902]
Multi-modal contrastive learning frameworks like CLIP typically require a large amount of image-text samples for training.
This paper discusses the feasibility of continual CLIP training using streaming data.
arXiv Detail & Related papers (2023-05-11T08:04:46Z)
- Time Series Contrastive Learning with Information-Aware Augmentations [57.45139904366001]
A key component of contrastive learning is selecting appropriate augmentations that impose priors for constructing feasible positive samples.
How to find the desired augmentations of time series data that are meaningful for given contrastive learning tasks and datasets remains an open question.
We propose a new contrastive learning approach with information-aware augmentations, InfoTS, that adaptively selects optimal augmentations for time series representation learning.
arXiv Detail & Related papers (2023-03-21T15:02:50Z)
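As a point of reference for the InfoTS entry above, here is a generic, hypothetical sketch of augmentation-based contrastive pre-training for time series. InfoTS's information-aware, adaptive augmentation selection is the paper's contribution and is not reproduced; every name below is an assumption.
```python
# Generic augmentation-based contrastive pre-training (hypothetical sketch;
# not InfoTS's adaptive selection mechanism).
import torch
import torch.nn.functional as F

def jitter(x, sigma=0.03):
    # additive Gaussian noise, a common time series augmentation
    return x + sigma * torch.randn_like(x)

def scaling(x, sigma=0.1):
    # random per-series amplitude scaling
    return x * (1.0 + sigma * torch.randn(x.size(0), 1, 1))

def info_nce(z1, z2, temperature=0.1):
    # two augmented views of the same series are positives; the other
    # series in the batch serve as negatives
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / temperature
    return F.cross_entropy(logits, torch.arange(len(z1)))

# usage, assuming `encoder` maps (batch, channels, length) -> (batch, dim):
#   loss = info_nce(encoder(jitter(x)), encoder(scaling(x)))
```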
- Multi-scale Attention Flow for Probabilistic Time Series Forecasting [68.20798558048678]
We propose a novel non-autoregressive deep learning model, called Multi-scale Attention Normalizing Flow (MANF).
Our model avoids the influence of cumulative error and does not increase the time complexity.
Our model achieves state-of-the-art performance on many popular multivariate datasets.
arXiv Detail & Related papers (2022-05-16T07:53:42Z)
- Mixing Up Contrastive Learning: Self-Supervised Representation Learning for Time Series [22.376529167056376]
We propose an unsupervised contrastive learning framework motivated from the perspective of label smoothing.
The proposed approach uses a novel contrastive loss that naturally exploits a data augmentation scheme.
Experiments demonstrate the framework's superior performance compared to other representation learning approaches.
arXiv Detail & Related papers (2022-03-17T11:49:21Z)
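To make the mixup idea in the entry above concrete, the sketch below blends two series and uses the mixing coefficient as a soft contrastive target. This is a simplified, hypothetical rendering, not the paper's exact loss.
```python
# Hypothetical mixup-style contrastive sketch: the mixed sample's
# similarity to each source should match the mixing proportions.
import torch
import torch.nn.functional as F

def mixup_pair(x1, x2, alpha=0.2):
    # draw a mixing coefficient and blend two series
    lam = torch.distributions.Beta(alpha, alpha).sample()
    return lam * x1 + (1.0 - lam) * x2, lam

def mixup_contrastive_loss(z_mix, z1, z2, lam, temperature=0.5):
    # soft cross-entropy: the mixed embedding should be lam-similar to
    # source 1 and (1 - lam)-similar to source 2
    z_mix, z1, z2 = (F.normalize(z, dim=-1) for z in (z_mix, z1, z2))
    s1 = (z_mix * z1).sum(-1) / temperature
    s2 = (z_mix * z2).sum(-1) / temperature
    log_p = torch.stack([s1, s2], dim=-1).log_softmax(dim=-1)
    return -(lam * log_p[..., 0] + (1 - lam) * log_p[..., 1]).mean()
```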
- Interpretable Time-series Representation Learning With Multi-Level Disentanglement [56.38489708031278]
Disentangle Time Series (DTS) is a novel disentanglement enhancement framework for sequential data.
DTS generates hierarchical semantic concepts as the interpretable and disentangled representation of time-series.
DTS achieves superior performance in downstream applications, with high interpretability of semantic concepts.
arXiv Detail & Related papers (2021-05-17T22:02:24Z)
- Multi-Time Attention Networks for Irregularly Sampled Time Series [18.224344440110862]
Irregular sampling occurs in many time series modeling applications.
We propose a new deep learning framework for this setting that we call Multi-Time Attention Networks.
Our results show that our approach performs as well or better than a range of baseline and recently proposed models.
arXiv Detail & Related papers (2021-01-25T18:57:42Z)
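The entry above concerns attention over irregularly sampled observations; the sketch below shows one simplified way to attend from regular reference times to irregular observation times via a learned continuous-time embedding. It is an assumption-laden simplification, not the authors' architecture.
```python
# Simplified continuous-time attention sketch (hypothetical; not the
# paper's multi-time attention module).
import torch
import torch.nn as nn

class TimeEmbedding(nn.Module):
    # maps a real-valued timestamp to a vector so that attention can
    # compare arbitrary, off-grid time points
    def __init__(self, dim=32):
        super().__init__()
        self.proj = nn.Linear(1, dim)

    def forward(self, t):              # t: (..., 1) timestamps
        return torch.sin(self.proj(t))

def attend(t_ref, t_obs, values, emb):
    # queries: regular reference times; keys: irregular observed times
    q = emb(t_ref.unsqueeze(-1))       # (num_ref, dim)
    k = emb(t_obs.unsqueeze(-1))       # (num_obs, dim)
    w = torch.softmax(q @ k.t() / k.size(-1) ** 0.5, dim=-1)
    return w @ values                  # (num_ref, value_dim)

emb = TimeEmbedding()
out = attend(torch.linspace(0, 1, 16),        # 16 reference times
             torch.rand(50).sort().values,    # 50 irregular observations
             torch.randn(50, 8), emb)         # 8-dim observed values
print(out.shape)                              # torch.Size([16, 8])
```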
- Deep Partial Multi-View Learning [94.39367390062831]
We propose a novel framework termed Cross Partial Multi-View Networks (CPM-Nets).
We first provide a formal definition of completeness and versatility for multi-view representation.
We then theoretically prove the versatility of the learned latent representations.
arXiv Detail & Related papers (2020-11-12T02:29:29Z) - A Transformer-based Framework for Multivariate Time Series
Representation Learning [12.12960851087613]
Pre-trained models can potentially be used for downstream tasks such as regression, classification, forecasting, and missing value imputation.
We show that our modeling approach is the most successful method employing unsupervised learning of multivariate time series presented to date.
We demonstrate that unsupervised pre-training of our transformer models offers a substantial performance benefit over fully supervised learning.
arXiv Detail & Related papers (2020-10-06T15:14:46Z)
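The entry above highlights unsupervised pre-training of transformers on multivariate series; a common formulation of such pre-training is masked-value reconstruction, sketched below. The helper names and masking scheme are illustrative assumptions, not the paper's exact recipe.
```python
# Illustrative masked-value reconstruction objective for unsupervised
# pre-training on multivariate series (assumptions, not the paper's code).
import torch
import torch.nn as nn

def masked_reconstruction_loss(encoder, head, x, mask_ratio=0.15):
    # x: (batch, length, channels). Hide a random subset of time steps
    # and train the model to reconstruct them from the visible context.
    mask = torch.rand(x.shape[:2]) < mask_ratio   # (batch, length) bool
    x_in = x.clone()
    x_in[mask] = 0.0                              # zero out masked steps
    recon = head(encoder(x_in))                   # (batch, length, channels)
    return ((recon - x)[mask] ** 2).mean()        # MSE on masked positions

# `encoder` could be an nn.TransformerEncoder over per-step projections;
# `head` a linear layer mapping hidden states back to channel values.
```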
- Multivariate Time-series Anomaly Detection via Graph Attention Network [27.12694738711663]
Anomaly detection on multivariate time-series is of great importance in both data mining research and industrial applications.
One major limitation of existing methods is that they do not explicitly capture the relationships between different time series.
We propose a novel self-supervised framework for multivariate time-series anomaly detection to address this issue.
arXiv Detail & Related papers (2020-09-04T07:46:19Z)