Unsupervised Representation Learning for Time Series: A Review
- URL: http://arxiv.org/abs/2308.01578v1
- Date: Thu, 3 Aug 2023 07:28:06 GMT
- Title: Unsupervised Representation Learning for Time Series: A Review
- Authors: Qianwen Meng, Hangwei Qian, Yong Liu, Yonghui Xu, Zhiqi Shen, Lizhen
Cui
- Abstract summary: Unsupervised representation learning approaches aim to learn discriminative feature representations from unlabeled data, without the requirement of annotating every sample.
We conduct a literature review of existing rapidly evolving unsupervised representation learning approaches for time series.
We empirically evaluate state-of-the-art approaches, especially the rapidly evolving contrastive learning methods, on 9 diverse real-world datasets.
- Score: 20.00853543048447
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Unsupervised representation learning approaches aim to learn discriminative
feature representations from unlabeled data, without the requirement of
annotating every sample. Unsupervised representation learning is particularly
crucial for time series data, due to the unique annotation bottleneck
caused by its complex characteristics and lack of visual cues compared with
other data modalities. In recent years, unsupervised representation learning
techniques have advanced rapidly in various domains. However, there is a lack
of systematic analysis of unsupervised representation learning approaches for
time series. To fill the gap, we conduct a comprehensive literature review of
existing rapidly evolving unsupervised representation learning approaches for
time series. Moreover, we develop a unified and standardized library,
named ULTS (i.e., Unsupervised Learning for Time Series), to facilitate fast
implementations and unified evaluations on various models. With ULTS, we
empirically evaluate state-of-the-art approaches, especially the rapidly
evolving contrastive learning methods, on 9 diverse real-world datasets. We
further discuss practical considerations as well as open research challenges on
unsupervised representation learning for time series to facilitate future
research in this field.
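The contrastive learning methods the survey evaluates with ULTS share one core idea: pull embeddings of two augmented views of the same series together while pushing apart views of different series. The snippet below is a minimal, self-contained sketch of that idea using an NT-Xent-style loss; the jitter augmentation and the summary-statistic encoder are illustrative placeholders, not the ULTS implementation.

```python
import numpy as np

def jitter(x, sigma=0.1, rng=None):
    """A common time-series augmentation: add Gaussian noise to form a view."""
    if rng is None:
        rng = np.random.default_rng(0)
    return x + rng.normal(0.0, sigma, size=x.shape)

def encode(x):
    """Toy stand-in for a learned encoder: simple summary statistics."""
    return np.array([x.mean(), x.std(), x.min(), x.max()])

def nt_xent(z1, z2, tau=0.5):
    """NT-Xent loss over a batch of positive pairs z1[i] <-> z2[i]."""
    z = np.concatenate([z1, z2], axis=0)               # (2N, d)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # unit-normalize
    sim = z @ z.T / tau                                # scaled cosine similarity
    np.fill_diagonal(sim, -np.inf)                     # exclude self-pairs
    n = len(z1)
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    log_denom = np.log(np.exp(sim).sum(axis=1))
    return float(np.mean(log_denom - sim[np.arange(2 * n), pos]))

rng = np.random.default_rng(42)
batch = rng.normal(size=(4, 100))                       # 4 toy univariate series
z1 = np.stack([encode(jitter(s, rng=rng)) for s in batch])
z2 = np.stack([encode(jitter(s, rng=rng)) for s in batch])
loss = nt_xent(z1, z2)                                  # positive scalar
```

Minimizing this loss with a real encoder (e.g. a dilated-convolution or transformer backbone, as in the surveyed methods) is what yields the representations that are then evaluated on downstream classification.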
Related papers
- Self-Supervised Learning of Disentangled Representations for Multivariate Time-Series [10.99576829280084]
TimeDRL is a framework for multivariate time-series representation learning with dual-level disentangled embeddings.
TimeDRL features: (i) timestamp-level and instance-level embeddings using a [CLS] token strategy; (ii) timestamp-predictive and instance-contrastive tasks for representation learning; and (iii) avoidance of augmentation methods to eliminate inductive biases.
Experiments on forecasting and classification datasets show TimeDRL outperforms existing methods, with further validation in semi-supervised settings with limited labeled data.
arXiv Detail & Related papers (2024-10-16T14:24:44Z)
- Multi-Granularity Framework for Unsupervised Representation Learning of Time Series [1.003058966910087]
This paper proposes an unsupervised framework to realize multi-granularity representation learning for time series.
Specifically, we employ a cross-granularity transformer to develop an association between fine- and coarse-grained representations.
In addition, we introduce a retrieval task as an unsupervised training task to learn the multi-granularity representation of time series.
arXiv Detail & Related papers (2023-12-12T13:25:32Z)
- Latent Heterogeneous Graph Network for Incomplete Multi-View Learning [57.49776938934186]
We propose a novel Latent Heterogeneous Graph Network (LHGN) for incomplete multi-view learning.
By learning a unified latent representation, LHGN implicitly realizes a trade-off between consistency and complementarity among different views.
To avoid inconsistencies between the training and test phases, a graph-based transductive learning technique is applied to classification tasks.
arXiv Detail & Related papers (2022-08-29T15:14:21Z)
- Semi-Supervised and Unsupervised Deep Visual Learning: A Survey [76.2650734930974]
Semi-supervised learning and unsupervised learning offer promising paradigms to learn from an abundance of unlabeled visual data.
We review the recent advanced deep learning algorithms on semi-supervised learning (SSL) and unsupervised learning (UL) for visual recognition from a unified perspective.
arXiv Detail & Related papers (2022-08-24T04:26:21Z)
- Causal Reasoning Meets Visual Representation Learning: A Prospective Study [117.08431221482638]
Lack of interpretability, robustness, and out-of-distribution generalization are growing challenges for existing visual models.
Inspired by the strong inference ability of human-level agents, recent years have witnessed great effort in developing causal reasoning paradigms.
This paper aims to provide a comprehensive overview of this emerging field, attract attention, encourage discussions, and highlight the urgency of developing novel causal reasoning methods.
arXiv Detail & Related papers (2022-04-26T02:22:28Z)
- Unsupervised Visual Time-Series Representation Learning and Clustering [2.610470075814367]
Time-series data is generated ubiquitously from Internet-of-Things infrastructure, connected and wearable devices, remote sensing, autonomous driving research, and audio-video communications.
This paper investigates the potential of unsupervised representation learning for these time-series.
arXiv Detail & Related papers (2021-11-19T16:44:33Z)
- Rethinking the Representational Continuity: Towards Unsupervised Continual Learning [45.440192267157094]
Unsupervised continual learning (UCL) aims to learn a sequence of tasks without forgetting the previously acquired knowledge.
We show that reliance on annotated data is not necessary for continual learning.
We propose Lifelong Unsupervised Mixup (LUMP) to alleviate catastrophic forgetting for unsupervised representations.
arXiv Detail & Related papers (2021-10-13T18:38:06Z)
- Interpretable Time-series Representation Learning With Multi-Level Disentanglement [56.38489708031278]
Disentangle Time Series (DTS) is a novel disentanglement enhancement framework for sequential data.
DTS generates hierarchical semantic concepts as the interpretable and disentangled representation of time-series.
DTS achieves superior performance in downstream applications, with high interpretability of semantic concepts.
arXiv Detail & Related papers (2021-05-17T22:02:24Z)
- Deep Partial Multi-View Learning [94.39367390062831]
We propose a novel framework termed Cross Partial Multi-View Networks (CPM-Nets).
We first provide a formal definition of completeness and versatility for multi-view representation.
We then theoretically prove the versatility of the learned latent representations.
arXiv Detail & Related papers (2020-11-12T02:29:29Z)
- A Sober Look at the Unsupervised Learning of Disentangled Representations and their Evaluation [63.042651834453544]
We show that the unsupervised learning of disentangled representations is impossible without inductive biases on both the models and the data.
We observe that while the different methods successfully enforce properties "encouraged" by the corresponding losses, well-disentangled models seemingly cannot be identified without supervision.
Our results suggest that future work on disentanglement learning should be explicit about the role of inductive biases and (implicit) supervision.
arXiv Detail & Related papers (2020-10-27T10:17:15Z)
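The impossibility result in the last entry has a simple numerical illustration: two independent standard-Gaussian latent factors remain independent standard Gaussians after any rotation, so infinitely many "disentangled" factorizations explain exactly the same data and cannot be distinguished without inductive biases or supervision. A minimal sketch of that classic argument (the rotation angle is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
z = rng.standard_normal((n, 2))          # two independent latent factors

theta = 0.7                              # any rotation angle works
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
z_rot = z @ R.T                          # rotated (entangled?) latents

# The rotated code still has (near-)identity covariance: both codes are
# observationally indistinguishable, yet their "factors" differ.
cov = np.cov(z_rot, rowvar=False)
```

Any decoder composed with such a rotation produces the same data distribution, which is why inductive bias or some form of supervision is unavoidable for identifiable disentanglement.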
This list is automatically generated from the titles and abstracts of the papers on this site.