Self-Supervised Time Series Representation Learning by Inter-Intra
Relational Reasoning
- URL: http://arxiv.org/abs/2011.13548v1
- Date: Fri, 27 Nov 2020 04:04:17 GMT
- Title: Self-Supervised Time Series Representation Learning by Inter-Intra
Relational Reasoning
- Authors: Haoyi Fan, Fengbin Zhang, Yue Gao
- Abstract summary: We present SelfTime: a general self-supervised time series representation learning framework.
We explore the inter-sample relation and intra-temporal relation of time series to learn underlying structural features from unlabeled time series.
Useful representations of time series are extracted from the backbone under the supervision of the relation reasoning heads.
- Score: 18.72937677485634
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Self-supervised learning achieves superior performance in many domains by extracting useful representations from unlabeled data. However, most traditional self-supervised methods focus on exploring inter-sample structure, while less effort has been devoted to the underlying intra-temporal structure, which is important for time series data. In this paper, we present SelfTime, a general self-supervised time series representation learning framework that explores the inter-sample relation and intra-temporal relation of time series to learn underlying structural features from unlabeled time series. Specifically, we first generate inter-sample relations by sampling positive and negative samples of a given anchor sample, and intra-temporal relations by sampling time pieces from this anchor. Then, based on the sampled relations, a shared feature extraction backbone combined with two separate relation reasoning heads is employed to quantify the relationships of sample pairs for inter-sample relation reasoning and of time-piece pairs for intra-temporal relation reasoning, respectively. Finally, useful representations of time series are extracted from the backbone under the supervision of the relation reasoning heads. Experimental results on multiple real-world time series datasets for the time series classification task demonstrate the effectiveness of the proposed method. Code and data are publicly available at https://haoyfan.github.io/.
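As a concrete illustration of the pipeline described above, here is a minimal sketch of a SelfTime-style training step. This is not the authors' implementation: the jittering augmentation, the small 1D-CNN backbone, the MLP relation heads, and the four coarse temporal-distance classes are all assumptions made for the example.

```python
# Hypothetical sketch of a SelfTime-style training step (not the authors' code).
# Assumed choices: jittering as the positive view, a 1D-CNN backbone,
# MLP relation heads, and coarse temporal-distance bins.
import torch
import torch.nn as nn
import torch.nn.functional as F

N_DISTANCE_CLASSES = 4  # assumption: coarse bins for intra-temporal relations

class Backbone(nn.Module):
    """Shared feature extractor mapping a series (B, 1, T) to (B, dim)."""
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(32, dim, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten())

    def forward(self, x):
        return self.net(x)

class RelationHead(nn.Module):
    """Scores an embedding pair: 2 classes for inter-sample relations
    (same/different sample), N_DISTANCE_CLASSES for intra-temporal ones."""
    def __init__(self, dim, n_classes):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(),
                                 nn.Linear(dim, n_classes))

    def forward(self, za, zb):
        return self.net(torch.cat([za, zb], dim=-1))

def train_step(backbone, inter_head, intra_head, x, piece_len=32):
    B, _, T = x.shape
    # Inter-sample relations: (anchor, augmented anchor) -> 1,
    # (anchor, another sample from the batch) -> 0.
    z_anchor = backbone(x)
    z_pos = backbone(x + 0.03 * torch.randn_like(x))  # jittered positive view
    z_neg = backbone(x[torch.randperm(B)])            # negatives (sketch only)
    logits = torch.cat([inter_head(z_anchor, z_pos),
                        inter_head(z_anchor, z_neg)])
    labels = torch.cat([torch.ones(B), torch.zeros(B)]).long()
    loss_inter = F.cross_entropy(logits, labels)
    # Intra-temporal relations: classify the (binned) distance between the
    # start positions of two time pieces cut from the same anchor.
    s1 = torch.randint(0, T - piece_len, (B,))
    s2 = torch.randint(0, T - piece_len, (B,))
    p1 = torch.stack([x[i, :, s1[i]:s1[i] + piece_len] for i in range(B)])
    p2 = torch.stack([x[i, :, s2[i]:s2[i] + piece_len] for i in range(B)])
    gap_labels = ((s1 - s2).abs() * N_DISTANCE_CLASSES
                  // (T - piece_len)).clamp(max=N_DISTANCE_CLASSES - 1)
    loss_intra = F.cross_entropy(intra_head(backbone(p1), backbone(p2)),
                                 gap_labels)
    return loss_inter + loss_intra

backbone = Backbone()
inter_head = RelationHead(64, n_classes=2)
intra_head = RelationHead(64, n_classes=N_DISTANCE_CLASSES)
x = torch.randn(8, 1, 128)  # toy batch: 8 univariate series of length 128
loss = train_step(backbone, inter_head, intra_head, x)
loss.backward()
```

Per the abstract, the relation heads exist only to supervise pre-training; after training, the backbone alone is reused to extract representations for downstream classification.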
Related papers
- Capturing Temporal Components for Time Series Classification [5.70772577110828]
This work introduces a compositional representation learning approach trained on statistically coherent components extracted from sequential data.
Based on a multi-scale change space, an unsupervised approach is proposed to segment the sequential data into chunks with similar statistical properties.
A sequence-based encoder model is trained in a multi-task setting to learn compositional representations from these temporal components for time series classification.
arXiv Detail & Related papers (2024-06-20T16:15:21Z)
- TimeSiam: A Pre-Training Framework for Siamese Time-Series Modeling [67.02157180089573]
Time series pre-training has recently garnered wide attention for its potential to reduce labeling expenses and benefit various downstream tasks.
This paper proposes TimeSiam, a simple but effective self-supervised pre-training framework for time series based on Siamese networks.
arXiv Detail & Related papers (2024-02-04T13:10:51Z)
- Robust Detection of Lead-Lag Relationships in Lagged Multi-Factor Models [61.10851158749843]
Key insights can be obtained by discovering lead-lag relationships inherent in the data.
We develop a clustering-driven methodology for robust detection of lead-lag relationships in lagged multi-factor models.
arXiv Detail & Related papers (2023-05-11T10:30:35Z)
- Multi-Task Self-Supervised Time-Series Representation Learning [3.31490164885582]
Time-series representation learning can extract representations from data with temporal dynamics and sparse labels.
We propose a new time-series representation learning method by combining the advantages of self-supervised tasks.
We evaluate the proposed framework on three downstream tasks: time-series classification, forecasting, and anomaly detection.
arXiv Detail & Related papers (2023-03-02T07:44:06Z)
- Self-Attention Neural Bag-of-Features [103.70855797025689]
We build on the recently introduced 2D-Attention and reformulate the attention learning methodology.
We propose a joint feature-temporal attention mechanism that learns a joint 2D attention mask highlighting relevant information.
arXiv Detail & Related papers (2022-01-26T17:54:14Z)
- Cluster-and-Conquer: A Framework For Time-Series Forecasting [94.63501563413725]
We propose a three-stage framework for forecasting high-dimensional time-series data.
Our framework is highly general, allowing for any time-series forecasting and clustering method to be used in each step.
When instantiated with simple linear autoregressive models, we are able to achieve state-of-the-art results on several benchmark datasets.
arXiv Detail & Related papers (2021-10-26T20:41:19Z)
- One-shot Learning for Temporal Knowledge Graphs [49.41854171118697]
We propose a one-shot learning framework for link prediction in temporal knowledge graphs.
Our proposed method employs a self-attention mechanism to effectively encode temporal interactions between entities.
Our experiments show that the proposed algorithm outperforms state-of-the-art baselines on two well-studied benchmarks.
arXiv Detail & Related papers (2020-10-23T03:24:44Z)
- Pay Attention to Evolution: Time Series Forecasting with Deep Graph-Evolution Learning [33.79957892029931]
This work presents a novel neural network architecture for time-series forecasting.
We named our method Recurrent Graph Evolution Neural Network (ReGENN).
An extensive set of experiments was conducted comparing ReGENN with dozens of ensemble methods and classical statistical ones.
arXiv Detail & Related papers (2020-08-28T20:10:07Z)
- Connecting the Dots: Multivariate Time Series Forecasting with Graph Neural Networks [91.65637773358347]
We propose a general graph neural network framework designed specifically for multivariate time series data.
Our approach automatically extracts the uni-directed relations among variables through a graph learning module; a minimal sketch follows this entry.
Our proposed model outperforms the state-of-the-art baseline methods on 3 of 4 benchmark datasets.
arXiv Detail & Related papers (2020-05-24T04:02:18Z)
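The "graph learning module" in the last entry can be made concrete with a small sketch. The construction below (learnable per-variable embeddings scored antisymmetrically, then top-k sparsified) is one common way to obtain uni-directed relations; the embedding size, alpha, and k are illustrative assumptions, not necessarily the paper's exact formulation.

```python
# Hypothetical sketch of a graph-learning module for multivariate series:
# learnable per-variable embeddings induce a sparse, directed adjacency.
import torch
import torch.nn as nn

class GraphLearner(nn.Module):
    def __init__(self, n_nodes, dim=40, k=4, alpha=3.0):
        super().__init__()
        self.emb1 = nn.Embedding(n_nodes, dim)  # one embedding per variable
        self.emb2 = nn.Embedding(n_nodes, dim)
        self.lin1 = nn.Linear(dim, dim)
        self.lin2 = nn.Linear(dim, dim)
        self.k, self.alpha = k, alpha

    def forward(self, idx):
        m1 = torch.tanh(self.alpha * self.lin1(self.emb1(idx)))
        m2 = torch.tanh(self.alpha * self.lin2(self.emb2(idx)))
        # Antisymmetric score: a pair of nodes can keep at most one direction
        # after the ReLU, which makes the learned relations uni-directed.
        adj = torch.relu(torch.tanh(self.alpha * (m1 @ m2.t() - m2 @ m1.t())))
        # Keep only the k strongest outgoing edges per node (sparsification).
        mask = torch.zeros_like(adj)
        mask.scatter_(1, adj.topk(self.k, dim=1).indices, 1.0)
        return adj * mask

learner = GraphLearner(n_nodes=10)
A = learner(torch.arange(10))  # (10, 10) directed adjacency over 10 variables
```

Because the score matrix m1 @ m2.T - m2 @ m1.T is antisymmetric, exactly one of each edge pair (i, j) and (j, i) can survive the ReLU, and the top-k mask keeps the graph sparse enough to feed a downstream graph neural network.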