Unsupervised Time-Series Representation Learning with Iterative Bilinear Temporal-Spectral Fusion
- URL: http://arxiv.org/abs/2202.04770v1
- Date: Tue, 8 Feb 2022 14:04:08 GMT
- Title: Unsupervised Time-Series Representation Learning with Iterative Bilinear Temporal-Spectral Fusion
- Authors: Ling Yang, Shenda Hong, Luxia Zhang
- Abstract summary: We propose a unified framework, namely Bilinear Temporal-Spectral Fusion (BTSF).
Specifically, we utilize instance-level augmentation with a simple dropout on the entire time series to maximally capture long-term dependencies.
We devise a novel iterative bilinear temporal-spectral fusion to explicitly encode the affinities of abundant time-frequency pairs.
- Score: 6.154427471704388
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unsupervised/self-supervised time series representation learning is a
challenging problem because of the complex dynamics and sparse annotations of
time series. Existing works mainly adopt the framework of contrastive learning
with time-based augmentation techniques to sample positives and negatives for
contrastive training. Nevertheless, they mostly use segment-level augmentation
derived from time slicing, which can introduce sampling bias and incorrect
optimization with false negatives due to the loss of global context. Moreover,
none of them incorporates spectral information into the feature
representation. In this paper, we propose a unified framework, namely Bilinear
Temporal-Spectral Fusion (BTSF). Specifically, we first utilize instance-level
augmentation with a simple dropout on the entire time series to maximally
capture long-term dependencies. We then devise a novel iterative bilinear
temporal-spectral fusion that explicitly encodes the affinities of abundant
time-frequency pairs and iteratively refines representations in a
fusion-and-squeeze manner with Spectrum-to-Time (S2T) and Time-to-Spectrum
(T2S) aggregation modules. We conduct downstream evaluations on three major
time series tasks: classification, forecasting, and anomaly detection.
Experimental results show that BTSF consistently and significantly outperforms
state-of-the-art methods.
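The abstract describes the two mechanisms only at a high level: instance-level dropout augmentation over the whole series, and a bilinear fusion of temporal and spectral features iteratively refined by the S2T and T2S aggregation modules. Below is a minimal PyTorch sketch of how such a block could be wired; the encoders, layer sizes, dropout rate, and iteration count are our own assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class BTSFSketch(nn.Module):
    """Illustrative sketch of bilinear temporal-spectral fusion.

    Reconstructed from the abstract's description alone; all
    architectural details here are assumptions.
    """

    def __init__(self, in_channels=1, dim=64, iterations=3):
        super().__init__()
        self.temporal_enc = nn.Conv1d(in_channels, dim, kernel_size=3, padding=1)
        self.spectral_enc = nn.Conv1d(in_channels, dim, kernel_size=3, padding=1)
        self.bilinear = nn.Bilinear(dim, dim, dim)  # time-frequency affinities
        self.s2t = nn.Linear(dim, dim)              # Spectrum-to-Time aggregation (assumed form)
        self.t2s = nn.Linear(dim, dim)              # Time-to-Spectrum aggregation (assumed form)
        self.dropout = nn.Dropout(p=0.1)            # instance-level augmentation
        self.iterations = iterations

    def augment(self, x):
        # Two stochastic views of the *entire* series via dropout only,
        # preserving global context (no time slicing).
        return self.dropout(x), self.dropout(x)

    def forward(self, x):                           # x: (batch, channels, length)
        t = self.temporal_enc(x).mean(dim=-1)       # temporal feature: (batch, dim)
        spec = torch.fft.rfft(x, dim=-1).abs()      # magnitude spectrum
        s = self.spectral_enc(spec).mean(dim=-1)    # spectral feature: (batch, dim)
        for _ in range(self.iterations):            # fuse-and-squeeze refinement
            f = self.bilinear(t, s)                 # fused time-frequency embedding
            t = t + self.s2t(f)                     # refine the temporal view
            s = s + self.t2s(f)                     # refine the spectral view
        return torch.cat([t, s], dim=-1)
```

With two dropout-augmented views per series, a standard InfoNCE contrastive loss over the fused embeddings would complete the training objective the abstract describes.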
Related papers
- Moirai-MoE: Empowering Time Series Foundation Models with Sparse Mixture of Experts [103.725112190618]
This paper introduces Moirai-MoE, which uses a single input/output projection layer while delegating the modeling of diverse time series patterns to a sparse mixture of experts (a generic sketch of such a layer follows this entry).
Extensive experiments on 39 datasets demonstrate the superiority of Moirai-MoE over existing foundation models in both in-distribution and zero-shot scenarios.
arXiv Detail & Related papers (2024-10-14T13:01:11Z)
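The entry above names the key architectural idea but not its form. A generic top-1-gated sparse mixture-of-experts layer in PyTorch might look as follows; the expert count, gating rule, and expert shape are illustrative assumptions, not Moirai-MoE's actual architecture.

```python
import torch
import torch.nn as nn


class SparseMoE(nn.Module):
    """Generic top-1 sparse mixture-of-experts layer (illustrative only)."""

    def __init__(self, dim=64, num_experts=8):
        super().__init__()
        self.gate = nn.Linear(dim, num_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for _ in range(num_experts)
        )

    def forward(self, x):                        # x: (tokens, dim)
        scores = self.gate(x).softmax(dim=-1)
        top_w, top_idx = scores.max(dim=-1)      # route each token to one expert
        out = torch.zeros_like(x)
        for i, expert in enumerate(self.experts):
            mask = top_idx == i
            if mask.any():                       # run only the tokens routed here
                out[mask] = top_w[mask, None] * expert(x[mask])
        return out
```

Each token activates a single expert, so model capacity grows with the number of experts while per-token compute stays roughly constant.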
- TimeSiam: A Pre-Training Framework for Siamese Time-Series Modeling [67.02157180089573]
Time series pre-training has recently garnered wide attention for its potential to reduce labeling expenses and benefit various downstream tasks.
This paper proposes TimeSiam, a simple but effective self-supervised pre-training framework for time series based on Siamese networks.
arXiv Detail & Related papers (2024-02-04T13:10:51Z)
- Distillation Enhanced Time Series Forecasting Network with Momentum Contrastive Learning [7.4106801792345705]
We propose DE-TSMCL, an innovative distillation-enhanced framework for long-sequence time series forecasting.
Specifically, we design a learnable data augmentation mechanism which adaptively learns whether to mask a timestamp (a toy sketch follows this entry).
Then, we propose a contrastive learning task with momentum update to explore inter-sample and intra-temporal correlations of time series.
By combining losses from multiple tasks, we learn effective representations for the downstream forecasting task.
arXiv Detail & Related papers (2024-01-31T12:52:10Z)
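Neither the masking mechanism nor the momentum scheme is spelled out in the summary above, so the toy sketch below fills both in under our own assumptions: a relaxed-Bernoulli mask whose per-timestamp keep probabilities are trained end-to-end, and a MoCo-style EMA update for the momentum encoder.

```python
import torch
import torch.nn as nn


class LearnableMask(nn.Module):
    """Toy learnable augmentation: per-timestamp keep probabilities trained
    end-to-end via a relaxed Bernoulli sample (our assumption; DE-TSMCL's
    actual mechanism may differ)."""

    def __init__(self, length, temperature=0.1):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(length))
        self.temperature = temperature

    def forward(self, x):                             # x: (batch, channels, length)
        probs = torch.sigmoid(self.logits)            # P(keep) per timestamp
        noise = torch.rand_like(x)
        soft_mask = torch.sigmoid((probs - noise) / self.temperature)
        return x * soft_mask                          # differentiable w.r.t. logits


@torch.no_grad()
def momentum_update(online, target, m=0.99):
    """EMA update of the momentum (target) encoder, MoCo-style."""
    for p_o, p_t in zip(online.parameters(), target.parameters()):
        p_t.mul_(m).add_((1.0 - m) * p_o)
```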
- Explaining Time Series via Contrastive and Locally Sparse Perturbations [45.055327583283315]
ContraLSP is a sparse model that introduces counterfactual samples to build uninformative perturbations while keeping the perturbed samples in-distribution via contrastive learning.
Empirical studies on both synthetic and real-world datasets show that ContraLSP outperforms state-of-the-art models.
arXiv Detail & Related papers (2024-01-16T18:27:37Z)
- Graph Spatiotemporal Process for Multivariate Time Series Anomaly Detection with Missing Values [67.76168547245237]
We introduce a novel framework called GST-Pro, which utilizes a graph spatiotemporal process and an anomaly scorer to detect anomalies (a toy scorer is sketched after this entry).
Our experimental results show that the GST-Pro method can effectively detect anomalies in time series data and outperforms state-of-the-art methods.
arXiv Detail & Related papers (2024-01-11T10:10:16Z)
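The summary does not define the anomaly scorer, so here is one common, generic formulation, offered purely as an assumption: score each timestamp by the deviation between a model's forecast and the observed value, normalized per feature.

```python
import numpy as np


def residual_anomaly_score(observed, predicted, eps=1e-8):
    """Generic forecasting-based anomaly score (not GST-Pro's actual scorer).

    observed, predicted: arrays of shape (time, features).
    Returns one score per timestamp; threshold to flag anomalies.
    """
    residual = np.abs(observed - predicted)
    # Normalize each feature's residuals so no single scale dominates.
    z = (residual - residual.mean(axis=0)) / (residual.std(axis=0) + eps)
    return z.max(axis=1)
```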
- CARLA: Self-supervised Contrastive Representation Learning for Time Series Anomaly Detection [53.83593870825628]
One main challenge in time series anomaly detection (TSAD) is the lack of labelled data in many real-life scenarios.
Most of the existing anomaly detection methods focus on learning the normal behaviour of unlabelled time series in an unsupervised manner.
We introduce a novel end-to-end self-supervised ContrAstive Representation Learning approach for time series anomaly detection.
arXiv Detail & Related papers (2023-08-18T04:45:56Z)
- Self-Supervised Time Series Representation Learning via Cross Reconstruction Transformer [11.908755624411707]
Existing approaches mainly leverage the contrastive learning framework, which automatically learns to distinguish similar and dissimilar data pairs.
We propose Cross Reconstruction Transformer (CRT) to solve the aforementioned problems in a unified way.
CRT achieves time series representation learning through a cross-domain dropping-reconstruction task (a toy illustration of the dropping step follows this entry).
arXiv Detail & Related papers (2022-05-20T02:15:14Z)
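The phrase "cross-domain dropping-reconstruction" suggests corrupting a series in both the time and frequency domains, then training a network to reconstruct the original. The toy function below implements only the dropping step, under our own assumptions about what is dropped; CRT's actual procedure may differ substantially.

```python
import torch


def cross_domain_drop(x, drop_frac=0.25):
    """Toy cross-domain corruption: zero a random time segment, then zero
    random frequency bins (illustrative assumption, not CRT's exact scheme).

    x: (batch, channels, length) real-valued tensor.
    """
    batch, _, length = x.shape
    seg = int(length * drop_frac)
    x_t = x.clone()
    starts = torch.randint(0, length - seg + 1, (batch,))
    for i in range(batch):                  # drop one contiguous time segment
        s = int(starts[i])
        x_t[i, :, s:s + seg] = 0.0
    spec = torch.fft.rfft(x_t, dim=-1)      # move to the frequency domain
    drop = torch.rand(spec.shape) < drop_frac
    spec[drop] = 0                          # drop random frequency bins
    return torch.fft.irfft(spec, n=length, dim=-1)
```

A reconstruction loss (e.g., MSE against the uncorrupted input) would then drive representation learning.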
- Imputing Missing Observations with Time Sliced Synthetic Minority Oversampling Technique [0.3973560285628012]
We present a simple yet novel time series imputation technique with the goal of making an irregular time series uniform across every sample in a data set.
We fix a grid defined by the midpoints of non-overlapping bins (dubbed "slices") of observation times and ensure that each sample has values for all of the features at each grid time (sketched after this entry).
This makes it possible both to impute fully missing observations, enabling uniform time series classification across the entire data set, and, in special cases, to impute individually missing features.
arXiv Detail & Related papers (2022-01-14T19:23:24Z)
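The slicing step described above has a direct rendering in code. The sketch below snaps one irregular series onto a shared grid of bin midpoints; the aggregation rule (mean) and the NaN placeholder for empty slices are our own simplifications, and the paper's SMOTE-based imputation of missing slices is not shown.

```python
import numpy as np


def slice_to_grid(times, values, num_bins, t_min, t_max):
    """Snap an irregular series onto a uniform grid of bin midpoints
    (toy rendition of the entry's time-slicing idea).

    times, values: 1-D arrays of observation times and observed values.
    """
    edges = np.linspace(t_min, t_max, num_bins + 1)
    mids = (edges[:-1] + edges[1:]) / 2           # grid shared by all samples
    bins = np.clip(np.digitize(times, edges) - 1, 0, num_bins - 1)
    grid_vals = np.full(num_bins, np.nan)         # NaN marks a missing slice
    for b in range(num_bins):
        hit = bins == b
        if hit.any():
            grid_vals[b] = values[hit].mean()     # aggregate within the slice
    return mids, grid_vals                        # impute the NaNs downstream
```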
- Anomaly Transformer: Time Series Anomaly Detection with Association Discrepancy [68.86835407617778]
Anomaly Transformer achieves state-of-the-art performance on six unsupervised time series anomaly detection benchmarks.
arXiv Detail & Related papers (2021-10-06T10:33:55Z)
- Interpretable Time-series Representation Learning With Multi-Level Disentanglement [56.38489708031278]
Disentangle Time Series (DTS) is a novel disentanglement enhancement framework for sequential data.
DTS generates hierarchical semantic concepts as the interpretable and disentangled representation of time-series.
DTS achieves superior performance in downstream applications, with high interpretability of semantic concepts.
arXiv Detail & Related papers (2021-05-17T22:02:24Z)