Frequency-Masked Embedding Inference: A Non-Contrastive Approach for Time Series Representation Learning
- URL: http://arxiv.org/abs/2412.20790v2
- Date: Mon, 06 Jan 2025 12:17:43 GMT
- Title: Frequency-Masked Embedding Inference: A Non-Contrastive Approach for Time Series Representation Learning
- Authors: En Fu, Yanyan Hu
- Abstract summary: This paper introduces Frequency-masked Embedding Inference (FEI), a novel non-contrastive method that completely eliminates the need for positive and negative samples.
FEI significantly outperforms existing contrastive-based methods in terms of generalization.
This study provides new insights into self-supervised representation learning for time series.
- Score: 0.38366697175402226
- Abstract: Contrastive learning underpins most current self-supervised time series representation methods, and the strategy for constructing positive and negative sample pairs significantly affects the final representation quality. However, because the semantics of time series are continuous, the discrete pair-based modeling of contrastive learning accommodates time series data poorly: hard negative samples are difficult to construct, and positive sample construction can introduce inappropriate biases. Although some recent works have developed more principled strategies for constructing positive and negative sample pairs, they remain constrained by the contrastive learning framework. To fundamentally overcome the limitations of contrastive learning, this paper introduces Frequency-masked Embedding Inference (FEI), a novel non-contrastive method that completely eliminates the need for positive and negative samples. FEI constructs two inference branches based on a prompting strategy: 1) using frequency masking as a prompt, it infers, in embedding space, the representation of the target series with the masked frequency bands removed; and 2) using the target series as a prompt, it infers the frequency-mask embedding. In this way, FEI enables continuous semantic relationship modeling for time series. Experiments on eight widely used time series datasets for classification and regression tasks, under both linear evaluation and end-to-end fine-tuning, show that FEI significantly outperforms existing contrastive methods in terms of generalization. This study provides new insights into self-supervised representation learning for time series. The code is available at https://github.com/USTBInnovationPark/Frequency-masked-Embedding-Inference.
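To make the two branches concrete, below is a minimal PyTorch sketch of the idea as the abstract describes it: a random frequency mask is applied through the rFFT, and two small predictor heads infer, respectively, the masked-series embedding from the original embedding plus a mask prompt, and the mask-prompt embedding from the target series. The module names, head designs, per-bin (rather than per-band) masking, and stop-gradient regression targets are illustrative assumptions borrowed from common non-contrastive setups, not the paper's exact architecture; see the linked repository for the reference implementation.

```python
import torch
import torch.nn as nn

def frequency_mask(x: torch.Tensor, mask_ratio: float = 0.2):
    """Randomly remove frequency bins of each series via the rFFT.

    x: (batch, length). Returns the masked series and the binary
    keep-mask over rFFT bins (1 = kept, 0 = removed).
    """
    spec = torch.fft.rfft(x, dim=-1)
    keep = (torch.rand(spec.shape, device=x.device) > mask_ratio).float()
    x_masked = torch.fft.irfft(spec * keep, n=x.shape[-1], dim=-1)
    return x_masked, keep

class FEISketch(nn.Module):
    """Two inference branches in embedding space; no positive/negative pairs."""

    def __init__(self, encoder: nn.Module, dim: int, n_bins: int):
        super().__init__()
        self.encoder = encoder                    # any series -> (batch, dim) backbone
        self.mask_embed = nn.Linear(n_bins, dim)  # frequency mask -> prompt embedding
        self.branch1 = nn.Sequential(             # mask prompt -> masked-series embedding
            nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.branch2 = nn.Sequential(             # series prompt -> mask embedding
            nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, x):
        x_masked, keep = frequency_mask(x)
        z = self.encoder(x)           # embedding of the original (target) series
        z_m = self.encoder(x_masked)  # embedding of the frequency-masked series
        p = self.mask_embed(keep)     # frequency mask as a prompt embedding
        pred_zm = self.branch1(torch.cat([z, p], dim=-1))  # branch 1
        pred_p = self.branch2(z)                           # branch 2
        # Regression losses with stop-gradient targets -- an assumption
        # borrowed from BYOL/SimSiam-style non-contrastive methods.
        return (nn.functional.mse_loss(pred_zm, z_m.detach())
                + nn.functional.mse_loss(pred_p, p.detach()))
```

For a series of length L, n_bins would be L // 2 + 1 (the number of rFFT bins), and encoder can be any backbone that maps a (batch, L) series to a (batch, dim) embedding.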
Related papers
- Enhancing Foundation Models for Time Series Forecasting via Wavelet-based Tokenization [74.3339999119713]
We develop a wavelet-based tokenizer that allows models to learn complex representations directly in the space of time-localized frequencies.
Our method first scales and decomposes the input time series, then thresholds and quantizes the wavelet coefficients, and finally pre-trains an autoregressive model to forecast coefficients for the forecast horizon (a rough sketch of this pipeline follows this entry).
arXiv Detail & Related papers (2024-12-06T18:22:59Z)
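As a rough illustration of the tokenization pipeline above (scale, decompose, threshold, quantize), here is a hedged NumPy/PyWavelets sketch; the wavelet choice, thresholding rule, and uniform quantizer are placeholder assumptions, not the paper's actual design:

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_tokenize(x: np.ndarray, wavelet: str = "db4", level: int = 3,
                     n_bins: int = 256, threshold: float = 1e-3) -> np.ndarray:
    """Scale a 1-D series, take its DWT, zero out small coefficients,
    and quantize the rest into discrete token ids for an AR model."""
    x = (x - x.mean()) / (x.std() + 1e-8)           # scale the input series
    coeffs = pywt.wavedec(x, wavelet, level=level)  # multi-level decomposition
    flat = np.concatenate(coeffs)
    flat[np.abs(flat) < threshold] = 0.0            # threshold tiny coefficients
    lo, hi = flat.min(), flat.max()                 # uniform quantization to ids
    ids = (flat - lo) / (hi - lo + 1e-8) * (n_bins - 1)
    return ids.astype(np.int64)
```

An autoregressive model would then be pre-trained to predict these token ids over the forecast horizon.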
- Dynamic Contrastive Learning for Time Series Representation [6.086030037869592]
We propose DynaCL, an unsupervised contrastive representation learning framework for time series.
We demonstrate that DynaCL embeds instances from time series into semantically meaningful clusters.
Our findings also reveal that high scores on unsupervised clustering metrics do not guarantee that the representations are useful in downstream tasks.
arXiv Detail & Related papers (2024-10-20T15:20:24Z)
- TimeSiam: A Pre-Training Framework for Siamese Time-Series Modeling [67.02157180089573]
Time series pre-training has recently garnered wide attention for its potential to reduce labeling expenses and benefit various downstream tasks.
This paper proposes TimeSiam, a simple but effective self-supervised pre-training framework for time series based on Siamese networks.
arXiv Detail & Related papers (2024-02-04T13:10:51Z)
- Noisy Correspondence Learning with Self-Reinforcing Errors Mitigation [63.180725016463974]
Cross-modal retrieval relies on well-matched large-scale datasets that are laborious to collect in practice.
We introduce a novel noisy correspondence learning framework, namely Self-Reinforcing Errors Mitigation (SREM).
arXiv Detail & Related papers (2023-12-27T09:03:43Z)
- TimesURL: Self-supervised Contrastive Learning for Universal Time Series Representation Learning [31.458689807334228]
We propose a novel self-supervised framework named TimesURL to tackle time series representation learning.
Specifically, we first introduce a frequency-temporal-based augmentation that keeps the temporal property unchanged (a rough sketch follows this entry).
We also construct double Universums as a special kind of hard negative to guide better contrastive learning.
arXiv Detail & Related papers (2023-12-25T12:23:26Z)
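One plausible reading of the frequency-temporal-based augmentation mentioned above is a perturbation applied in the frequency domain so that the temporal ordering of the series is left intact; the sketch below is an illustrative assumption, not TimesURL's actual augmentation:

```python
import torch

def freq_temporal_augment(x: torch.Tensor, noise_scale: float = 0.1) -> torch.Tensor:
    """Jitter the magnitude spectrum while keeping phases, so the
    temporal structure of x (batch, length) is preserved."""
    spec = torch.fft.rfft(x, dim=-1)
    mag, phase = spec.abs(), spec.angle()
    mag = mag * (1.0 + noise_scale * torch.randn_like(mag))  # perturb magnitudes only
    return torch.fft.irfft(torch.polar(mag, phase), n=x.shape[-1], dim=-1)
```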
- CLeaRForecast: Contrastive Learning of High-Purity Representations for Time Series Forecasting [2.5816901096123863]
Time series forecasting (TSF) holds significant importance in modern society, spanning numerous domains.
Previous representation learning-based TSF algorithms typically embrace a contrastive learning paradigm featuring segregated trend-periodicity representations.
We propose CLeaRForecast, a novel contrastive learning framework to learn high-purity time series representations with proposed sample, feature, and architecture purifying methods.
arXiv Detail & Related papers (2023-12-10T04:37:43Z)
- REBAR: Retrieval-Based Reconstruction for Time-series Contrastive Learning [64.08293076551601]
We propose a novel method of using a learned measure for identifying positive pairs.
Our Retrieval-Based Reconstruction (REBAR) measure quantifies the similarity between two sequences.
We show that the REBAR error is a predictor of mutual class membership.
arXiv Detail & Related papers (2023-11-01T13:44:45Z)
- CARLA: Self-supervised Contrastive Representation Learning for Time Series Anomaly Detection [53.83593870825628]
One main challenge in time series anomaly detection (TSAD) is the lack of labelled data in many real-life scenarios.
Most of the existing anomaly detection methods focus on learning the normal behaviour of unlabelled time series in an unsupervised manner.
We introduce a novel end-to-end self-supervised ContrAstive Representation Learning approach for time series anomaly detection.
arXiv Detail & Related papers (2023-08-18T04:45:56Z)
- Time Series Contrastive Learning with Information-Aware Augmentations [57.45139904366001]
A key component of contrastive learning is to select appropriate augmentations imposing some priors to construct feasible positive samples.
How to find the desired augmentations of time series data that are meaningful for given contrastive learning tasks and datasets remains an open question.
We propose a new contrastive learning approach with information-aware augmentations, InfoTS, that adaptively selects optimal augmentations for time series representation learning.
arXiv Detail & Related papers (2023-03-21T15:02:50Z)
- Multi-Task Self-Supervised Time-Series Representation Learning [3.31490164885582]
Time-series representation learning can extract representations from data with temporal dynamics and sparse labels.
We propose a new time-series representation learning method by combining the advantages of self-supervised tasks.
We evaluate the proposed framework on three downstream tasks: time-series classification, forecasting, and anomaly detection.
arXiv Detail & Related papers (2023-03-02T07:44:06Z)
- Unsupervised Time-Series Representation Learning with Iterative Bilinear Temporal-Spectral Fusion [6.154427471704388]
We propose a unified framework, namely Bilinear Temporal-Spectral Fusion (BTSF).
Specifically, we utilize instance-level augmentation with a simple dropout on the entire time series to maximally capture long-term dependencies (sketched after this entry).
We devise a novel iterative bilinear temporal-spectral fusion to explicitly encode the affinities of abundant time-frequency pairs.
arXiv Detail & Related papers (2022-02-08T14:04:08Z)
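The instance-level dropout augmentation mentioned in the BTSF entry above can be sketched in a few lines; treating it as standard element-wise dropout over the raw series is an assumption here:

```python
import torch
import torch.nn.functional as F

def instance_dropout_augment(x: torch.Tensor, p: float = 0.1) -> torch.Tensor:
    """Instance-level augmentation: element-wise dropout over the whole
    series (batch, length), giving a stochastically thinned, rescaled view."""
    return F.dropout(x, p=p, training=True)
```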