EMIT - Event-Based Masked Auto Encoding for Irregular Time Series
- URL: http://arxiv.org/abs/2409.16554v2
- Date: Tue, 15 Oct 2024 03:10:31 GMT
- Title: EMIT - Event-Based Masked Auto Encoding for Irregular Time Series
- Authors: Hrishikesh Patel, Ruihong Qiu, Adam Irwin, Shazia Sadiq, Sen Wang
- Abstract summary: Irregular time series, where data points are recorded at uneven intervals, are prevalent in healthcare settings.
This variability, which reflects critical fluctuations in patient health, is essential for informed clinical decision-making.
Existing self-supervised learning research on irregular time series often relies on generic pretext tasks like forecasting.
This paper proposes a novel pretraining framework, EMIT, an event-based masking approach for irregular time series.
- Score: 9.903108445512576
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Irregular time series, where data points are recorded at uneven intervals, are prevalent in healthcare settings, such as emergency wards where vital signs and laboratory results are captured at varying times. This variability, which reflects critical fluctuations in patient health, is essential for informed clinical decision-making. Existing self-supervised learning research on irregular time series often relies on generic pretext tasks like forecasting, which may not fully utilise the signal provided by irregular time series. There is a significant need for specialised pretext tasks designed for the characteristics of irregular time series to enhance model performance and robustness, especially in scenarios with limited data availability. This paper proposes a novel pretraining framework, EMIT, an event-based masking approach for irregular time series. EMIT focuses on masking-based reconstruction in the latent space, selecting masking points based on the rate of change in the data. This method preserves the natural variability and timing of measurements while enhancing the model's ability to process irregular intervals without losing essential information. Extensive experiments on the MIMIC-III and PhysioNet Challenge datasets demonstrate the superior performance of our event-based masking strategy. The code has been released at https://github.com/hrishi-ds/EMIT.
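The abstract describes the masking rule only at a high level; the released code at the GitHub link above is authoritative. As a rough illustration only, a rate-of-change mask over irregularly spaced observations could look like this minimal sketch (the function name, top-k selection, and mask ratio are assumptions, not EMIT's implementation):

```python
import numpy as np

def event_mask(times, values, mask_ratio=0.4):
    """Pick mask positions where the observed rate of change is largest.

    Hypothetical sketch: points whose value changes fastest relative to the
    (irregular) time gap are treated as events and prioritised for masking,
    so the pretraining task forces the model to reconstruct them.
    """
    dt = np.diff(times)                            # uneven time gaps
    rate = np.abs(np.diff(values)) / np.maximum(dt, 1e-8)
    rate = np.concatenate([[0.0], rate])           # first point has no predecessor
    k = max(1, int(mask_ratio * len(values)))
    mask = np.zeros(len(values), dtype=bool)
    mask[np.argsort(rate)[-k:]] = True             # mask the k fastest changes
    return mask

# A vital sign sampled at uneven intervals, with two abrupt shifts
times = np.array([0.0, 0.5, 0.7, 3.0, 3.1, 7.0])
values = np.array([80.0, 81.0, 95.0, 96.0, 82.0, 83.0])
print(event_mask(times, values))
# [False False  True False  True False]  -> the jump at t=0.7 and drop at t=3.1
```

Normalising each difference by its time gap is what distinguishes this from masking on raw differences: a small change after a long gap is treated as less of an event than the same change over seconds.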
Related papers
- Scalable Numerical Embeddings for Multivariate Time Series: Enhancing Healthcare Data Representation Learning [6.635084843592727]
We propose SCAlable Numerical Embedding (SCANE), a novel framework that treats each feature value as an independent token.
SCANE regularizes the traits of distinct feature embeddings and enhances representational learning through a scalable embedding mechanism.
We develop the Scalable nUMerical eMbeddIng Transformer (SUMMIT), which is engineered to deliver precise predictive outputs for MTS characterized by prevalent missing entries.
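The summary suggests each numeric measurement becomes its own token. A minimal sketch of that idea, assuming a learned per-feature embedding modulated by the observed value (the class and layer choices here are illustrative, not SCANE's architecture):

```python
import torch
import torch.nn as nn

class NumericTokenizer(nn.Module):
    """Sketch of treating each (feature, value) pair as an independent token.

    Hypothetical reading of the SCANE idea: every observed measurement gets a
    learned feature embedding plus a projection of its numeric value, so
    tokens of the same variable share traits while still encoding magnitude.
    """
    def __init__(self, num_features: int, dim: int):
        super().__init__()
        self.feature_emb = nn.Embedding(num_features, dim)
        self.value_proj = nn.Linear(1, dim)

    def forward(self, feature_ids, values):
        # feature_ids: (batch, n_obs) ints; values: (batch, n_obs) floats
        return self.feature_emb(feature_ids) + self.value_proj(values.unsqueeze(-1))

tok = NumericTokenizer(num_features=32, dim=64)
ids = torch.randint(0, 32, (2, 10))
vals = torch.randn(2, 10)
print(tok(ids, vals).shape)  # torch.Size([2, 10, 64])
```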
arXiv Detail & Related papers (2024-05-26T13:06:45Z) - Unified Training of Universal Time Series Forecasting Transformers [104.56318980466742]
We present the Masked Encoder-based Universal Time Series Forecasting Transformer (Moirai).
Moirai is trained on our newly introduced Large-scale Open Time Series Archive (LOTSA) featuring over 27B observations across nine domains.
Moirai achieves competitive or superior performance as a zero-shot forecaster when compared to full-shot models.
arXiv Detail & Related papers (2024-02-04T20:00:45Z) - TimeSiam: A Pre-Training Framework for Siamese Time-Series Modeling [67.02157180089573]
Time series pre-training has recently garnered wide attention for its potential to reduce labeling expenses and benefit various downstream tasks.
This paper proposes TimeSiam as a simple but effective self-supervised pre-training framework for time series based on Siamese networks.
arXiv Detail & Related papers (2024-02-04T13:10:51Z) - XTSFormer: Cross-Temporal-Scale Transformer for Irregular Time Event Prediction [9.240950990926796]
Event prediction aims to forecast the time and type of a future event based on a historical event sequence.
Despite its significance, several challenges exist, including the irregularity of time intervals between consecutive events, the existence of cycles, periodicity, and multi-scale event interactions.
arXiv Detail & Related papers (2024-02-03T20:33:39Z) - Graph Spatiotemporal Process for Multivariate Time Series Anomaly Detection with Missing Values [67.76168547245237]
We introduce a novel framework called GST-Pro, which utilizes a graph spatiotemporal process and an anomaly scorer to detect anomalies.
Our experimental results show that the GST-Pro method can effectively detect anomalies in time series data and outperforms state-of-the-art methods.
arXiv Detail & Related papers (2024-01-11T10:10:16Z) - Time Series as Images: Vision Transformer for Irregularly Sampled Time Series [32.99466250557855]
This paper introduces a novel perspective by converting irregularly sampled time series into line graph images.
We then utilize powerful pre-trained vision transformers for time series classification in the same way as image classification.
Remarkably, despite its simplicity, our approach outperforms state-of-the-art specialized algorithms on several popular healthcare and human activity datasets.
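Since the entry describes a concrete recipe (render the irregular series as a line-graph image, then reuse a pre-trained vision transformer), a minimal rendering step might look like the sketch below; the resolution and plotting choices are assumptions, not the paper's settings:

```python
import io
import matplotlib
matplotlib.use("Agg")  # render off-screen
import matplotlib.pyplot as plt

def series_to_image(times, values, size=224):
    """Render an irregularly sampled series as a line-graph image.

    Sketch of the "time series as images" idea: the uneven x-spacing is
    preserved visually, and the resulting array can be fed to an ordinary
    pre-trained vision transformer like any other image.
    """
    fig, ax = plt.subplots(figsize=(size / 100, size / 100), dpi=100)
    ax.plot(times, values, marker="o")
    ax.axis("off")
    buf = io.BytesIO()
    fig.savefig(buf, format="png")
    plt.close(fig)
    buf.seek(0)
    return plt.imread(buf)  # (H, W, 4) float array

img = series_to_image([0.0, 0.5, 3.0, 3.1], [80, 95, 96, 82])
print(img.shape)  # (224, 224, 4)
```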
arXiv Detail & Related papers (2023-03-01T22:42:44Z) - SimMTM: A Simple Pre-Training Framework for Masked Time-Series Modeling [82.69579113377192]
SimMTM is a simple pre-training framework for Masked Time-series Modeling.
SimMTM recovers masked time points by the weighted aggregation of multiple neighbors outside the manifold.
SimMTM achieves state-of-the-art fine-tuning performance compared to the most advanced time series pre-training methods.
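As a hedged illustration of "weighted aggregation of multiple neighbors", the reconstruction could resemble a similarity-weighted average over neighboring series representations; the cosine similarity, softmax, and temperature below are assumptions, not SimMTM's exact operators:

```python
import torch
import torch.nn.functional as F

def neighbor_aggregate(target_repr, neighbor_reprs, neighbor_values, temperature=0.1):
    """Sketch of recovery by similarity-weighted neighbor aggregation.

    A masked point is recovered as a softmax-weighted average over values
    from other (masked) series, rather than predicted in isolation.
    """
    sims = F.cosine_similarity(target_repr.unsqueeze(0), neighbor_reprs, dim=-1)
    weights = torch.softmax(sims / temperature, dim=0)       # (n_neighbors,)
    return (weights.unsqueeze(-1) * neighbor_values).sum(0)  # weighted recovery

target = torch.randn(16)        # representation of the masked series
neighbors = torch.randn(4, 16)  # representations of 4 neighboring views
values = torch.randn(4, 8)      # their candidate point values
print(neighbor_aggregate(target, neighbors, values).shape)  # torch.Size([8])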
arXiv Detail & Related papers (2023-02-02T04:12:29Z) - Ti-MAE: Self-Supervised Masked Time Series Autoencoders [16.98069693152999]
We propose a novel framework named Ti-MAE, in which the input time series are assumed to follow an integrated distribution.
Ti-MAE randomly masks out embedded time series data and learns an autoencoder to reconstruct them at the point-level.
Experiments on several public real-world datasets demonstrate that our framework of masked autoencoding could learn strong representations directly from the raw data.
arXiv Detail & Related papers (2023-01-21T03:20:23Z) - Self-supervised Transformer for Multivariate Clinical Time-Series with Missing Values [7.9405251142099464]
We present the STraTS (Self-supervised Transformer for Time-Series) model.
It treats time-series as a set of observation triplets instead of using the traditional dense matrix representation.
It shows better prediction performance than state-of-the-art methods for mortality prediction, especially when labeled data is limited.
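The triplet representation is concrete enough to sketch: each observed entry of a sparse matrix becomes a (time, variable, value) token and missing entries are simply skipped rather than imputed. A minimal conversion, with all names illustrative:

```python
import numpy as np

def to_triplets(matrix, times, variables):
    """Sketch of a STraTS-style input: one (time, variable, value) triplet
    per observed entry; NaNs (missing values) produce no token at all."""
    triplets = []
    for i, t in enumerate(times):
        for j, var in enumerate(variables):
            if not np.isnan(matrix[i, j]):
                triplets.append((t, var, matrix[i, j]))
    return triplets

dense = np.array([[7.4, np.nan], [np.nan, 98.6]])
print(to_triplets(dense, times=[0.0, 1.5], variables=["pH", "Temp"]))
# [(0.0, 'pH', 7.4), (1.5, 'Temp', 98.6)]
```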
arXiv Detail & Related papers (2021-07-29T19:39:39Z) - Explaining Time Series Predictions with Dynamic Masks [91.3755431537592]
We propose dynamic masks (Dynamask) to explain predictions of a machine learning model.
With synthetic and real-world data, we demonstrate that the dynamic underpinning of Dynamask, together with its parsimony, offers a neat improvement in the identification of feature importance over time.
The modularity of Dynamask makes it ideal as a plug-in to increase the transparency of a wide range of machine learning models in areas such as medicine and finance.
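A rough sketch of the dynamic-mask idea: blend each input with a local moving-average baseline according to a mask in [0, 1], so low-mask entries are effectively hidden from the model. Dynamask learns the mask; here it is random, and the baseline choice is an assumption:

```python
import torch

def perturb(x, mask, window=3):
    """Sketch of a Dynamask-style perturbation: where the mask is low,
    replace an input with a local moving average; where it is high, keep it.

    x, mask: (time, features) tensors with mask values in [0, 1].
    """
    # simple centered moving average as the "uninformative" baseline
    kernel = torch.ones(1, 1, window) / window
    xt = x.t().unsqueeze(1)                              # (features, 1, time)
    baseline = torch.conv1d(xt, kernel, padding=window // 2).squeeze(1).t()
    return mask * x + (1 - mask) * baseline

x = torch.randn(20, 4)         # 20 time steps, 4 features
mask = torch.rand(20, 4)       # learned in Dynamask; random here
print(perturb(x, mask).shape)  # torch.Size([20, 4])
```

In the paper's setup, the mask itself is optimised so that the model's prediction is preserved while the mask stays sparse, which is what surfaces the important (time, feature) entries.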
arXiv Detail & Related papers (2021-06-09T18:01:09Z) - Model-Attentive Ensemble Learning for Sequence Modeling [86.4785354333566]
We present Model-Attentive Ensemble learning for Sequence modeling (MAES).
MAES is a mixture of time-series experts which leverages an attention-based gating mechanism to specialize the experts on different sequence dynamics and adaptively weight their predictions.
We demonstrate that MAES significantly outperforms popular sequence models on datasets subject to temporal shift.
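A loose sketch of a gated mixture of sequence experts in the spirit of MAES, with a simple softmax gate standing in for the paper's attention-based mechanism; all architectures here are placeholders:

```python
import torch
import torch.nn as nn

class GatedEnsemble(nn.Module):
    """Sketch of a mixture of sequence experts with per-input gating.

    Several experts score a sequence, and a gating network produces weights
    over their predictions, so different experts can specialise on
    different sequence dynamics.
    """
    def __init__(self, input_dim, n_experts=3, hidden=32):
        super().__init__()
        self.experts = nn.ModuleList(
            [nn.GRU(input_dim, hidden, batch_first=True) for _ in range(n_experts)]
        )
        self.heads = nn.ModuleList([nn.Linear(hidden, 1) for _ in range(n_experts)])
        self.gate = nn.Linear(input_dim, n_experts)

    def forward(self, x):                        # x: (batch, time, input_dim)
        preds = []
        for gru, head in zip(self.experts, self.heads):
            _, h = gru(x)                        # h: (1, batch, hidden)
            preds.append(head(h[-1]))            # (batch, 1)
        preds = torch.cat(preds, dim=-1)         # (batch, n_experts)
        weights = torch.softmax(self.gate(x.mean(dim=1)), dim=-1)
        return (weights * preds).sum(-1, keepdim=True)

model = GatedEnsemble(input_dim=8)
print(model(torch.randn(4, 50, 8)).shape)  # torch.Size([4, 1])
```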
arXiv Detail & Related papers (2021-02-23T05:23:35Z)