XTSFormer: Cross-Temporal-Scale Transformer for Irregular Time Event
Prediction
- URL: http://arxiv.org/abs/2402.02258v1
- Date: Sat, 3 Feb 2024 20:33:39 GMT
- Title: XTSFormer: Cross-Temporal-Scale Transformer for Irregular Time Event
Prediction
- Authors: Tingsong Xiao, Zelin Xu, Wenchong He, Jim Su, Yupu Zhang, Raymond
Opoku, Ronald Ison, Jason Petho, Jiang Bian, Patrick Tighe, Parisa Rashidi,
Zhe Jiang
- Abstract summary: Event prediction aims to forecast the time and type of a future event based on a historical event sequence.
Despite its significance, several challenges exist, including the irregularity of time intervals between consecutive events, the existence of cycles, periodicity, and multi-scale event interactions.
- Score: 9.240950990926796
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Event prediction aims to forecast the time and type of a future event based
on a historical event sequence. Despite its significance, several challenges
exist, including the irregularity of time intervals between consecutive events,
the existence of cycles, periodicity, and multi-scale event interactions, as
well as the high computational costs for long event sequences. Existing neural
temporal point process (TPP) methods do not capture the multi-scale nature
of event interactions, which is common in many real-world applications such as
clinical event data. To address these issues, we propose the
cross-temporal-scale transformer (XTSFormer), designed specifically for
irregularly timed event data. Our model comprises two vital components: a novel
Feature-based Cycle-aware Time Positional Encoding (FCPE) that adeptly captures
the cyclical nature of time, and a hierarchical multi-scale temporal attention
mechanism whose scales are determined by a bottom-up clustering algorithm.
Extensive experiments on several real-world datasets show that our XTSFormer
outperforms a range of baseline methods in prediction performance.
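The abstract does not spell out how FCPE is constructed, but a minimal sketch of a cycle-aware time positional encoding, assuming fixed sinusoidal features at a few preset cycle lengths, might look like the following. The class name CycleAwareTimeEncoding, the example periods, and the linear mixing layer are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of a cycle-aware time positional encoding (illustrative
# assumption, not the paper's FCPE): sinusoidal features at preset cycle
# lengths, mixed into the model dimension by a linear layer.
import torch
import torch.nn as nn


class CycleAwareTimeEncoding(nn.Module):
    """Map irregular timestamps to d_model-dim embeddings via cycle features."""

    def __init__(self, d_model: int, periods=(1.0, 24.0, 168.0)):
        super().__init__()
        # One sin/cos pair per assumed cycle (e.g. hourly, daily, weekly).
        self.register_buffer("periods", torch.tensor(periods))
        # A linear layer mixes the raw cycle features into the model dimension.
        self.proj = nn.Linear(2 * len(periods), d_model)

    def forward(self, t: torch.Tensor) -> torch.Tensor:
        # t: (batch, seq_len) absolute event times, irregularly spaced.
        phase = 2 * torch.pi * t.unsqueeze(-1) / self.periods
        feats = torch.cat([torch.sin(phase), torch.cos(phase)], dim=-1)
        return self.proj(feats)  # (batch, seq_len, d_model)


# Usage: encode five irregularly spaced timestamps for one sequence.
enc = CycleAwareTimeEncoding(d_model=32)
times = torch.tensor([[0.5, 1.7, 2.0, 10.3, 25.8]])
print(enc(times).shape)  # torch.Size([1, 5, 32])
```

A learnable, feature-dependent choice of periods would be one plausible way to realize the "feature-based" aspect the abstract mentions.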
Related papers
- Temporal Cross-Attention for Dynamic Embedding and Tokenization of Multimodal Electronic Health Records [1.6609516435725236]
We introduce a dynamic embedding and tokenization framework for precise representation of multimodal clinical time series.
Our framework outperformed baseline approaches on the task of predicting the occurrence of nine postoperative complications.
arXiv Detail & Related papers (2024-03-06T19:46:44Z)
- Sequential Multi-Dimensional Self-Supervised Learning for Clinical Time Series [3.635056427544418]
We propose a new self-supervised learning method for clinical time series data.
Our method is agnostic to the specific form of loss function used at each level.
We evaluate our method on two real-world clinical datasets.
arXiv Detail & Related papers (2023-07-20T14:49:58Z)
- Time Associated Meta Learning for Clinical Prediction [78.99422473394029]
We propose a novel time associated meta learning (TAML) method to make effective predictions at multiple future time points.
To address the sparsity problem after task splitting, TAML employs a temporal information sharing strategy to augment the number of positive samples.
We demonstrate the effectiveness of TAML on multiple clinical datasets, where it consistently outperforms a range of strong baselines.
arXiv Detail & Related papers (2023-03-05T03:54:54Z)
- Long-term stable Electromyography classification using Canonical Correlation Analysis [5.949779668853555]
Discrimination of hand gestures based on surface electromyography (sEMG) signals is a well-established approach for controlling prosthetic devices.
One of the most critical challenges is maintaining high EMG data classification performance across multiple days without retraining the decoding system.
Here we propose a novel statistical method that stabilizes EMG classification performance across multiple days for long-term control of prosthetic devices.
arXiv Detail & Related papers (2023-01-23T21:45:00Z)
- Granger Causal Chain Discovery for Sepsis-Associated Derangements via Continuous-Time Hawkes Processes [10.410454851418548]
We develop a scalable two-phase gradient-based method to obtain a maximum surrogate-likelihood estimator.
We extend our method to a dataset of patients admitted to the Grady hospital system in Atlanta, GA, USA, where the estimated GC graph identifies several highly interpretable GC chains that precede sepsis.
arXiv Detail & Related papers (2022-09-09T18:21:30Z)
- Learning Temporal Rules from Noisy Timeseries Data [72.93572292157593]
We focus on uncovering the underlying atomic events and their relations that lead to the composite events within a noisy temporal data setting.
We propose a Neural Temporal Logic Programming (Neural TLP) which first learns implicit temporal relations between atomic events and then lifts logic rules for supervision.
arXiv Detail & Related papers (2022-02-11T01:29:02Z)
- Learning Neural Models for Continuous-Time Sequences [0.0]
We study the properties of continuous-time event sequences (CTES) and design robust yet scalable neural network-based models to overcome the aforementioned problems.
In this work, we model the underlying generative distribution of events using marked temporal point processes (MTPP) to address a wide range of real-world problems.
arXiv Detail & Related papers (2021-11-13T20:39:15Z)
- Interval-censored Hawkes processes [82.87738318505582]
We propose a model to estimate the parameters of a Hawkes process in interval-censored settings.
We show how a non-homogeneous approximation to the Hawkes process admits a tractable likelihood in the interval-censored setting.
arXiv Detail & Related papers (2021-04-16T07:29:04Z)
- Multi-Scale One-Class Recurrent Neural Networks for Discrete Event Sequence Anomaly Detection [63.825781848587376]
We propose OC4Seq, a one-class recurrent neural network for detecting anomalies in discrete event sequences.
Specifically, OC4Seq embeds the discrete event sequences into latent spaces, where anomalies can be easily detected.
arXiv Detail & Related papers (2020-08-31T04:48:22Z)
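The OC4Seq entry above describes a one-class idea: embed each event sequence in a latent space and flag outliers there. Below is a minimal sketch of that idea, assuming an RNN encoder and a Deep SVDD-style distance-to-center anomaly score; the class name, architecture, and loss are illustrative, not the paper's exact design.

```python
# Sketch of a one-class sequence anomaly detector in the spirit of OC4Seq
# (illustrative assumptions): a GRU embeds each event sequence, and the
# squared distance to a fixed center serves as the anomaly score.
import torch
import torch.nn as nn


class OneClassSeqEncoder(nn.Module):
    def __init__(self, num_event_types: int, d_embed: int = 32, d_hidden: int = 64):
        super().__init__()
        self.embed = nn.Embedding(num_event_types, d_embed)
        self.rnn = nn.GRU(d_embed, d_hidden, batch_first=True)
        # Center of the "normal" hypersphere in latent space.
        self.register_buffer("center", torch.zeros(d_hidden))

    def forward(self, events: torch.Tensor) -> torch.Tensor:
        # events: (batch, seq_len) integer event-type ids.
        _, h = self.rnn(self.embed(events))
        return h[-1]  # (batch, d_hidden) sequence embedding

    def anomaly_score(self, events: torch.Tensor) -> torch.Tensor:
        # Larger distance from the center => more anomalous sequence.
        return ((self(events) - self.center) ** 2).sum(dim=-1)


model = OneClassSeqEncoder(num_event_types=10)
normal_batch = torch.randint(0, 10, (4, 20))
loss = model.anomaly_score(normal_batch).mean()  # minimized on normal data
loss.backward()
```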
- A Multi-Channel Neural Graphical Event Model with Negative Evidence [76.51278722190607]
Event datasets are sequences of events of various types occurring irregularly over the time-line.
We propose a non-parametric deep neural network approach in order to estimate the underlying intensity functions.
arXiv Detail & Related papers (2020-02-21T23:10:50Z)
- Transformer Hawkes Process [79.16290557505211]
We propose a Transformer Hawkes Process (THP) model, which leverages the self-attention mechanism to capture long-term dependencies.
THP outperforms existing models in terms of both likelihood and event prediction accuracy by a notable margin.
We provide a concrete example, where THP achieves improved prediction performance for learning multiple point processes when incorporating their relational information.
arXiv Detail & Related papers (2020-02-21T13:48:13Z)
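Several of the entries above (the Granger causal chain discovery, interval-censored Hawkes, and Transformer Hawkes papers) build on the classical Hawkes process. For reference, its conditional intensity takes the self-exciting form below; the exponential kernel is one common parameterization, shown for concreteness rather than as any of these papers' exact choice.

```latex
% Conditional intensity of a self-exciting Hawkes process: a base rate \mu
% plus excitation from every past event t_i < t, here with an exponential
% decay kernel of magnitude \alpha and decay rate \beta.
\lambda^{*}(t) = \mu + \sum_{t_i < t} \alpha \, e^{-\beta (t - t_i)},
\qquad \mu, \alpha, \beta > 0
```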
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.