Continuous-time convolutions model of event sequences
- URL: http://arxiv.org/abs/2302.06247v1
- Date: Mon, 13 Feb 2023 10:34:51 GMT
- Title: Continuous-time convolutions model of event sequences
- Authors: Vladislav Zhuzhel, Vsevolod Grabar, Galina Boeva, Artem Zabolotnyi,
Alexander Stepikin, Vladimir Zholobov, Maria Ivanova, Mikhail Orlov, Ivan
Kireev, Evgeny Burnaev, Rodrigo Rivera-Castro and Alexey Zaytsev
- Abstract summary: Huge samples of event sequence data occur in various domains, including e-commerce, healthcare, and finance.
The amount of available data and the length of event sequences per client are typically large, thus requiring long-term modelling.
We propose the COTIC method based on a continuous convolution neural network suitable for non-uniform occurrence of events in time.
- Score: 53.36665135225617
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Massive samples of event sequence data occur in various domains, including
e-commerce, healthcare, and finance. There are two main challenges regarding
inference on such data: computational and methodological. The amount of
available data and the length of event sequences per client are typically
large, thus requiring long-term modelling. Moreover, this data is often
sparse and non-uniform, making classic approaches for time series processing
inapplicable. Existing solutions for such cases include recurrent and transformer
architectures. To allow continuous time, these approaches introduce
specific parametric intensity functions, defined at each moment, on top of
existing models. Due to their parametric nature, these intensities can represent only
a limited class of event sequences.
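For concreteness, here is a minimal sketch of one such parametric intensity: the classic self-exciting (Hawkes) intensity with an exponentially decaying kernel, where each past event temporarily raises the rate of future events. The function name and parameter values below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def hawkes_intensity(t, event_times, mu=0.1, alpha=0.5, beta=1.0):
    """lambda(t) = mu + sum_{t_i < t} alpha * exp(-beta * (t - t_i)).

    mu is the base rate; each past event adds an excitation alpha that
    decays exponentially with rate beta (self-excitement).
    """
    past = event_times[event_times < t]
    return mu + np.sum(alpha * np.exp(-beta * (t - past)))

events = np.array([0.2, 1.1, 1.3])
print(hawkes_intensity(1.4, events))  # elevated: recent events still excite the process
print(hawkes_intensity(6.0, events))  # close to mu: the excitation has decayed
```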
We propose the COTIC method, based on a continuous convolutional neural network
suited to the non-uniform occurrence of events in time. In COTIC, dilations and a
multi-layer architecture efficiently handle dependencies between events.
Furthermore, the model provides general intensity dynamics in continuous time,
including the self-excitement encountered in practice.
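To make the idea of a continuous convolution concrete, below is a minimal PyTorch sketch of a convolution over non-uniformly spaced events, with kernel weights produced by a small MLP of time offsets. It illustrates the general mechanism only; the class name, shapes, and the quadratic pairwise formulation are simplifying assumptions, not the authors' implementation (see the GitHub repository referenced below for the actual code).

```python
import torch
import torch.nn as nn

class ContinuousConv1d(nn.Module):
    """Causal convolution over irregularly spaced event times.

    Instead of one fixed weight per integer lag, the kernel weight applied to a
    past event is produced by a small MLP of the time offset t_l - t_j, so the
    layer is defined for arbitrary, non-uniform timestamps.
    """

    def __init__(self, in_dim: int, out_dim: int, hidden: int = 16):
        super().__init__()
        self.in_dim, self.out_dim = in_dim, out_dim
        self.kernel = nn.Sequential(
            nn.Linear(1, hidden), nn.ReLU(),
            nn.Linear(hidden, in_dim * out_dim),
        )

    def forward(self, times: torch.Tensor, feats: torch.Tensor) -> torch.Tensor:
        # times: (B, L) event timestamps, feats: (B, L, in_dim) event features
        B, L, _ = feats.shape
        dt = times.unsqueeze(2) - times.unsqueeze(1)      # (B, L, L): offsets t_l - t_j
        causal = (dt >= 0).float().unsqueeze(-1)          # mask out future events
        w = self.kernel(dt.unsqueeze(-1)) * causal        # time-dependent kernel weights
        w = w.view(B, L, L, self.in_dim, self.out_dim)
        # Output at position l aggregates past features through the kernel at t_l - t_j.
        return torch.einsum('bji,bljio->blo', feats, w)

times = torch.tensor([[0.0, 0.4, 1.7, 2.1]])   # one sequence of non-uniform timestamps
feats = torch.randn(1, 4, 8)
layer = ContinuousConv1d(in_dim=8, out_dim=8)
print(layer(times, feats).shape)               # torch.Size([1, 4, 8])
```

Per the abstract, COTIC stacks several such continuous convolution layers with dilations and uses the resulting event-sequence embeddings for downstream tasks such as next-event-type and return-time prediction.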
The COTIC model outperforms existing approaches on the majority of the considered
datasets, producing embeddings for an event sequence that can be used to solve
downstream tasks, e.g. predicting the next event type and the return time. The code of
the proposed method can be found in the GitHub repository
(https://github.com/VladislavZh/COTIC).
Related papers
- Scalable Event-by-event Processing of Neuromorphic Sensory Signals With Deep State-Space Models [2.551844666707809]
Event-based sensors are well suited for real-time processing.
Current methods either collapse events into frames or cannot scale up when processing the event data directly event-by-event.
arXiv Detail & Related papers (2024-04-29T08:50:27Z) - Timer: Generative Pre-trained Transformers Are Large Time Series Models [83.03091523806668]
This paper aims at the early development of large time series models (LTSM)
During pre-training, we curate large-scale datasets with up to 1 billion time points.
To meet diverse application needs, we convert forecasting, imputation, and anomaly detection of time series into a unified generative task.
arXiv Detail & Related papers (2024-02-04T06:55:55Z) - XTSFormer: Cross-Temporal-Scale Transformer for Irregular Time Event
Prediction [9.240950990926796]
Event prediction aims to forecast the time and type of a future event based on a historical event sequence.
Despite its significance, several challenges exist, including the irregularity of time intervals between consecutive events, the existence of cycles, periodicity, and multi-scale event interactions.
arXiv Detail & Related papers (2024-02-03T20:33:39Z) - Probabilistic Modeling for Sequences of Sets in Continuous-Time [14.423456635520084]
We develop a general framework for modeling set-valued data in continuous-time.
We also develop inference methods that can use such models to answer probabilistic queries.
arXiv Detail & Related papers (2023-12-22T20:16:10Z) - Multi-scale Attention Flow for Probabilistic Time Series Forecasting [68.20798558048678]
We propose a novel non-autoregressive deep learning model, called Multi-scale Attention Normalizing Flow (MANF).
Our model avoids the influence of cumulative error and does not increase the time complexity.
Our model achieves state-of-the-art performance on many popular multivariate datasets.
arXiv Detail & Related papers (2022-05-16T07:53:42Z) - Synergetic Learning of Heterogeneous Temporal Sequences for
Multi-Horizon Probabilistic Forecasting [48.8617204809538]
We propose Variational Synergetic Multi-Horizon Network (VSMHN), a novel deep conditional generative model.
To learn complex correlations across heterogeneous sequences, a tailored encoder is devised to combine the advances in deep point process models and variational recurrent neural networks.
Our model can be trained effectively using variational inference and generates predictions with Monte-Carlo simulation.
arXiv Detail & Related papers (2021-01-31T11:00:55Z) - User-Dependent Neural Sequence Models for Continuous-Time Event Data [27.45413274751265]
Continuous-time event data are common in applications such as individual behavior data, financial transactions, and medical health records.
Recurrent neural networks that parameterize time-varying intensity functions are the current state-of-the-art for predictive modeling with such data.
In this paper, we extend the broad class of neural marked point process models to mixtures of latent embeddings.
arXiv Detail & Related papers (2020-11-06T08:32:57Z) - A Multi-Channel Neural Graphical Event Model with Negative Evidence [76.51278722190607]
Event datasets are sequences of events of various types occurring irregularly over the time-line.
We propose a non-parametric deep neural network approach in order to estimate the underlying intensity functions.
arXiv Detail & Related papers (2020-02-21T23:10:50Z) - Convolutional Tensor-Train LSTM for Spatio-temporal Learning [116.24172387469994]
We propose a higher-order LSTM model that can efficiently learn long-term correlations in the video sequence.
This is accomplished through a novel tensor train module that performs prediction by combining convolutional features across time.
Our results achieve state-of-the-art performance in a wide range of applications and datasets.
arXiv Detail & Related papers (2020-02-21T05:00:01Z)