Continuous-time convolutions model of event sequences
- URL: http://arxiv.org/abs/2302.06247v1
- Date: Mon, 13 Feb 2023 10:34:51 GMT
- Title: Continuous-time convolutions model of event sequences
- Authors: Vladislav Zhuzhel, Vsevolod Grabar, Galina Boeva, Artem Zabolotnyi,
Alexander Stepikin, Vladimir Zholobov, Maria Ivanova, Mikhail Orlov, Ivan
Kireev, Evgeny Burnaev, Rodrigo Rivera-Castro and Alexey Zaytsev
- Abstract summary: Huge samples of event sequence data occur in various domains, including e-commerce, healthcare, and finance.
The amount of available data and the length of event sequences per client are typically large, thus requiring long-term modelling.
We propose the COTIC method, based on a continuous convolution neural network suited to the non-uniform occurrence of events in time.
- Score: 53.36665135225617
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Massive samples of event sequence data occur in various domains,
including e-commerce, healthcare, and finance. There are two main challenges
in inference on such data: computational and methodological. The amount of
available data and the length of event sequences per client are typically
large, thus requiring long-term modelling. Moreover, this data is often
sparse and non-uniform, making classic approaches for time series processing
inapplicable. Existing solutions for such cases include recurrent and
transformer architectures. To allow continuous time, their authors introduce
specific parametric intensity functions, defined at each moment, on top of
existing models. Due to their parametric nature, these intensities can
represent only a limited class of event sequences.
We propose the COTIC method, based on a continuous convolution neural network
suited to the non-uniform occurrence of events in time. In COTIC, dilations
and a multi-layer architecture efficiently handle dependencies between
events. Furthermore, the model captures general intensity dynamics in
continuous time, including the self-excitement encountered in practice.
The COTIC model outperforms existing approaches on the majority of the
considered datasets, producing embeddings for an event sequence that can be
used to solve downstream tasks, e.g., predicting the next event type and the
return time. The code of
the proposed method can be found in the GitHub repository
(https://github.com/VladislavZh/COTIC).
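The core idea can be sketched as a convolution whose kernel is itself a small
network over inter-event time differences, so irregularly spaced events can
be convolved directly. The following is a minimal PyTorch sketch, not the
authors' implementation; names such as ContConv1d and kernel_net are invented
here, and the real code lives in the repository above.

```python
import torch
import torch.nn as nn

class ContConv1d(nn.Module):
    """Sketch of a continuous-time convolution: the kernel is a function of
    the real-valued time gap between events, so spacing need not be uniform."""

    def __init__(self, in_dim: int, out_dim: int, window: int):
        super().__init__()
        self.window = window  # how many past events each output attends to
        # Maps a time difference dt to an (in_dim x out_dim) kernel matrix.
        self.kernel_net = nn.Sequential(
            nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, in_dim * out_dim)
        )
        self.in_dim, self.out_dim = in_dim, out_dim

    def forward(self, times: torch.Tensor, feats: torch.Tensor) -> torch.Tensor:
        # times: (batch, seq) event timestamps; feats: (batch, seq, in_dim)
        B, S, _ = feats.shape
        out = feats.new_zeros(B, S, self.out_dim)
        for k in range(1, self.window + 1):
            # Time gap to the k-th previous event.
            dt = (times[:, k:] - times[:, :-k]).unsqueeze(-1)  # (B, S-k, 1)
            W = self.kernel_net(dt).view(B, S - k, self.in_dim, self.out_dim)
            # Weight the k-lagged features with the time-dependent kernel.
            out[:, k:] = out[:, k:] + torch.einsum(
                "bsi,bsio->bso", feats[:, :-k], W
            )
        return out
```

Stacking several such layers with dilations, as the paper describes, extends
the receptive field over long sequences while keeping the per-layer cost
modest.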
Related papers
- Sequential-Parallel Duality in Prefix Scannable Models [68.39855814099997]
Recent developments have given rise to various models, such as Gated Linear Attention (GLA) and Mamba.
This raises a natural question: can we characterize the full class of neural sequence models that support near-constant-time parallel evaluation and linear-time, constant-space sequential inference?
arXiv Detail & Related papers (2025-06-12T17:32:02Z)
- Scalable Event-by-event Processing of Neuromorphic Sensory Signals With Deep State-Space Models [2.551844666707809]
Event-based sensors are well suited for real-time processing.
Current methods either collapse events into frames or cannot scale up when processing the event data directly event-by-event.
arXiv Detail & Related papers (2024-04-29T08:50:27Z)
- TimeGraphs: Graph-based Temporal Reasoning [64.18083371645956]
TimeGraphs is a novel approach that characterizes dynamic interactions as a hierarchical temporal graph.
Our approach models the interactions using a compact graph-based representation, enabling adaptive reasoning across diverse time scales.
We evaluate TimeGraphs on multiple datasets with complex, dynamic agent interactions, including a football simulator, the Resistance game, and the MOMA human activity dataset.
arXiv Detail & Related papers (2024-01-06T06:26:49Z)
- Flexible Triggering Kernels for Hawkes Process Modeling [11.90725359131405]
Recently proposed encoder-decoder structures for modeling Hawkes processes use transformer-inspired architectures.
We introduce an efficient and general encoding of the historical event sequence by replacing the complex (multilayered) attention structures with triggering kernels.
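For context, a triggering kernel adds a decaying contribution to the event
intensity for each past event. A minimal sketch with a textbook exponential
kernel follows; the parameter values are illustrative, and this is the
classic fixed-form kernel rather than the flexible kernels this paper
proposes.

```python
import math

def hawkes_intensity(t: float, event_times: list[float],
                     mu: float = 0.1, alpha: float = 0.5,
                     beta: float = 1.0) -> float:
    """lambda(t) = mu + sum_{t_i < t} alpha * exp(-beta * (t - t_i))."""
    return mu + sum(alpha * math.exp(-beta * (t - ti))
                    for ti in event_times if ti < t)

# Each past event temporarily raises the rate of future events
# (self-excitement); the effect decays at rate beta.
print(hawkes_intensity(2.0, [0.5, 1.5]))  # approx 0.515
```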
arXiv Detail & Related papers (2022-02-03T22:02:22Z)
- SeDyT: A General Framework for Multi-Step Event Forecasting via Sequence Modeling on Dynamic Entity Embeddings [6.314274045636102]
Event forecasting is a critical and challenging task in Temporal Knowledge Graph reasoning.
We propose SeDyT, a discriminative framework that performs sequence modeling on the dynamic entity embeddings.
By combining temporal Graph Neural Network models and sequence models, SeDyT achieves an average of 2.4% MRR improvement.
arXiv Detail & Related papers (2021-09-09T20:32:48Z)
- Closed-form Continuous-Depth Models [99.40335716948101]
Continuous-depth neural models rely on advanced numerical differential equation solvers.
We present a new family of models, termed Closed-form Continuous-depth (CfC) networks, that are simple to describe and at least one order of magnitude faster.
arXiv Detail & Related papers (2021-06-25T22:08:51Z)
- Synergetic Learning of Heterogeneous Temporal Sequences for Multi-Horizon Probabilistic Forecasting [48.8617204809538]
We propose Variational Synergetic Multi-Horizon Network (VSMHN), a novel deep conditional generative model.
To learn complex correlations across heterogeneous sequences, a tailored encoder is devised to combine the advances in deep point process models and variational recurrent neural networks.
Our model can be trained effectively using variational inference and generates predictions with Monte-Carlo simulation.
arXiv Detail & Related papers (2021-01-31T11:00:55Z)
- Convolutional Tensor-Train LSTM for Spatio-temporal Learning [116.24172387469994]
We propose a higher-order LSTM model that can efficiently learn long-term correlations in the video sequence.
This is accomplished through a novel tensor train module that performs prediction by combining convolutional features across time.
We achieve state-of-the-art performance on a wide range of applications and datasets.
arXiv Detail & Related papers (2020-02-21T05:00:01Z)
- Supervised Learning for Non-Sequential Data: A Canonical Polyadic Decomposition Approach [85.12934750565971]
Efficient modelling of feature interactions underpins supervised learning for non-sequential tasks, but representing all interactions explicitly quickly becomes intractable.
To alleviate this issue, it has been proposed to implicitly represent the model parameters as a tensor.
For enhanced expressiveness, we generalize the framework to allow feature mapping to arbitrarily high-dimensional feature vectors.
arXiv Detail & Related papers (2020-01-27T22:38:40Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and accepts no responsibility for any consequences of its use.