Retrieving Continuous Time Event Sequences using Neural Temporal Point
Processes with Learnable Hashing
- URL: http://arxiv.org/abs/2307.09613v1
- Date: Thu, 13 Jul 2023 18:54:50 GMT
- Title: Retrieving Continuous Time Event Sequences using Neural Temporal Point
Processes with Learnable Hashing
- Authors: Vinayak Gupta and Srikanta Bedathur and Abir De
- Abstract summary: We propose NeuroSeqRet, a first-of-its-kind framework designed specifically for end-to-end CTES retrieval.
We develop four variants of the relevance model for different kinds of applications based on the trade-off between accuracy and efficiency.
Our experiments show the significant accuracy boost of NeuroSeqRet as well as the efficacy of our hashing mechanism.
- Score: 24.963828650935913
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Temporal sequences have become pervasive in various real-world applications.
Consequently, the volume of data generated in the form of continuous time-event
sequence(s) or CTES(s) has increased exponentially in the past few years. Thus,
a significant fraction of the ongoing research on CTES datasets involves
designing models to address downstream tasks such as next-event prediction,
long-term forecasting, sequence classification, etc. The recent developments in
predictive modeling using marked temporal point processes (MTPP) have enabled
an accurate characterization of several real-world applications involving the
CTESs. However, due to the complex nature of these CTES datasets, the task of
large-scale retrieval of temporal sequences has been overlooked by the past
literature. Concretely, by CTES retrieval we mean that for an input query
sequence, a retrieval system must return a ranked list of relevant sequences
from a large corpus. To tackle this, we propose NeuroSeqRet, a
first-of-its-kind framework designed specifically for end-to-end CTES
retrieval. Specifically, NeuroSeqRet introduces multiple enhancements over
standard retrieval frameworks and first applies a trainable unwarping function
on the query sequence, which makes it comparable with corpus sequences,
especially when a relevant query-corpus pair has individually different
attributes. Next, it feeds the unwarped query sequence and the corpus sequence
into MTPP-guided neural relevance models. We develop four variants of the
relevance model for different kinds of applications based on the trade-off
between accuracy and efficiency. We also propose an optimization framework to
learn binary sequence embeddings from the relevance scores, suitable for
locality-sensitive hashing. Our experiments show a significant accuracy boost
from NeuroSeqRet as well as the efficacy of our hashing mechanism.
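To make the described pipeline concrete, below is a minimal sketch of its three stages (unwarping, relevance scoring, binarization for hashing). It assumes PyTorch; the names Unwarp, SeqEncoder, relevance, and binary_code are illustrative stand-ins, a GRU substitutes for the paper's MTPP-guided relevance models, and cosine similarity substitutes for the learned relevance score.

```python
# Illustrative sketch of a NeuroSeqRet-style retrieval pipeline.
# All class/function names are hypothetical; the paper's actual
# architecture, training losses, and four variants differ in detail.
import torch
import torch.nn as nn

class Unwarp(nn.Module):
    """Trainable monotone unwarping of query event times (assumption:
    monotonicity keeps the event order intact)."""
    def __init__(self, hidden=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(1, hidden), nn.Tanh(),
                                 nn.Linear(hidden, 1))
    def forward(self, t):                  # t: (seq_len, 1) event times
        # softplus of increments => strictly increasing unwarped times
        deltas = nn.functional.softplus(self.net(t))
        return torch.cumsum(deltas, dim=0)

class SeqEncoder(nn.Module):
    """GRU stand-in for an MTPP-guided sequence encoder."""
    def __init__(self, dim=32):
        super().__init__()
        self.rnn = nn.GRU(input_size=1, hidden_size=dim, batch_first=True)
    def forward(self, t):                  # t: (seq_len, 1)
        _, h = self.rnn(t.unsqueeze(0))
        return h.squeeze()                 # (dim,) sequence embedding

def relevance(query_emb, corpus_emb):
    # Cosine similarity as a simple proxy for the learned relevance model.
    return nn.functional.cosine_similarity(query_emb, corpus_emb, dim=0)

def binary_code(emb):
    # Sign-based binarization: the embedding's signs act as a binary hash
    # code for bucketing with locality-sensitive hashing.
    return (emb > 0).int()

# Usage (untrained weights, purely for shape/flow illustration):
unwarp, enc = Unwarp(), SeqEncoder()
query = torch.tensor([[0.1], [0.5], [1.2], [2.0]])
corpus = [torch.rand(6, 1).cumsum(0) for _ in range(4)]
q_emb = enc(unwarp(query))
scores = [relevance(q_emb, enc(c)).item() for c in corpus]
print(sorted(range(4), key=lambda i: -scores[i]))  # ranked corpus indices
print(binary_code(q_emb))                          # LSH-ready binary code
```

The sign-based codes group similar sequences into the same hash buckets, so at query time only the sequences in the matching bucket need full relevance scoring rather than the entire corpus.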
Related papers
- Does It Look Sequential? An Analysis of Datasets for Evaluation of Sequential Recommendations [0.8437187555622164]
Sequential recommender systems aim to use the order of interactions in a user's history to predict future interactions.
It is crucial to use datasets that exhibit a sequential structure to evaluate sequential recommenders properly.
We apply several methods based on the random shuffling of the user's sequence of interactions to assess the strength of sequential structure across 15 datasets.
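As an illustration of this idea (not the paper's exact metrics), a sequence-aware predictor can be scored on a dataset before and after per-user shuffling; a small gap suggests weak sequential structure. The markov_hit_rate helper below is a hypothetical stand-in for the paper's measures.

```python
# A minimal, illustrative shuffle test: if a sequence-aware score barely
# drops after shuffling each user's interactions, the dataset carries
# little sequential signal.
import random
from collections import Counter, defaultdict

def markov_hit_rate(sequences):
    """Hit rate of a first-order 'most frequent successor' predictor
    (fit and evaluated on the same data, for illustration only)."""
    successors = defaultdict(Counter)
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            successors[a][b] += 1
    hits = total = 0
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            total += 1
            hits += successors[a].most_common(1)[0][0] == b
    return hits / max(total, 1)

def shuffled(sequences, seed=0):
    rng = random.Random(seed)
    return [rng.sample(seq, len(seq)) for seq in sequences]

# Usage: a strongly ordered toy dataset vs its shuffled counterpart.
data = [[1, 2, 3, 4, 5]] * 50 + [[2, 3, 4, 5, 1]] * 50
print(markov_hit_rate(data), markov_hit_rate(shuffled(data)))
```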
arXiv Detail & Related papers (2024-08-21T21:40:07Z)
- TimeSiam: A Pre-Training Framework for Siamese Time-Series Modeling [67.02157180089573]
Time series pre-training has recently garnered wide attention for its potential to reduce labeling expenses and benefit various downstream tasks.
This paper proposes TimeSiam as a simple but effective self-supervised pre-training framework for time series based on Siamese networks.
arXiv Detail & Related papers (2024-02-04T13:10:51Z)
- Modeling Time-Series and Spatial Data for Recommendations and Other Applications [1.713291434132985]
We address the problems that may arise due to the poor quality of CTES data being fed into a recommender system.
To improve the quality of the CTES data, we address a fundamental problem of overcoming missing events in temporal sequences.
We extend the abilities of these models to design solutions for large-scale CTES retrieval and human activity prediction.
arXiv Detail & Related papers (2022-12-25T09:34:15Z)
- Towards Out-of-Distribution Sequential Event Prediction: A Causal Treatment [72.50906475214457]
The goal of sequential event prediction is to estimate the next event based on a sequence of historical events.
In practice, next-event prediction models are trained on sequential data collected at one point in time, so they may face out-of-distribution sequences at deployment.
We propose a framework with hierarchical branching structures for learning context-specific representations.
arXiv Detail & Related papers (2022-10-24T07:54:13Z)
- Learning Sequence Representations by Non-local Recurrent Neural Memory [61.65105481899744]
We propose a Non-local Recurrent Neural Memory (NRNM) for supervised sequence representation learning.
Our model captures long-range dependencies and distills latent high-level features.
Our model compares favorably against other state-of-the-art methods specifically designed for each of these sequence applications.
arXiv Detail & Related papers (2022-07-20T07:26:15Z)
- Learning Temporal Point Processes for Efficient Retrieval of Continuous Time Event Sequences [24.963828650935913]
We propose NEUROSEQRET, which learns to retrieve and rank a relevant set of continuous-time event sequences for a given query sequence.
We develop two variants of the relevance model, which offer a tradeoff between accuracy and efficiency.
Our experiments with several datasets show the significant accuracy boost of NEUROSEQRET beyond several baselines.
arXiv Detail & Related papers (2022-02-17T11:16:31Z)
- Complex Event Forecasting with Prediction Suffix Trees: Extended Technical Report [70.7321040534471]
Complex Event Recognition (CER) systems have become popular in the past two decades due to their ability to "instantly" detect patterns on real-time streams of events.
However, there is a lack of methods for forecasting when a pattern might occur, before such an occurrence is actually detected by a CER engine.
We present a formal framework that attempts to address the issue of Complex Event Forecasting.
arXiv Detail & Related papers (2021-09-01T09:52:31Z)
- Time Series is a Special Sequence: Forecasting with Sample Convolution and Interaction [9.449017120452675]
Time series is a special type of sequence data, a set of observations collected at even intervals of time and ordered chronologically.
Existing deep learning techniques use generic sequence models for time series analysis, which ignore some of its unique properties.
We propose a novel neural network architecture and apply it to the time series forecasting problem, conducting sample convolution and interaction at multiple resolutions for temporal modeling.
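A single level of this idea can be sketched as follows (an illustrative stand-in; the published architecture stacks many such blocks with additional components): the sequence is downsampled into even- and odd-indexed sub-sequences, each is convolved, and the two exchange information.

```python
# Minimal single-level sketch of "sample convolution and interaction".
import torch
import torch.nn as nn

class SplitConvInteract(nn.Module):
    def __init__(self, channels):
        super().__init__()
        conv = lambda: nn.Conv1d(channels, channels, kernel_size=3, padding=1)
        self.phi, self.psi = conv(), conv()
    def forward(self, x):                       # x: (batch, channels, time)
        even, odd = x[..., ::2], x[..., 1::2]   # downsample into two phases
        # interaction: each sub-sequence is modulated by conv features
        # of the other, so both resolutions share information
        even2 = even * torch.exp(self.phi(odd))
        odd2 = odd * torch.exp(self.psi(even))
        return even2, odd2

# Usage: halve the temporal resolution of a toy batch.
x = torch.randn(2, 4, 16)
even_out, odd_out = SplitConvInteract(4)(x)
print(even_out.shape, odd_out.shape)            # (2, 4, 8) each
```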
arXiv Detail & Related papers (2021-06-17T08:15:04Z)
- Interpretable Feature Construction for Time Series Extrinsic Regression [0.028675177318965035]
In some application domains, the target variable is numerical, and the problem is known as time series extrinsic regression (TSER).
We suggest an extension of a Bayesian method for robust and interpretable feature construction and selection in the context of TSER.
Our approach tackles TSER relationally: (i) we build various simple representations of the time series, which are stored in a relational data scheme; then (ii) a propositionalisation technique is applied to build interpretable features from the secondary tables, "flattening" the data.
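As a toy illustration of steps (i) and (ii) (hypothetical tables and aggregates; the paper's Bayesian construction and selection method is more involved), a secondary relational table of per-timestamp rows can be flattened into per-series features:

```python
# Illustrative propositionalisation: relational scheme -> flat features.
import pandas as pd

# secondary table: one row per (series, timestamp)
points = pd.DataFrame({
    "series_id": [0, 0, 0, 1, 1, 1],
    "t":         [0, 1, 2, 0, 1, 2],
    "value":     [1.0, 2.0, 4.0, 3.0, 3.0, 3.0],
})
# a derived simple representation (first differences), stored relationally
points["diff"] = points.groupby("series_id")["value"].diff()

# "flatten": aggregate secondary-table columns into interpretable,
# per-series features usable by a standard regressor
flat = points.groupby("series_id").agg(
    mean_value=("value", "mean"),
    max_diff=("diff", "max"),
)
print(flat)
```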
arXiv Detail & Related papers (2021-03-15T08:12:19Z)
- Transformer Hawkes Process [79.16290557505211]
We propose a Transformer Hawkes Process (THP) model, which leverages the self-attention mechanism to capture long-term dependencies.
THP outperforms existing models in terms of both likelihood and event prediction accuracy by a notable margin.
We provide a concrete example where THP achieves improved prediction performance when learning multiple point processes by incorporating their relational information.
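For context, THP's conditional intensity for event type k between consecutive events is commonly written with a softplus link over the self-attention hidden state (reproduced here as a sketch of the published form, so treat the exact parameterization as indicative):

$$ \lambda_k(t \mid \mathcal{H}_t) = \operatorname{softplus}\!\Big( \alpha_k \,\frac{t - t_j}{t_j} + \mathbf{w}_k^\top \mathbf{h}(t_j) + b_k \Big), \qquad t \in [t_j, t_{j+1}), $$

where h(t_j) is the attention-derived history embedding after the j-th event, and the softplus keeps the intensity positive.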
arXiv Detail & Related papers (2020-02-21T13:48:13Z)
- Convolutional Tensor-Train LSTM for Spatio-temporal Learning [116.24172387469994]
We propose a higher-order LSTM model that can efficiently learn long-term correlations in the video sequence.
This is accomplished through a novel tensor train module that performs prediction by combining convolutional features across time.
Our results achieve state-of-the-art performance across a wide range of applications and datasets.
arXiv Detail & Related papers (2020-02-21T05:00:01Z)