Learning Temporal Point Processes for Efficient Retrieval of Continuous Time Event Sequences
- URL: http://arxiv.org/abs/2202.11485v1
- Date: Thu, 17 Feb 2022 11:16:31 GMT
- Title: Learning Temporal Point Processes for Efficient Retrieval of Continuous Time Event Sequences
- Authors: Vinayak Gupta and Srikanta Bedathur and Abir De
- Abstract summary: We propose NEUROSEQRET, which learns to retrieve and rank a relevant set of continuous-time event sequences for a given query sequence.
We develop two variants of the relevance model which offer a tradeoff between accuracy and efficiency.
Our experiments with several datasets show a significant accuracy boost of NEUROSEQRET over several baselines.
- Score: 24.963828650935913
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent developments in predictive modeling using marked temporal point
processes (MTPP) have enabled an accurate characterization of several
real-world applications involving continuous-time event sequences (CTESs).
However, the retrieval problem for such sequences remains largely unaddressed in
the literature. To tackle this, we propose NEUROSEQRET, which learns to retrieve and
rank a relevant set of continuous-time event sequences for a given query
sequence, from a large corpus of sequences. More specifically, NEUROSEQRET
first applies a trainable unwarping function on the query sequence, which makes
it comparable with corpus sequences, especially when a relevant query-corpus
pair has individually different attributes. Next, it feeds the unwarped query
sequence and the corpus sequence into MTPP guided neural relevance models. We
develop two variants of the relevance model which offer a tradeoff between
accuracy and efficiency. We also propose an optimization framework to learn
binary sequence embeddings from the relevance scores, suitable for the
locality-sensitive hashing leading to a significant speedup in returning top-K
results for a given query sequence. Our experiments with several datasets show
the significant accuracy boost of NEUROSEQRET over several baselines, as well
as the efficacy of our hashing mechanism.
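As a rough illustration of the pipeline described in the abstract, the sketch below unwarps a query, scores every corpus sequence, and returns the exact top-K. The `unwarp` and `relevance` functions here are hypothetical stand-ins, not the paper's trainable unwarping function or MTPP-guided neural relevance models.

```python
import numpy as np

rng = np.random.default_rng(0)

def unwarp(times, scale=1.2, shift=0.0):
    # Hypothetical stand-in for the trainable unwarping function:
    # any strictly increasing map preserves the event order.
    return scale * np.asarray(times) + shift

def relevance(query_times, corpus_times):
    # Placeholder score: closeness of mean inter-event gaps. The
    # paper instead uses MTPP-guided neural relevance models.
    return -abs(np.diff(query_times).mean() - np.diff(corpus_times).mean())

# Toy corpus of continuous-time event sequences (CTESs).
corpus = [np.sort(rng.uniform(0, 10, size=20)) for _ in range(100)]
query = np.sort(rng.uniform(0, 8, size=15))

scores = [relevance(unwarp(query), seq) for seq in corpus]
top_k = np.argsort(scores)[::-1][:5]  # exact top-K by relevance score
print("top-5 corpus sequences:", top_k)
```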
Related papers
- Long-Sequence Recommendation Models Need Decoupled Embeddings [49.410906935283585]
We identify and characterize a neglected deficiency in existing long-sequence recommendation models.
A single set of embeddings struggles with learning both attention and representation, leading to interference between these two processes.
We propose the Decoupled Attention and Representation Embeddings (DARE) model, where two distinct embedding tables are learned separately to fully decouple attention and representation.
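A minimal sketch of the decoupling idea, assuming a toy attention-pooling recommender; the table sizes and scoring rule are illustrative choices, not the DARE architecture itself.

```python
import torch
import torch.nn as nn

class DecoupledEmbeddings(nn.Module):
    def __init__(self, num_items, dim):
        super().__init__()
        self.attn_emb = nn.Embedding(num_items, dim)  # used only for attention logits
        self.repr_emb = nn.Embedding(num_items, dim)  # used only for output representations

    def forward(self, history, target):
        # history: (B, L) item ids; target: (B,) item id
        q = self.attn_emb(target).unsqueeze(1)            # (B, 1, D)
        k = self.attn_emb(history)                        # (B, L, D)
        weights = torch.softmax((q * k).sum(-1), dim=-1)  # (B, L)
        v = self.repr_emb(history)                        # (B, L, D)
        return (weights.unsqueeze(-1) * v).sum(1)         # (B, D) pooled interest

model = DecoupledEmbeddings(num_items=1000, dim=32)
out = model(torch.randint(0, 1000, (4, 10)), torch.randint(0, 1000, (4,)))
print(out.shape)  # torch.Size([4, 32])
```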
arXiv Detail & Related papers (2024-10-03T15:45:15Z)
- Does It Look Sequential? An Analysis of Datasets for Evaluation of Sequential Recommendations [0.8437187555622164]
Sequential recommender systems aim to use the order of interactions in a user's history to predict future interactions.
It is crucial to use datasets that exhibit a sequential structure to evaluate sequential recommenders properly.
We apply several methods based on the random shuffling of the user's sequence of interactions to assess the strength of sequential structure across 15 datasets.
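A small sketch of the shuffle-based test, under the assumption that any sequence-aware metric can stand in for a trained recommender: a large gap between real and shuffled scores suggests genuine sequential structure.

```python
import numpy as np

def sequentiality_gap(sequences, metric, n_shuffles=10, seed=0):
    # Compare a sequence-aware metric on real histories vs. randomly
    # shuffled ones; a large gap suggests genuine sequential structure.
    rng = np.random.default_rng(seed)
    real = np.mean([metric(s) for s in sequences])
    shuffled = np.mean([metric(rng.permutation(s))
                        for _ in range(n_shuffles) for s in sequences])
    return real - shuffled

# Toy metric: how often consecutive interactions repeat the same item.
seqs = [np.array([1, 1, 2, 2, 3]), np.array([5, 5, 5, 6, 7])]
print(sequentiality_gap(seqs, lambda s: float(np.mean(s[:-1] == s[1:]))))
```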
arXiv Detail & Related papers (2024-08-21T21:40:07Z)
- On the Sequence Evaluation based on Stochastic Processes [17.497842325320825]
We propose a novel approach to learn the dynamics of long text sequences, utilizing a negative log-likelihood-based encoder.
We also introduce a likelihood-based evaluation metric for long-text assessment, which measures sequence coherence.
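A hedged sketch of a likelihood-based score: mean per-token negative log-likelihood under some sequence model, read here as a coherence proxy. The paper's metric is built on a stochastic-process encoder, which this stub does not reproduce.

```python
import torch
import torch.nn.functional as F

def nll_coherence(logits, targets):
    # Mean per-token negative log-likelihood; lower NLL is read
    # here as higher sequence coherence.
    # logits: (L, V) next-token scores, targets: (L,) token ids
    return F.cross_entropy(logits, targets).item()

logits = torch.randn(12, 100)           # stand-in model outputs
targets = torch.randint(0, 100, (12,))
print("NLL (lower = more coherent):", nll_coherence(logits, targets))
```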
arXiv Detail & Related papers (2024-05-28T02:33:38Z)
- Activity Grammars for Temporal Action Segmentation [71.03141719666972]
Temporal action segmentation aims at translating an untrimmed activity video into a sequence of action segments.
This paper introduces an effective activity grammar to guide neural predictions for temporal action segmentation.
Experimental results demonstrate that our method significantly improves temporal action segmentation in terms of both performance and interpretability.
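A toy sketch of grammar-guided refinement, assuming a hand-written transition grammar rather than the induced activity grammar and parser the paper actually uses: frame-wise predictions are greedily repaired so that every transition is licensed.

```python
# Hypothetical activity grammar: which actions may follow which.
ALLOWED = {
    "take": {"pour", "take"},
    "pour": {"stir", "pour"},
    "stir": {"stir"},
}

def refine(frames):
    out = [frames[0]]
    for label in frames[1:]:
        # keep the previous label when the transition is ungrammatical
        out.append(label if label in ALLOWED[out[-1]] else out[-1])
    return out

print(refine(["take", "stir", "pour", "pour", "stir"]))
# ['take', 'take', 'pour', 'pour', 'stir']
```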
arXiv Detail & Related papers (2023-12-07T12:45:33Z)
- Retrieving Continuous Time Event Sequences using Neural Temporal Point Processes with Learnable Hashing [24.963828650935913]
We propose NeuroSeqRet, a first-of-its-kind framework designed specifically for end-to-end CTES retrieval.
We develop four variants of the relevance model for different kinds of applications based on the trade-off between accuracy and efficiency.
Our experiments show a significant accuracy boost from NeuroSeqRet, as well as the efficacy of our hashing mechanism.
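A minimal sketch of how binary codes speed up retrieval: here, random hyperplanes produce the bits, whereas the paper learns the binary sequence embeddings from relevance scores.

```python
import numpy as np

rng = np.random.default_rng(0)

def to_bits(embedding, hyperplanes):
    # Sign of random projections -> binary code; sequences with the
    # same code land in the same hash bucket.
    return tuple((hyperplanes @ embedding > 0).astype(int))

dim, n_bits = 16, 8
planes = rng.normal(size=(n_bits, dim))     # shared hash family
corpus_vecs = rng.normal(size=(1000, dim))  # stand-in sequence embeddings

buckets = {}
for idx, v in enumerate(corpus_vecs):
    buckets.setdefault(to_bits(v, planes), []).append(idx)

query = rng.normal(size=dim)
candidates = buckets.get(to_bits(query, planes), [])
print(f"scored {len(candidates)} candidates instead of {len(corpus_vecs)}")
```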
arXiv Detail & Related papers (2023-07-13T18:54:50Z)
- Mutual Exclusivity Training and Primitive Augmentation to Induce Compositionality [84.94877848357896]
Recent datasets expose the lack of systematic generalization ability in standard sequence-to-sequence models.
We analyze this behavior of seq2seq models and identify two contributing factors: a lack of mutual exclusivity bias and the tendency to memorize whole examples.
We show substantial empirical improvements using standard sequence-to-sequence models on two widely-used compositionality datasets.
arXiv Detail & Related papers (2022-11-28T17:36:41Z)
- Learning Sequence Representations by Non-local Recurrent Neural Memory [61.65105481899744]
We propose a Non-local Recurrent Neural Memory (NRNM) for supervised sequence representation learning.
Our model captures long-range dependencies and distills latent high-level features.
Our model compares favorably against other state-of-the-art methods specifically designed for each of these sequence applications.
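An illustrative stand-in for a non-local memory, assuming self-attention over a window of stacked hidden states; the NRNM cell itself differs in its internal structure.

```python
import torch
import torch.nn as nn

class NonLocalMemoryBlock(nn.Module):
    def __init__(self, dim, heads=4):
        super().__init__()
        # Self-attention over a window of stored hidden states lets
        # information flow between non-adjacent timesteps.
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, hidden_window):          # (B, W, D) stacked RNN states
        fused, _ = self.attn(hidden_window, hidden_window, hidden_window)
        return fused.mean(dim=1)                # (B, D) memory summary

block = NonLocalMemoryBlock(dim=64)
print(block(torch.randn(2, 10, 64)).shape)      # torch.Size([2, 64])
```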
arXiv Detail & Related papers (2022-07-20T07:26:15Z)
- Time Series is a Special Sequence: Forecasting with Sample Convolution and Interaction [9.449017120452675]
Time series is a special type of sequence data: a set of observations collected at regular intervals of time and ordered chronologically.
Existing deep learning techniques use generic sequence models for time series analysis, which ignore some of its unique properties.
We propose a novel neural network architecture and apply it for the time series forecasting problem, wherein we conduct sample convolution and interaction at multiple resolutions for temporal modeling.
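A sketch of one downsample-convolve-interact step in the spirit of the paper: split the series into even/odd samples, convolve each half, and let the halves exchange information. Kernel sizes and the interaction rule are illustrative choices, not the paper's exact block.

```python
import torch
import torch.nn as nn

class SampleConvInteract(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv_even = nn.Conv1d(channels, channels, 3, padding=1)
        self.conv_odd = nn.Conv1d(channels, channels, 3, padding=1)

    def forward(self, x):                  # x: (B, C, T), T even
        even, odd = x[..., ::2], x[..., 1::2]
        # interaction: each half is updated with features of the other
        return even + self.conv_odd(odd), odd + self.conv_even(even)

block = SampleConvInteract(channels=8)
a, b = block(torch.randn(4, 8, 32))
print(a.shape, b.shape)                    # (4, 8, 16) twice
```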
arXiv Detail & Related papers (2021-06-17T08:15:04Z)
- Interpretable Feature Construction for Time Series Extrinsic Regression [0.028675177318965035]
In some application domains, the target variable is numerical, and the problem is then known as time series extrinsic regression (TSER).
We suggest an extension of a Bayesian method for robust and interpretable feature construction and selection in the context of TSER.
Our approach tackles TSER in a relational way: (i) we build various simple representations of the time series and store them in a relational data scheme; (ii) we then apply a propositionalisation technique to build interpretable features from the secondary tables, thereby "flattening" the data.
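A minimal sketch of the relational-plus-propositionalisation idea, assuming each series is stored as (id, t, value) rows in a secondary table; the paper's Bayesian feature construction and selection step is omitted.

```python
import numpy as np

def propositionalise(series_table):
    # Flatten a relational time-series store: simple aggregates per
    # series id become interpretable features for a regressor.
    values = {}
    for sid, t, v in series_table:
        values.setdefault(sid, []).append(v)
    return {sid: {"mean": float(np.mean(vs)),
                  "max": float(np.max(vs)),
                  "n": len(vs)} for sid, vs in values.items()}

rows = [(0, 0.0, 1.2), (0, 1.0, 3.4), (1, 0.0, -0.5)]
print(propositionalise(rows))
```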
arXiv Detail & Related papers (2021-03-15T08:12:19Z)
- Tensor Representations for Action Recognition [54.710267354274194]
Human actions in sequences are characterized by the complex interplay between spatial features and their temporal dynamics.
We propose novel tensor representations for capturing higher-order relationships between visual features for the task of action recognition.
We use higher-order tensors and so-called Eigenvalue Power Normalization (EPN), which has long been speculated to perform spectral detection of higher-order occurrences.
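A short sketch of EPN on a symmetric positive semi-definite pooling matrix, with an assumed power gamma: eigenvalues are damped toward uniformity while the eigenvectors are preserved.

```python
import numpy as np

def eigenvalue_power_normalization(matrix, gamma=0.5):
    # Raise eigenvalues of a PSD matrix to 0 < gamma < 1, which damps
    # dominant ("bursty") components; eigenvectors are unchanged.
    vals, vecs = np.linalg.eigh(matrix)
    vals = np.clip(vals, 0.0, None) ** gamma
    return (vecs * vals) @ vecs.T

x = np.random.default_rng(0).normal(size=(10, 4))
pooled = x.T @ x                                      # PSD second-order pooling
print(eigenvalue_power_normalization(pooled).shape)   # (4, 4)
```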
arXiv Detail & Related papers (2020-12-28T17:27:18Z)
- Supervised Learning for Non-Sequential Data: A Canonical Polyadic Decomposition Approach [85.12934750565971]
Efficient modelling of feature interactions underpins supervised learning for non-sequential tasks.
To alleviate the resulting exponential growth in the number of model parameters, it has been proposed to implicitly represent the model parameters as a tensor.
For enhanced expressiveness, we generalize the framework to allow feature mapping to arbitrarily high-dimensional feature vectors.
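A small sketch of a CP-decomposed parameter tensor under illustrative shapes: the full weight tensor is a sum of rank-1 outer products, so storage grows linearly rather than exponentially in the tensor order.

```python
import numpy as np

def cp_weight(factors):
    # Reconstruct a tensor from CP factors: sum of R rank-1 outer
    # products, one column per factor matrix and rank component.
    rank = factors[0].shape[1]
    w = np.zeros(tuple(f.shape[0] for f in factors))
    for r in range(rank):
        outer = factors[0][:, r]
        for f in factors[1:]:
            outer = np.multiply.outer(outer, f[:, r])
        w += outer
    return w

rng = np.random.default_rng(0)
factors = [rng.normal(size=(d, 3)) for d in (4, 5, 6)]  # rank-3 CP
print(cp_weight(factors).shape)  # (4, 5, 6): 120 entries from 45 parameters
```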
arXiv Detail & Related papers (2020-01-27T22:38:40Z)