Attention Mixtures for Time-Aware Sequential Recommendation
- URL: http://arxiv.org/abs/2304.08158v2
- Date: Mon, 3 Jul 2023 08:52:37 GMT
- Title: Attention Mixtures for Time-Aware Sequential Recommendation
- Authors: Viet-Anh Tran and Guillaume Salha-Galvan and Bruno Sguerra and Romain
Hennequin
- Abstract summary: Transformers have emerged as powerful methods for sequential recommendation.
We introduce MOJITO, an improved Transformer sequential recommender system.
We demonstrate the relevance of our approach by empirically outperforming existing Transformers for sequential recommendation on several real-world datasets.
- Score: 10.017195276758454
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Transformers have emerged as powerful methods for sequential recommendation.
However, existing architectures often overlook the complex dependencies between
user preferences and the temporal context. In this short paper, we introduce
MOJITO, an improved Transformer sequential recommender system that addresses
this limitation. MOJITO leverages Gaussian mixtures of attention-based temporal
context and item embedding representations for sequential modeling. This
approach makes it possible to accurately predict which items should be
recommended next to users, depending on their past actions and the temporal
context. We demonstrate the relevance of our approach by empirically
outperforming existing Transformers
for sequential recommendation on several real-world datasets.
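The abstract sketches the core mechanism: attention computed over item embeddings and over temporal-context embeddings, blended as a mixture. Below is a minimal NumPy sketch of that general idea, with a simple learned gate standing in for the paper's Gaussian mixture; the function and parameter names (`mixture_attention`, `w_mix`) are hypothetical illustrations, not MOJITO's actual code.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def mixture_attention(item_emb, time_emb, w_mix):
    """Blend an item-based and a temporal-context attention head.

    item_emb : (n, d) embeddings of the items in the user's history
    time_emb : (n, d) embeddings of the temporal context (hour, weekday, ...)
    w_mix    : (2,)   mixture logits (hypothetical stand-in for MOJITO's
                      Gaussian mixture weights)
    """
    n, d = item_emb.shape
    scale = 1.0 / np.sqrt(d)
    att_item = softmax(item_emb @ item_emb.T * scale)  # item-to-item attention
    att_time = softmax(time_emb @ time_emb.T * scale)  # context-to-context attention
    pi = softmax(w_mix)                                # gate between the two views
    att = pi[0] * att_item + pi[1] * att_time
    return att @ item_emb  # contextualized sequence representation

rng = np.random.default_rng(0)
out = mixture_attention(rng.normal(size=(5, 8)), rng.normal(size=(5, 8)),
                        np.array([0.3, -0.1]))
print(out.shape)  # (5, 8)
```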
Related papers
- Multi-Grained Preference Enhanced Transformer for Multi-Behavior Sequential Recommendation [29.97854124851886]
Sequential recommendation aims to predict the next item a user will purchase, according to dynamic preferences learned from historical user-item interactions.
Existing methods only model heterogeneous multi-behavior dependencies at the behavior or item level, and modeling interaction-level dependencies remains a challenge.
We propose a Multi-Grained Preference enhanced Transformer framework (M-GPT) to tackle this challenge.
arXiv Detail & Related papers (2024-11-19T02:45:17Z)
- Bidirectional Gated Mamba for Sequential Recommendation [56.85338055215429]
Mamba, a recent advancement, has exhibited exceptional performance in time series prediction.
We introduce a new framework named Selective Gated Mamba (SIGMA) for Sequential Recommendation.
Our results indicate that SIGMA outperforms current models on five real-world datasets.
arXiv Detail & Related papers (2024-08-21T09:12:59Z)
- PRformer: Pyramidal Recurrent Transformer for Multivariate Time Series Forecasting [82.03373838627606]
The self-attention mechanism in the Transformer architecture requires positional embeddings to encode temporal order in time series prediction.
We argue that this reliance on positional embeddings restricts the Transformer's ability to effectively represent temporal sequences.
We present a model integrating Pyramidal Recurrent Embeddings (PRE) with a standard Transformer encoder, demonstrating state-of-the-art performance on various real-world datasets.
arXiv Detail & Related papers (2024-08-20T01:56:07Z)
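PRformer's summary above claims that recurrent, multi-resolution features can replace positional embeddings. As a loose illustration of that idea only (the paper's PRE architecture differs in detail), here is a sketch that pools a series at several temporal resolutions and runs a small RNN over each; all names and the pooling scheme are assumptions.

```python
import numpy as np

def simple_rnn(x, w_in, w_rec):
    """Minimal tanh RNN; returns the hidden state at every time step."""
    h = np.zeros(w_rec.shape[0])
    states = []
    for t in range(x.shape[0]):
        h = np.tanh(x[t] @ w_in + h @ w_rec)
        states.append(h)
    return np.stack(states)

def pyramidal_recurrent_embedding(x, params, scales=(1, 2, 4)):
    """Recurrent summaries of the series at several temporal resolutions,
    concatenated per step and used in place of positional embeddings."""
    n = x.shape[0]
    feats = []
    for s, (w_in, w_rec) in zip(scales, params):
        # Average-pool with a causal window of length s (coarser temporal view).
        pooled = np.stack([x[max(0, t - s + 1):t + 1].mean(axis=0)
                           for t in range(n)])
        feats.append(simple_rnn(pooled, w_in, w_rec))
    return np.concatenate(feats, axis=-1)

rng = np.random.default_rng(0)
d, h = 4, 8
params = [(0.1 * rng.normal(size=(d, h)), 0.1 * rng.normal(size=(h, h)))
          for _ in range(3)]
emb = pyramidal_recurrent_embedding(rng.normal(size=(12, d)), params)
print(emb.shape)  # (12, 24)
```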
- Sequential Recommendation on Temporal Proximities with Contrastive Learning and Self-Attention [3.7182810519704095]
Sequential recommender systems identify user preferences from past interactions in order to predict subsequent items.
Recent models often neglect the implicit similarities between actions that different users take during analogous timeframes.
We propose a sequential recommendation model called TemProxRec, which includes contrastive learning and self-attention methods to consider temporal proximities.
arXiv Detail & Related papers (2024-02-15T08:33:16Z)
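TemProxRec's summary describes treating temporally close interactions as related. One plausible reading, shown below purely as an assumption-laden sketch (not the paper's actual loss), is an InfoNCE-style objective whose positives are pairs of interactions whose timestamps fall within a window.

```python
import numpy as np

def temporal_proximity_contrastive(z, t, window, tau=0.1):
    """InfoNCE-style loss with temporal-proximity positives (hypothetical).

    z : (n, d) L2-normalized interaction representations
    t : (n,)   interaction timestamps
    """
    sim = z @ z.T / tau
    np.fill_diagonal(sim, -np.inf)                  # never contrast with self
    pos = np.abs(t[:, None] - t[None, :]) <= window
    np.fill_diagonal(pos, False)
    losses = []
    for i in range(len(z)):
        if not pos[i].any():
            continue                                # no temporally close pair
        m = sim[i][np.isfinite(sim[i])].max()       # stable log-sum-exp
        log_den = m + np.log(np.exp(sim[i] - m).sum())
        losses.append(-(sim[i][pos[i]] - log_den).mean())
    return float(np.mean(losses))

rng = np.random.default_rng(0)
z = rng.normal(size=(6, 4))
z /= np.linalg.norm(z, axis=1, keepdims=True)
print(temporal_proximity_contrastive(z, np.array([0, 5, 7, 50, 52, 90]),
                                     window=10))
```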
- MISSRec: Pre-training and Transferring Multi-modal Interest-aware Sequence Representation for Recommendation [61.45986275328629]
We propose MISSRec, a multi-modal pre-training and transfer learning framework for sequential recommendation.
On the user side, we design a Transformer-based encoder-decoder model, where the contextual encoder learns to capture the sequence-level multi-modal user interests.
On the candidate item side, we adopt a dynamic fusion module to produce user-adaptive item representations.
arXiv Detail & Related papers (2023-08-22T04:06:56Z)
- SimpleTron: Eliminating Softmax from Attention Computation [68.8204255655161]
We argue that the pairwise dot-product matching in the attention layer is redundant for model performance.
We present a simple and fast alternative without any approximation that, to the best of our knowledge, outperforms existing attention approximations on several tasks from the Long-Range Arena benchmark.
arXiv Detail & Related papers (2021-11-23T17:06:01Z)
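The key observation behind softmax-free attention is that removing the row-wise softmax leaves a purely linear computation, which can be regrouped to run in time linear in sequence length. A minimal sketch of that general idea follows; SimpleTron's exact normalization and architecture may differ, and causal masking is omitted here.

```python
import numpy as np

def softmax_free_attention(q, k, v):
    """Attention with the softmax removed (bidirectional, no masking).

    Without the softmax nonlinearity, (q @ k.T) @ v can be regrouped as
    q @ (k.T @ v): O(n * d^2) instead of O(n^2 * d) in sequence length n.
    """
    n = q.shape[0]
    kv = k.T @ v / n           # (d, d) summary of keys against values
    return q @ kv              # (n, d)

rng = np.random.default_rng(0)
q, k, v = (rng.normal(size=(16, 8)) for _ in range(3))
full = (q @ k.T) @ v / 16      # quadratic grouping, same result
assert np.allclose(softmax_free_attention(q, k, v), full)
```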
- Continuous-Time Sequential Recommendation with Temporal Graph Collaborative Transformer [69.0621959845251]
We propose a new framework, Temporal Graph Sequential Recommender (TGSRec), built upon a continuous-time bipartite graph.
Its Temporal Collaborative Transformer (TCT) layer simultaneously captures collaborative signals from both users and items while modeling the temporal dynamics inside sequential patterns.
Empirical results on five datasets show that TGSRec significantly outperforms other baselines.
arXiv Detail & Related papers (2021-08-14T22:50:53Z)
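Continuous-time models of this kind typically turn raw timestamps into periodic features and let attention act on item and time features jointly. The sketch below illustrates that pattern under stated assumptions: a Bochner-style cosine time encoding and a single attention step; the names (`time_encoding`, `temporal_collaborative_attention`) are hypothetical and TGSRec's actual TCT layer is more involved.

```python
import numpy as np

def time_encoding(t, omega, phi):
    """Cosine features of raw timestamps, cos(t * omega + phi).
    A common continuous-time encoding; TGSRec's parameterization may differ."""
    return np.cos(np.outer(t, omega) + phi)

def temporal_collaborative_attention(user_vec, item_embs, t, omega, phi):
    """Attend over history items using item embeddings and time features
    jointly, so the result reflects collaborative and temporal signals.

    user_vec must have dimension d + d_t to match the concatenated keys.
    """
    te = time_encoding(t, omega, phi)                  # (n, d_t)
    keys = np.concatenate([item_embs, te], axis=-1)    # (n, d + d_t)
    scores = keys @ user_vec / np.sqrt(keys.shape[1])
    w = np.exp(scores - scores.max())                  # stable softmax
    w /= w.sum()
    return w @ keys                                    # (d + d_t,)

rng = np.random.default_rng(0)
d, d_t, n = 8, 4, 10
rep = temporal_collaborative_attention(
    rng.normal(size=d + d_t), rng.normal(size=(n, d)),
    rng.uniform(0, 1e6, size=n), rng.normal(size=d_t), rng.normal(size=d_t))
print(rep.shape)  # (12,)
```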
- Contrastive Self-supervised Sequential Recommendation with Robust Augmentation [101.25762166231904]
Sequential recommendation describes a set of techniques to model dynamic user behavior in order to predict future interactions in sequential user data.
Old and new issues remain, including data sparsity and noisy data.
We propose Contrastive Self-Supervised Learning for Sequential Recommendation (CoSeRec).
arXiv Detail & Related papers (2021-08-14T07:15:25Z)
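CoSeRec contrasts augmented views of the same interaction sequence. The sketch below shows three typical sequence augmentations under assumptions of ours: the 'substitute' operation draws on an item co-occurrence matrix as a stand-in for the paper's correlation-informed augmentation, and all names are illustrative.

```python
import numpy as np

def augment_sequence(seq, corr, rng, op=None):
    """Produce one augmented view of an item-id sequence.

    seq  : non-empty list of item ids
    corr : (num_items, num_items) co-occurrence matrix (assumed correlation
           source; diagonal is each item's self-count and is the row max)
    """
    seq = list(seq)
    op = op or rng.choice(["crop", "mask", "substitute"])
    i = int(rng.integers(len(seq)))
    if op == "crop":                        # keep a contiguous sub-sequence
        j = int(rng.integers(i, len(seq))) + 1
        return seq[i:j]
    if op == "mask":                        # hide one interaction (0 = [MASK])
        seq[i] = 0
        return seq
    # substitute: swap seq[i] for its most co-occurring other item
    seq[i] = int(np.argsort(corr[seq[i]])[-2])   # [-1] is the item itself
    return seq

rng = np.random.default_rng(0)
corr = rng.random((10, 10)) + np.eye(10) * 10   # self is the row max
view1 = augment_sequence([3, 1, 4, 1, 5], corr, rng)
view2 = augment_sequence([3, 1, 4, 1, 5], corr, rng)
print(view1, view2)  # two views to contrast against each other
```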
- MEANTIME: Mixture of Attention Mechanisms with Multi-temporal Embeddings for Sequential Recommendation [12.386304516106854]
Self-attention based models have achieved state-of-the-art performance on the sequential recommendation task.
These models rely on a simple positional embedding to exploit the sequential nature of the user's history.
We propose MEANTIME which employs multiple types of temporal embeddings designed to capture various patterns from the user's behavior sequence.
arXiv Detail & Related papers (2020-08-19T05:32:14Z)
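MEANTIME's summary says that different attention heads receive different temporal embeddings. As a hedged sketch of what "multiple types of temporal embeddings" can look like (the paper's concrete embedding set differs), here are three views of one timestamp sequence, each of which could drive its own attention head; the day-of-week table below is a random stand-in for a learned one.

```python
import numpy as np

def sinusoidal(x, d):
    """Sinusoidal features of a scalar sequence (position, time, or gap)."""
    freq = 1.0 / (10000 ** (2 * np.arange(d // 2) / d))
    ang = np.outer(x, freq)
    return np.concatenate([np.sin(ang), np.cos(ang)], axis=-1)

def multi_temporal_embeddings(ts, d):
    """Several temporal views of one timestamp sequence (seconds assumed)."""
    day = ((ts // 86400) % 7).astype(int)                  # day-of-week index
    table = np.random.default_rng(0).normal(size=(7, d))   # stand-in for a learned table
    return {
        "absolute": sinusoidal(ts, d),                          # raw timestamps
        "relative": sinusoidal(np.diff(ts, prepend=ts[0]), d),  # time gaps
        "dayofweek": table[day],                                # periodic context
    }

ts = np.array([0.0, 3600, 90000, 95000, 180000])
views = multi_temporal_embeddings(ts, 8)
print({k: v.shape for k, v in views.items()})  # each view: (5, 8)
```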