Long Short-Term Preference Modeling for Continuous-Time Sequential
Recommendation
- URL: http://arxiv.org/abs/2208.00593v1
- Date: Mon, 1 Aug 2022 03:44:55 GMT
- Title: Long Short-Term Preference Modeling for Continuous-Time Sequential
Recommendation
- Authors: Huixuan Chi, Hao Xu, Hao Fu, Mengya Liu, Mengdi Zhang, Yuji Yang,
Qinfen Hao, Wei Wu
- Abstract summary: In real-world scenarios, a user's short-term preference evolves dynamically over time.
We propose Long Short-Term Preference Modeling for Continuous-Time Sequential Recommendation.
Our memory mechanism can not only store one-hop information, but can also be triggered by new interactions online.
- Score: 9.965917701831746
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Modeling the evolution of user preference is essential in recommender
systems. Recently, dynamic graph-based methods have been studied and have achieved
state-of-the-art results for recommendation, the majority of which focus on a user's
stable long-term preference. However, in real-world scenarios, a user's short-term
preference evolves dynamically over time. Although there exist sequential methods
that attempt to capture it, how to model the evolution of short-term preference with
dynamic graph-based methods has not yet been well addressed. In particular: 1)
existing methods do not explicitly encode and capture the evolution of short-term
preference as sequential methods do; 2) simply using the last few interactions is not
enough for modeling the changing trend. In this paper, we propose Long Short-Term
Preference Modeling for Continuous-Time Sequential Recommendation (LSTSR) to capture
the evolution of short-term preference under a dynamic graph. Specifically, we
explicitly encode short-term preference and optimize it via a memory mechanism, which
has three key operations: Message, Aggregate and Update. Our memory mechanism can not
only store one-hop information, but can also be triggered by new interactions online.
Extensive experiments conducted on five public datasets show that LSTSR consistently
outperforms many state-of-the-art recommendation methods across different lines of
baselines.
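As a reading aid, the sketch below illustrates one plausible shape of the Message/Aggregate/Update memory described in the abstract. It is a minimal sketch, not the authors' released implementation: the class, parameter names, mean-pooling aggregator, and decay-style update are all assumptions made for illustration.

```python
import numpy as np

class ShortTermMemory:
    """Illustrative Message/Aggregate/Update memory (assumed names and shapes)."""

    def __init__(self, num_nodes, msg_dim, mem_dim, decay=0.1, seed=0):
        self.memory = np.zeros((num_nodes, mem_dim))  # one memory slot per user/item node
        self.last_time = np.zeros(num_nodes)          # time of each node's last update
        self.decay = decay
        # Random projection standing in for a learned update function.
        self.W = np.random.default_rng(seed).normal(size=(mem_dim, msg_dim))

    def message(self, src, dst, edge_feat, t):
        # Message: raw message built from the interacting pair, the edge
        # features, and the time elapsed since the source was last updated.
        delta_t = t - self.last_time[src]
        return np.concatenate([self.memory[src], self.memory[dst], edge_feat, [delta_t]])

    def aggregate(self, messages):
        # Aggregate: combine messages arriving for the same node in a batch
        # (mean pooling here; other operators are possible).
        return np.mean(messages, axis=0)

    def update(self, node, agg_msg, t):
        # Update: fold the aggregated message into the node's memory online,
        # so only nodes touched by new interactions are refreshed (one-hop).
        new_state = np.tanh(self.W @ agg_msg)
        self.memory[node] = (1 - self.decay) * self.memory[node] + self.decay * new_state
        self.last_time[node] = t
```

Because updates are triggered per interaction, such a memory stores one-hop information and can be refreshed online as new events arrive, which matches the behavior claimed in the abstract.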
Related papers
- Discrete-event Tensor Factorization: Learning a Smooth Embedding for Continuous Domains [0.0]
This paper analyzes how time can be encoded in factorization-style recommendation models. By including absolute time as a feature, our models can learn varying user preferences and changing item perception over time.
arXiv Detail & Related papers (2025-08-06T08:54:57Z)
- Bidirectional Gated Mamba for Sequential Recommendation [56.85338055215429]
Mamba, a recent advancement, has exhibited exceptional performance in time series prediction.
We introduce a new framework named Selective Gated Mamba (SIGMA) for Sequential Recommendation.
Our results indicate that SIGMA outperforms current models on five real-world datasets.
arXiv Detail & Related papers (2024-08-21T09:12:59Z)
- MaTrRec: Uniting Mamba and Transformer for Sequential Recommendation [6.74321828540424]
Sequential recommendation systems aim to provide personalized recommendations by analyzing dynamic preferences and dependencies within user behavior sequences.
Inspired by Mamba, a representative State Space Model (SSM), we find that Mamba's recommendation effectiveness is limited in short interaction sequences.
We propose a new model, MaTrRec, which combines the strengths of Mamba and Transformer.
arXiv Detail & Related papers (2024-07-27T12:07:46Z)
- SelfGNN: Self-Supervised Graph Neural Networks for Sequential Recommendation [15.977789295203976]
We propose a novel framework called Self-Supervised Graph Neural Network (SelfGNN) for sequential recommendation.
The SelfGNN framework encodes short-term graphs based on time intervals and utilizes Graph Neural Networks (GNNs) to learn short-term collaborative relationships.
Our personalized self-augmented learning structure enhances model robustness by mitigating noise in short-term graphs based on long-term user interests and personal stability.
arXiv Detail & Related papers (2024-05-31T14:53:12Z)
- Graph Based Long-Term And Short-Term Interest Model for Click-Through Rate Prediction [8.679270588565398]
We propose a Graph based Long-term and Short-term interest Model, termed GLSM.
It consists of a multi-interest graph structure for capturing long-term user behavior, a multi-scenario heterogeneous sequence model for modeling short-term information, and an adaptive fusion mechanism to fuse information from long-term and short-term behaviors (a minimal sketch of such a gated fusion appears after this list).
arXiv Detail & Related papers (2023-06-05T07:04:34Z)
- Multi-Behavior Sequential Recommendation with Temporal Graph Transformer [66.10169268762014]
We tackle dynamic user-item relation learning with awareness of multi-behavior interactive patterns.
We propose a new Temporal Graph Transformer (TGT) recommendation framework to jointly capture dynamic short-term and long-range user-item interactive patterns.
arXiv Detail & Related papers (2022-06-06T15:42:54Z)
- Modeling Dynamic User Preference via Dictionary Learning for Sequential Recommendation [133.8758914874593]
Capturing the dynamics in user preference is crucial to better predict user future behaviors because user preferences often drift over time.
Many existing recommendation algorithms -- including both shallow and deep ones -- often model such dynamics independently.
This paper considers the problem of embedding a user's sequential behavior into the latent space of user preferences.
arXiv Detail & Related papers (2022-04-02T03:23:46Z)
- PEN4Rec: Preference Evolution Networks for Session-based Recommendation [10.37267170480306]
Session-based recommendation aims to predict the user's next action based on historical behaviors in an anonymous session.
For better recommendations, it is vital to capture user preferences as well as their dynamics.
We propose novel Preference Evolution Networks for session-based Recommendation (PEN4Rec) to model the preference evolving process.
arXiv Detail & Related papers (2021-06-17T08:18:52Z)
- Dynamic Memory based Attention Network for Sequential Recommendation [79.5901228623551]
We propose a novel long sequential recommendation model called Dynamic Memory-based Attention Network (DMAN).
It segments the overall long behavior sequence into a series of sub-sequences, then trains the model and maintains a set of memory blocks to preserve the long-term interests of users.
Based on the dynamic memory, the user's short-term and long-term interests can be explicitly extracted and combined for efficient joint recommendation.
arXiv Detail & Related papers (2021-02-18T11:08:54Z)
- MEANTIME: Mixture of Attention Mechanisms with Multi-temporal Embeddings for Sequential Recommendation [12.386304516106854]
Self-attention based models have achieved state-of-the-art performance in the sequential recommendation task.
These models rely on a simple positional embedding to exploit the sequential nature of the user's history.
We propose MEANTIME which employs multiple types of temporal embeddings designed to capture various patterns from the user's behavior sequence.
arXiv Detail & Related papers (2020-08-19T05:32:14Z)
- S^3-Rec: Self-Supervised Learning for Sequential Recommendation with Mutual Information Maximization [104.87483578308526]
We propose the model S3-Rec, which stands for Self-Supervised learning for Sequential Recommendation.
For our task, we devise four auxiliary self-supervised objectives to learn the correlations among attribute, item, subsequence, and sequence.
Extensive experiments conducted on six real-world datasets demonstrate the superiority of our proposed method over existing state-of-the-art methods.
arXiv Detail & Related papers (2020-08-18T11:44:10Z)
- Sequential Recommender via Time-aware Attentive Memory Network [67.26862011527986]
We propose a temporal gating methodology to improve the attention mechanism and recurrent units.
We also propose a Multi-hop Time-aware Attentive Memory network to integrate long-term and short-term preferences.
Our approach is scalable for candidate retrieval tasks and can be viewed as a non-linear generalization of latent factorization for dot-product based Top-K recommendation.
arXiv Detail & Related papers (2020-05-18T11:29:38Z)
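Several of the papers above (e.g., GLSM, DMAN, and the time-aware attentive memory network) fuse a long-term and a short-term user representation. As a rough illustration only, the snippet below shows one common way to do this with a learned sigmoid gate; the function and parameter names are hypothetical, and the random values stand in for trained weights.

```python
import numpy as np

def gated_fusion(long_term, short_term, W_g, b_g):
    """Blend long- and short-term preference vectors with a sigmoid gate."""
    gate = 1.0 / (1.0 + np.exp(-(W_g @ np.concatenate([long_term, short_term]) + b_g)))
    return gate * long_term + (1.0 - gate) * short_term

# Toy usage: random vectors/parameters stand in for learned ones.
rng = np.random.default_rng(0)
dim = 8
u_long, u_short = rng.normal(size=dim), rng.normal(size=dim)
W_g, b_g = rng.normal(size=(dim, 2 * dim)), rng.normal(size=dim)
fused = gated_fusion(u_long, u_short, W_g, b_g)
```

When the gate saturates toward 1, the recommendation leans on stable long-term interests; toward 0, it follows the most recent behavior, which is the adaptive trade-off these models aim to learn.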
This list is automatically generated from the titles and abstracts of the papers on this site.