Dynamic Memory based Attention Network for Sequential Recommendation
- URL: http://arxiv.org/abs/2102.09269v1
- Date: Thu, 18 Feb 2021 11:08:54 GMT
- Title: Dynamic Memory based Attention Network for Sequential Recommendation
- Authors: Qiaoyu Tan, Jianwei Zhang, Ninghao Liu, Xiao Huang, Hongxia Yang,
Jingren Zhou, Xia Hu
- Abstract summary: We propose a novel long sequential recommendation model called Dynamic Memory-based Attention Network (DMAN).
It segments the overall long behavior sequence into a series of sub-sequences, then trains the model on these segments while maintaining a set of memory blocks to preserve users' long-term interests.
Based on the dynamic memory, the user's short-term and long-term interests can be explicitly extracted and combined for efficient joint recommendation.
- Score: 79.5901228623551
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Sequential recommendation has become increasingly essential in various online
services. It aims to model the dynamic preferences of users from their
historical interactions and predict their next items. The accumulated user
behavior records on real systems could be very long. This rich data brings
opportunities to track the actual interests of users. Prior efforts mainly
focus on making recommendations based on relatively recent behaviors. However,
this leaves the overall sequential data underutilized, as early interactions
might also affect users' current choices. Also, it has become intolerable to
scan the entire behavior sequence when performing inference for each user,
since real-world systems require short response times. To bridge the gap, we
propose a
novel long sequential recommendation model, called Dynamic Memory-based
Attention Network (DMAN). It segments the overall long behavior sequence into a
series of sub-sequences, then trains the model and maintains a set of memory
blocks to preserve long-term interests of users. To improve memory fidelity,
DMAN dynamically abstracts each user's long-term interest into its own memory
blocks by minimizing an auxiliary reconstruction loss. Based on the dynamic
memory, the user's short-term and long-term interests can be explicitly
extracted and combined for efficient joint recommendation. Empirical results
over four benchmark datasets demonstrate the superiority of our model in
capturing long-term dependency over various state-of-the-art sequential models.
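The mechanism the abstract describes (segmenting a long behavior sequence into sub-sequences, attending each sub-sequence into a fixed set of memory blocks, and scoring memory fidelity with an auxiliary reconstruction loss) can be sketched as follows. This is an illustrative sketch only, not the authors' implementation: the function names, the simple 50/50 blend update, and the mean-squared-error reconstruction loss are all hypothetical simplifications.

```python
import numpy as np

def segment_sequence(seq, window):
    """Split a long behavior sequence into fixed-size sub-sequences."""
    return [seq[i:i + window] for i in range(0, len(seq), window)]

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def update_memory(memory, sub_embeds):
    """Attention-based write: each of the K memory blocks attends over the
    current sub-sequence's item embeddings and blends the attended read
    into its state (a stand-in for the paper's dynamic abstraction)."""
    # memory: (K, d), sub_embeds: (T, d)
    scores = softmax(memory @ sub_embeds.T, axis=-1)  # (K, T) attention weights
    read = scores @ sub_embeds                         # (K, d) attended summary
    return 0.5 * memory + 0.5 * read                   # simple fixed-gate blend

def reconstruction_loss(memory, sub_embeds):
    """Auxiliary loss: how well the memory can reconstruct the sub-sequence
    it just absorbed; minimizing this keeps the memory faithful."""
    attn = softmax(sub_embeds @ memory.T, axis=-1)     # (T, K)
    recon = attn @ memory                              # (T, d) reconstruction
    return float(np.mean((recon - sub_embeds) ** 2))
```

At inference, the short-term interest would come from attending over the most recent sub-sequence, while the long-term interest is read from the (much smaller) memory, so the full history never needs to be rescanned.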
Related papers
- Multi-granularity Interest Retrieval and Refinement Network for Long-Term User Behavior Modeling in CTR Prediction [68.90783662117936]
Click-through Rate (CTR) prediction is crucial for online personalization platforms.
Recent advancements have shown that modeling rich user behaviors can significantly improve the performance of CTR prediction.
We propose the Multi-granularity Interest Retrieval and Refinement Network (MIRRN).
arXiv Detail & Related papers (2024-11-22T15:29:05Z)
- Sparse Attentive Memory Network for Click-through Rate Prediction with Long Sequences [10.233015715433602]
We propose a Sparse Attentive Memory network for long sequential user behavior modeling.
SAM supports efficient training and real-time inference for user behavior sequences with lengths on the scale of thousands.
SAM is successfully deployed on one of the largest international E-commerce platforms.
arXiv Detail & Related papers (2022-08-08T10:11:46Z)
- Multi-Behavior Sequential Recommendation with Temporal Graph Transformer [66.10169268762014]
We tackle dynamic user-item relation learning with awareness of multi-behavior interactive patterns.
We propose a new Temporal Graph Transformer (TGT) recommendation framework to jointly capture dynamic short-term and long-range user-item interactive patterns.
arXiv Detail & Related papers (2022-06-06T15:42:54Z)
- Learning Self-Modulating Attention in Continuous Time Space with Applications to Sequential Recommendation [102.24108167002252]
We propose a novel attention network, named self-modulating attention, that models the complex and non-linearly evolving dynamic user preferences.
We empirically demonstrate the effectiveness of our method on top-N sequential recommendation tasks, and the results on three large-scale real-world datasets show that our model can achieve state-of-the-art performance.
arXiv Detail & Related papers (2022-03-30T03:54:11Z)
- Sequential Search with Off-Policy Reinforcement Learning [48.88165680363482]
We propose a highly scalable hybrid learning model that consists of an RNN learning framework and an attention model.
As a novel optimization step, we fit multiple short user sequences in a single RNN pass within a training batch, by solving a greedy knapsack problem on the fly.
We also explore the use of off-policy reinforcement learning in multi-session personalized search ranking.
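The packing step described above, fitting multiple short user sequences into a single RNN pass by solving a greedy knapsack problem on the fly, can be sketched as a first-fit-decreasing heuristic. This is a hypothetical illustration of the general idea, not the paper's actual algorithm; it assumes every sequence fits within the batch slot capacity.

```python
def pack_sequences(lengths, capacity):
    """Greedily pack user sequences (given by length) into batch slots of a
    fixed capacity, first-fit decreasing: longest sequences are placed first,
    each into the first slot with enough remaining room, so several short
    sequences share one RNN pass instead of padding each to full length.
    Assumes every length <= capacity."""
    slots = []  # each slot: [remaining_capacity, [sequence indices]]
    for i in sorted(range(len(lengths)), key=lambda i: -lengths[i]):
        for slot in slots:
            if lengths[i] <= slot[0]:
                slot[0] -= lengths[i]
                slot[1].append(i)
                break
        else:  # no existing slot fits: open a new one
            slots.append([capacity - lengths[i], [i]])
    return [indices for _, indices in slots]
```

For example, sequences of lengths [5, 3, 4, 2, 2] with capacity 8 pack into two slots instead of five padded rows, which is the source of the training speedup.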
arXiv Detail & Related papers (2022-02-01T06:52:40Z)
- Denoising User-aware Memory Network for Recommendation [11.145186013006375]
We propose a novel CTR model named denoising user-aware memory network (DUMN).
DUMN uses the representation of explicit feedback to purify the representation of implicit feedback, effectively denoising the implicit feedback.
Experiments on two real e-commerce user behavior datasets show that DUMN has a significant improvement over the state-of-the-art baselines.
arXiv Detail & Related papers (2021-07-12T14:39:36Z)
- Dynamic Embeddings for Interaction Prediction [2.5758502140236024]
In recommender systems (RSs), predicting the next item that a user interacts with is critical for user retention.
Recent studies have shown the effectiveness of modeling the mutual interactions between users and items using separate user and item embeddings.
We propose a novel method called DeePRed that addresses some of their limitations.
arXiv Detail & Related papers (2020-11-10T16:04:46Z)
- Sequential recommendation with metric models based on frequent sequences [0.688204255655161]
We propose to use frequent sequences to identify the most relevant part of the user history for the recommendation.
The most salient items are then used in a unified metric model that embeds items based on user preferences and sequential dynamics.
arXiv Detail & Related papers (2020-08-12T22:08:04Z)
- Sequential Recommender via Time-aware Attentive Memory Network [67.26862011527986]
We propose a temporal gating methodology to improve attention mechanism and recurrent units.
We also propose a Multi-hop Time-aware Attentive Memory network to integrate long-term and short-term preferences.
Our approach is scalable for candidate retrieval tasks and can be viewed as a non-linear generalization of latent factorization for dot-product based Top-K recommendation.
arXiv Detail & Related papers (2020-05-18T11:29:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences arising from its use.