Sequential Recommender via Time-aware Attentive Memory Network
- URL: http://arxiv.org/abs/2005.08598v2
- Date: Wed, 18 Nov 2020 05:39:13 GMT
- Title: Sequential Recommender via Time-aware Attentive Memory Network
- Authors: Wendi Ji, Keqiang Wang, Xiaoling Wang, TingWei Chen and Alexandra
Cristea
- Abstract summary: We propose a temporal gating methodology to improve attention mechanism and recurrent units.
We also propose a Multi-hop Time-aware Attentive Memory network to integrate long-term and short-term preferences.
Our approach is scalable for candidate retrieval tasks and can be viewed as a non-linear generalization of latent factorization for dot-product based Top-K recommendation.
- Score: 67.26862011527986
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recommendation systems aim to help users discover their most
preferred content from an ever-growing corpus of items. Although recommenders
have been greatly improved by deep learning, they still face several
challenges: (1) Behaviors are much more complex than words in sentences, so
traditional attentive and recurrent models may fail to capture the temporal
dynamics of user preferences. (2) Users hold multiple, evolving preferences, so
it is difficult to integrate long-term memory and short-term intent.
In this paper, we propose a temporal gating methodology to improve attention
mechanism and recurrent units, so that temporal information can be considered
in both information filtering and state transition. Additionally, we propose a
Multi-hop Time-aware Attentive Memory network (MTAM) to integrate long-term and
short-term preferences. We use the proposed time-aware GRU network to learn the
short-term intent and maintain prior records in user memory. We treat the
short-term intent as a query and design a multi-hop memory reading operation
via the proposed time-aware attention to generate user representation based on
the current intent and long-term memory. Our approach is scalable for candidate
retrieval tasks and can be viewed as a non-linear generalization of latent
factorization for dot-product based Top-K recommendation. Finally, we conduct
extensive experiments on six benchmark datasets and the experimental results
demonstrate the effectiveness of our MTAM and temporal gating methodology.
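As a rough illustration of the mechanism described above (not the authors' implementation: the exponential decay form of the temporal gate, the number of hops, and the additive query update are all assumptions made for this sketch), a time-aware multi-hop memory read could look like:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def time_aware_attention(query, memory, time_gaps, decay=0.1):
    """Dot-product attention over memory slots, gated by recency.

    query:     (d,)  short-term intent vector
    memory:    (n, d) stored user behavior records
    time_gaps: (n,)  elapsed time since each record
    """
    d = query.shape[0]
    scores = memory @ query / np.sqrt(d)
    # Temporal gate: older interactions (larger gap) are down-weighted.
    gate = np.exp(-decay * time_gaps)
    return softmax(scores * gate)

def multi_hop_read(query, memory, time_gaps, hops=3):
    """Refine the user representation over several memory-reading hops."""
    q = query
    for _ in range(hops):
        w = time_aware_attention(q, memory, time_gaps)
        read = w @ memory   # weighted sum of memory slots
        q = q + read        # fold the retrieved long-term memory into the query
    return q
```

The final vector `q` plays the role of the user representation: scoring candidates by a dot product against it keeps the approach compatible with standard Top-K retrieval indexes.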
Related papers
- Multi-granularity Interest Retrieval and Refinement Network for Long-Term User Behavior Modeling in CTR Prediction [68.90783662117936]
Click-through Rate (CTR) prediction is crucial for online personalization platforms.
Recent advancements have shown that modeling rich user behaviors can significantly improve the performance of CTR prediction.
We propose Multi-granularity Interest Retrieval and Refinement Network (MIRRN)
arXiv Detail & Related papers (2024-11-22T15:29:05Z)
- LongMemEval: Benchmarking Chat Assistants on Long-Term Interactive Memory [68.97819665784442]
This paper introduces LongMemEval, a benchmark designed to evaluate five core long-term memory abilities of chat assistants.
LongMemEval presents a significant challenge to existing long-term memory systems.
We present a unified framework that breaks down the long-term memory design into four design choices.
arXiv Detail & Related papers (2024-10-14T17:59:44Z)
- Sequential Search with Off-Policy Reinforcement Learning [48.88165680363482]
We propose a highly scalable hybrid learning model that consists of an RNN learning framework and an attention model.
As a novel optimization step, we fit multiple short user sequences in a single RNN pass within a training batch, by solving a greedy knapsack problem on the fly.
We also explore the use of off-policy reinforcement learning in multi-session personalized search ranking.
arXiv Detail & Related papers (2022-02-01T06:52:40Z)
- Denoising User-aware Memory Network for Recommendation [11.145186013006375]
We propose a novel CTR model named denoising user-aware memory network (DUMN)
DUMN uses the representation of explicit feedback to purify the representation of implicit feedback, effectively denoising the implicit feedback.
Experiments on two real e-commerce user behavior datasets show that DUMN has a significant improvement over the state-of-the-art baselines.
arXiv Detail & Related papers (2021-07-12T14:39:36Z)
- Temporal Memory Relation Network for Workflow Recognition from Surgical Video [53.20825496640025]
We propose a novel end-to-end temporal memory relation network (TMNet) for relating long-range and multi-scale temporal patterns.
We have extensively validated our approach on two benchmark surgical video datasets.
arXiv Detail & Related papers (2021-03-30T13:20:26Z)
- Context-aware short-term interest first model for session-based recommendation [0.0]
We propose a context-aware short-term interest first model (CASIF)
This paper aims to improve the accuracy of recommendations by combining context and short-term interest.
In the end, the short-term and long-term interest are combined as the final interest and multiplied by the candidate vector to obtain the recommendation probability.
arXiv Detail & Related papers (2021-03-29T11:36:00Z)
- TLSAN: Time-aware Long- and Short-term Attention Network for Next-item Recommendation [0.0]
We propose a new Time-aware Long- and Short-term Attention Network (TLSAN)
TLSAN learns user-specific temporal taste via trainable personalized time position embeddings with category-aware correlations in long-term behaviors.
Long- and short-term feature-wise attention layers are proposed to effectively capture users' long- and short-term preferences for accurate recommendation.
arXiv Detail & Related papers (2021-03-16T10:51:57Z)
- Dynamic Memory based Attention Network for Sequential Recommendation [79.5901228623551]
We propose a novel long sequential recommendation model called Dynamic Memory-based Attention Network (DMAN)
It segments the overall long behavior sequence into a series of sub-sequences, then trains the model while maintaining a set of memory blocks that preserve users' long-term interests.
Based on the dynamic memory, the user's short-term and long-term interests can be explicitly extracted and combined for efficient joint recommendation.
arXiv Detail & Related papers (2021-02-18T11:08:54Z)
- Modeling Long-Term and Short-Term Interests with Parallel Attentions for Session-based Recommendation [17.092823992007794]
Session-based recommenders typically explore the users' evolving interests.
Recent advances in attention mechanisms have led to state-of-the-art methods for solving this task.
We propose a novel Parallel Attention Network model (PAN) for Session-based Recommendation.
arXiv Detail & Related papers (2020-06-27T11:47:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.