TLSAN: Time-aware Long- and Short-term Attention Network for Next-item
Recommendation
- URL: http://arxiv.org/abs/2103.08971v1
- Date: Tue, 16 Mar 2021 10:51:57 GMT
- Title: TLSAN: Time-aware Long- and Short-term Attention Network for Next-item
Recommendation
- Authors: Jianqing Zhang (1), Dongjing Wang (1), Dongjin Yu (1) ((1) School of
Computer Science and Technology, Hangzhou Dianzi University, China)
- Abstract summary: We propose a new Time-aware Long- and Short-term Attention Network (TLSAN)
TLSAN learns user-specific temporal taste via trainable personalized time position embeddings with category-aware correlations in long-term behaviors.
Long- and short-term feature-wise attention layers are proposed to effectively capture users' long- and short-term preferences for accurate recommendation.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Recently, deep neural networks have been widely applied in recommender
systems for their effectiveness in capturing/modeling users' preferences. In particular, the
attention mechanism in deep learning enables recommender systems to incorporate
various features in an adaptive way. Specifically, as for the next item
recommendation task, we have the following three observations: 1) users'
sequential behavior records aggregate at time positions ("time-aggregation"),
2) users have personalized taste that is related to the "time-aggregation"
phenomenon ("personalized time-aggregation"), and 3) users' short-term
interests play an important role in the next item prediction/recommendation. In
this paper, we propose a new Time-aware Long- and Short-term Attention Network
(TLSAN) to address those observations mentioned above. Specifically, TLSAN
consists of two main components. Firstly, TLSAN models "personalized
time-aggregation" and learns user-specific temporal taste via trainable
personalized time position embeddings with category-aware correlations in
long-term behaviors. Secondly, long- and short-term feature-wise attention
layers are proposed to effectively capture users' long- and short-term
preferences for accurate recommendation. In particular, the attention mechanism
enables TLSAN to utilize users' preferences in an adaptive way, and its usage
in long- and short-term layers enhances TLSAN's ability to deal with sparse
interaction data. Extensive experiments are conducted on Amazon datasets from
different fields (and of different sizes), and the results show that TLSAN
outperforms state-of-the-art baselines in both capturing users' preferences and
performing time-sensitive next-item recommendation.
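As a rough illustration of the mechanism the abstract describes, the following sketch injects per-user time-position embeddings into the keys of a dot-product attention over long-term behaviors. All names, the bucketing scheme, and the plain softmax attention form are assumptions for illustration, not the paper's actual architecture:

```python
import numpy as np

def time_aware_attention(behaviors, query, time_pos_emb, time_buckets):
    """Hypothetical sketch: attention over a user's long-term behaviors,
    with each behavior's key adjusted by a time-position embedding
    (a fixed lookup here standing in for trainable parameters)."""
    # behaviors: (n, d) item embeddings; query: (d,) target-item embedding
    # time_buckets: (n,) bucket index of each behavior's timestamp
    # time_pos_emb: (n_buckets, d) per-user time-position embeddings
    keys = behaviors + time_pos_emb[time_buckets]   # inject temporal signal
    scores = keys @ query / np.sqrt(len(query))     # scaled dot-product
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                        # softmax over behaviors
    return weights @ behaviors                      # long-term preference vector
```

In TLSAN the time-position embeddings are trainable, personalized, and category-aware; here they are plain lookups to keep the example self-contained.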
Related papers
- SA-LSPL: Sequence-Aware Long- and Short-Term Preference Learning for next POI recommendation [19.40796508546581]
Point of Interest (POI) recommendation aims to recommend the POI for users at a specific time.
We propose a novel approach called Sequence-Aware Long- and Short-Term Preference Learning (SA-LSPL) for next-POI recommendation.
arXiv Detail & Related papers (2024-03-30T13:40:25Z)
- Multi-Behavior Sequential Recommendation with Temporal Graph Transformer [66.10169268762014]
We tackle the dynamic user-item relation learning with the awareness of multi-behavior interactive patterns.
We propose a new Temporal Graph Transformer (TGT) recommendation framework to jointly capture dynamic short-term and long-range user-item interactive patterns.
arXiv Detail & Related papers (2022-06-06T15:42:54Z)
- Sequential Search with Off-Policy Reinforcement Learning [48.88165680363482]
We propose a highly scalable hybrid learning model that consists of an RNN learning framework and an attention model.
As a novel optimization step, we fit multiple short user sequences in a single RNN pass within a training batch, by solving a greedy knapsack problem on the fly.
We also explore the use of off-policy reinforcement learning in multi-session personalized search ranking.
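The on-the-fly knapsack step mentioned above can be approximated by a simple greedy bin-packing sketch. The function name, longest-first ordering, and first-fit placement are assumptions; the paper's exact objective and tie-breaking may differ:

```python
def pack_sequences(lengths, capacity):
    """Greedy sketch of packing short user sequences into fixed-capacity
    RNN passes: sort sequences longest-first, place each into the first
    pass (bin) with enough remaining room, else open a new pass."""
    bins = []  # each bin: [remaining_capacity, [sequence indices]]
    for idx in sorted(range(len(lengths)), key=lambda i: -lengths[i]):
        for b in bins:
            if lengths[idx] <= b[0]:   # first bin with room
                b[0] -= lengths[idx]
                b[1].append(idx)
                break
        else:                          # no bin fits: open a new one
            bins.append([capacity - lengths[idx], [idx]])
    return [b[1] for b in bins]
```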
arXiv Detail & Related papers (2022-02-01T06:52:40Z)
- Learning Dual Dynamic Representations on Time-Sliced User-Item Interaction Graphs for Sequential Recommendation [62.30552176649873]
We devise a novel Dynamic Representation Learning model for Sequential Recommendation (DRL-SRe)
To better model the user-item interactions for characterizing the dynamics from both sides, the proposed model builds a global user-item interaction graph for each time slice.
To enable the model to capture fine-grained temporal information, we propose an auxiliary temporal prediction task over consecutive time slices.
arXiv Detail & Related papers (2021-09-24T07:44:27Z)
- From Implicit to Explicit feedback: A deep neural network for modeling sequential behaviours and long-short term preferences of online users [3.464871689508835]
Implicit and explicit feedback play different roles in producing useful recommendations.
We start from the hypothesis that a user's preference at a given time is a combination of long-term and short-term interests.
arXiv Detail & Related papers (2021-07-26T16:59:20Z)
- Context-aware short-term interest first model for session-based recommendation [0.0]
We propose a context-aware short-term interest first model (CASIF)
The aim of this paper is to improve the accuracy of recommendations by combining context and short-term interest.
In the end, the short-term and long-term interest are combined as the final interest and multiplied by the candidate vector to obtain the recommendation probability.
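The final step described above can be sketched directly. The convex mix and the function name are assumptions (CASIF's actual combination is learned), but the score-then-sigmoid structure follows the summary:

```python
import numpy as np

def recommend_prob(short_interest, long_interest, candidate, alpha=0.5):
    """Sketch: combine short- and long-term interest vectors into a final
    interest, then score a candidate item by dot product + sigmoid."""
    final_interest = alpha * short_interest + (1 - alpha) * long_interest
    score = final_interest @ candidate          # multiply by candidate vector
    return 1.0 / (1.0 + np.exp(-score))         # recommendation probability
```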
arXiv Detail & Related papers (2021-03-29T11:36:00Z)
- Dynamic Memory based Attention Network for Sequential Recommendation [79.5901228623551]
We propose a novel long sequential recommendation model called Dynamic Memory-based Attention Network (DMAN)
It segments the overall long behavior sequence into a series of sub-sequences, then trains the model and maintains a set of memory blocks to preserve long-term interests of users.
Based on the dynamic memory, the user's short-term and long-term interests can be explicitly extracted and combined for efficient joint recommendation.
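The segmentation step above reduces to chunking a long behavior sequence; a minimal sketch (fixed block size is an assumption, and DMAN's learned memory update is not modeled here):

```python
def segment_sequence(behaviors, block_size):
    """Sketch: split one long behavior sequence into fixed-size
    sub-sequences, one per memory block."""
    return [behaviors[i:i + block_size]
            for i in range(0, len(behaviors), block_size)]
```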
arXiv Detail & Related papers (2021-02-18T11:08:54Z)
- Multi-Interactive Attention Network for Fine-grained Feature Learning in CTR Prediction [48.267995749975476]
In the Click-Through Rate (CTR) prediction scenario, user's sequential behaviors are well utilized to capture the user interest.
Existing methods mostly utilize attention on the behavior of users, which is not always suitable for CTR prediction.
We propose a Multi-Interactive Attention Network (MIAN) to comprehensively extract the latent relationship among all kinds of fine-grained features.
arXiv Detail & Related papers (2020-12-13T05:46:19Z)
- Modeling Long-Term and Short-Term Interests with Parallel Attentions for Session-based Recommendation [17.092823992007794]
Session-based recommenders typically explore the users' evolving interests.
Recent advances in attention mechanisms have led to state-of-the-art methods for solving this task.
We propose a novel Parallel Attention Network model (PAN) for Session-based Recommendation.
arXiv Detail & Related papers (2020-06-27T11:47:51Z)
- Sequential Recommender via Time-aware Attentive Memory Network [67.26862011527986]
We propose a temporal gating methodology to improve attention mechanism and recurrent units.
We also propose a Multi-hop Time-aware Attentive Memory network to integrate long-term and short-term preferences.
Our approach is scalable for candidate retrieval tasks and can be viewed as a non-linear generalization of latent factorization for dot-product based Top-K recommendation.
arXiv Detail & Related papers (2020-05-18T11:29:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.