IDNP: Interest Dynamics Modeling using Generative Neural Processes for
Sequential Recommendation
- URL: http://arxiv.org/abs/2208.04600v1
- Date: Tue, 9 Aug 2022 08:33:32 GMT
- Title: IDNP: Interest Dynamics Modeling using Generative Neural Processes for
Sequential Recommendation
- Authors: Jing Du, Zesheng Ye, Lina Yao, Bin Guo, Zhiwen Yu
- Abstract summary: We present an Interest Dynamics modeling framework using generative Neural Processes, coined IDNP, to model user interests from a functional perspective.
Our model outperforms state-of-the-art methods on various evaluation metrics.
- Score: 40.4445022666304
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Recent sequential recommendation models rely increasingly on consecutive
short-term user-item interaction sequences to model user interests. These
approaches have, however, raised concerns about both short- and long-term
interests: (1) short-term: an interaction sequence may not stem from a single,
monolithic interest but from several intertwined interests, even within a short
period of time, so these models fail to capture skip behaviors; (2) long-term:
interaction sequences are mostly observed sparsely at discrete intervals rather
than consecutively over the long run. This makes it difficult to infer
long-term interests, since only discrete interest
representations can be derived, without taking into account interest dynamics
across sequences. In this study, we address these concerns by learning (1)
multi-scale representations of short-term interests; and (2) dynamics-aware
representations of long-term interests. To this end, we present an Interest
Dynamics modeling framework using generative Neural Processes, coined IDNP, to
model user interests from a
functional perspective. IDNP learns a global interest function family to define
each user's long-term interest as a function instantiation, manifesting
interest dynamics through function continuity. Specifically, IDNP first encodes
each user's short-term interactions into multi-scale representations, which are
then summarized as user context. By combining latent global interest with user
context, IDNP then reconstructs long-term user interest functions and predicts
interactions at the upcoming query timestep. Moreover, IDNP can model such interest
functions even when interaction sequences are limited and non-consecutive.
Extensive experiments on four real-world datasets demonstrate that our model
outperforms state-of-the-art methods on various evaluation metrics.
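To make this pipeline concrete, the following is a minimal, illustrative PyTorch sketch of a Neural-Process-style interest model in the spirit of the abstract: short-term interactions are encoded at multiple scales and summarized as a user context, a latent global interest variable is sampled from that context, and a decoder combines the latent, the context, and a query timestep to score items. All module names, layer choices, and dimensions here are assumptions made for illustration; they are not taken from the paper's implementation.

```python
# Hedged sketch of a Neural-Process-style interest model (assumed design, not IDNP's code).
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.distributions import Normal


class MultiScaleEncoder(nn.Module):
    """Encode a short-term interaction sequence at several temporal scales."""

    def __init__(self, dim, scales=(1, 3, 5)):
        super().__init__()
        self.convs = nn.ModuleList(
            [nn.Conv1d(dim, dim, kernel_size=k, padding=k // 2) for k in scales]
        )

    def forward(self, x):                        # x: (batch, seq_len, dim)
        h = x.transpose(1, 2)                    # -> (batch, dim, seq_len) for Conv1d
        # One representation per scale, mean-pooled over time, then averaged into a context.
        reps = [F.relu(conv(h)).mean(dim=-1) for conv in self.convs]
        return torch.stack(reps, dim=0).mean(dim=0)             # (batch, dim)


class NeuralProcessInterestModel(nn.Module):
    """Latent global interest + user context -> interest at a query timestep."""

    def __init__(self, num_items, dim=64):
        super().__init__()
        self.item_emb = nn.Embedding(num_items, dim)
        self.encoder = MultiScaleEncoder(dim)
        self.to_latent = nn.Linear(dim, 2 * dim)                 # mean and pre-scale of z
        self.time_emb = nn.Linear(1, dim)                        # embed the query timestep
        self.decoder = nn.Sequential(
            nn.Linear(3 * dim, dim), nn.ReLU(), nn.Linear(dim, dim)
        )

    def forward(self, context_items, query_time):
        # context_items: (batch, seq_len) item ids from short-term interaction sequences
        # query_time:    (batch, 1) normalized timestep at which to predict
        context = self.encoder(self.item_emb(context_items))     # user context (batch, dim)
        mu, raw_scale = self.to_latent(context).chunk(2, dim=-1)
        z_dist = Normal(mu, F.softplus(raw_scale) + 1e-4)
        z = z_dist.rsample()                                     # latent global interest
        q = self.time_emb(query_time)
        interest = self.decoder(torch.cat([z, context, q], dim=-1))
        logits = interest @ self.item_emb.weight.T               # score all items at the query step
        return logits, z_dist


# Example with random data:
model = NeuralProcessInterestModel(num_items=1000)
logits, z_dist = model(torch.randint(0, 1000, (4, 20)), torch.rand(4, 1))
print(logits.shape)  # torch.Size([4, 1000])
```

In a Neural-Process-style training loop, the returned latent distribution would typically be regularized toward a prior (for example with a KL term) while the logits are fit to the observed interaction at the query timestep; that objective is omitted from this sketch.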
Related papers
- Multi-granularity Interest Retrieval and Refinement Network for Long-Term User Behavior Modeling in CTR Prediction [68.90783662117936]
Click-through Rate (CTR) prediction is crucial for online personalization platforms.
Recent advancements have shown that modeling rich user behaviors can significantly improve the performance of CTR prediction.
We propose Multi-granularity Interest Retrieval and Refinement Network (MIRRN)
arXiv Detail & Related papers (2024-11-22T15:29:05Z)
- LLM-assisted Explicit and Implicit Multi-interest Learning Framework for Sequential Recommendation [50.98046887582194]
We propose an explicit and implicit multi-interest learning framework to model user interests on two levels: behavior and semantics.
The proposed EIMF framework effectively and efficiently combines small models with LLMs to improve the accuracy of multi-interest modeling.
arXiv Detail & Related papers (2024-11-14T13:00:23Z)
- Denoising Long- and Short-term Interests for Sequential Recommendation [11.830033570949944]
We propose a Long- and Short-term Interest Denoising Network (LSIDN)
We employ a session-level interest extraction and evolution strategy to avoid introducing inter-session behavioral noise into long-term interest modeling.
Results of experiments on two public datasets show that LSIDN consistently outperforms state-of-the-art models.
arXiv Detail & Related papers (2024-07-20T03:52:14Z)
- Learning Sequence Representations by Non-local Recurrent Neural Memory [61.65105481899744]
We propose a Non-local Recurrent Neural Memory (NRNM) for supervised sequence representation learning.
Our model is able to capture long-range dependencies and to distill latent high-level features.
Our model compares favorably against other state-of-the-art methods specifically designed for each of these sequence applications.
arXiv Detail & Related papers (2022-07-20T07:26:15Z)
- Learning Dual Dynamic Representations on Time-Sliced User-Item Interaction Graphs for Sequential Recommendation [62.30552176649873]
We devise a novel Dynamic Representation Learning model for Sequential Recommendation (DRL-SRe)
To better model user-item interactions and characterize the dynamics from both sides, the proposed model builds a global user-item interaction graph for each time slice.
To enable the model to capture fine-grained temporal information, we propose an auxiliary temporal prediction task over consecutive time slices.
arXiv Detail & Related papers (2021-09-24T07:44:27Z)
- Context-aware short-term interest first model for session-based recommendation [0.0]
We propose a context-aware short-term interest first model (CASIF)
The aim of this paper is to improve the accuracy of recommendations by combining context and short-term interest.
In the end, the short-term and long-term interests are combined into a final interest and multiplied by the candidate vector to obtain the recommendation probability.
arXiv Detail & Related papers (2021-03-29T11:36:00Z)
- Dynamic Memory based Attention Network for Sequential Recommendation [79.5901228623551]
We propose a novel long sequential recommendation model called Dynamic Memory-based Attention Network (DMAN)
It segments the overall long behavior sequence into a series of sub-sequences, then trains the model and maintains a set of memory blocks to preserve long-term interests of users.
Based on the dynamic memory, the user's short-term and long-term interests can be explicitly extracted and combined for efficient joint recommendation.
arXiv Detail & Related papers (2021-02-18T11:08:54Z)
- MRIF: Multi-resolution Interest Fusion for Recommendation [0.0]
This paper presents a multi-resolution interest fusion model (MRIF) that takes both properties of users' interests into consideration.
The proposed model is capable of capturing the dynamic changes in users' interests at different temporal ranges, and provides an effective way to combine a group of multi-resolution user interests to make predictions (a minimal illustrative sketch of this idea follows the list).
arXiv Detail & Related papers (2020-07-08T02:32:15Z)
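The multi-resolution idea summarized in the MRIF entry above can be pictured with a small, hedged sketch: behavior embeddings are average-pooled over windows of several sizes to obtain one interest vector per temporal resolution, and the resolutions are fused with a learned attention query. The window sizes and attention-based fusion below are assumptions chosen for illustration, not the paper's actual design.

```python
# Illustrative multi-resolution interest fusion (assumed design, not MRIF's exact one).
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiResolutionInterest(nn.Module):
    def __init__(self, dim=64, windows=(2, 4, 8)):
        super().__init__()
        self.windows = windows                        # pooling window per resolution
        self.query = nn.Parameter(torch.randn(dim))   # learned fusion query

    def forward(self, seq):                           # seq: (batch, seq_len, dim), seq_len >= max(windows)
        interests = []
        for w in self.windows:
            # Average-pool behaviors over non-overlapping windows of size w, then summarize over time.
            pooled = F.avg_pool1d(seq.transpose(1, 2), kernel_size=w, stride=w)
            interests.append(pooled.mean(dim=-1))     # (batch, dim) interest at this resolution
        stacked = torch.stack(interests, dim=1)       # (batch, num_resolutions, dim)
        attn = torch.softmax(stacked @ self.query, dim=1)        # attention over resolutions
        return (attn.unsqueeze(-1) * stacked).sum(dim=1)         # fused interest: (batch, dim)


# Example with random behavior embeddings:
fused = MultiResolutionInterest()(torch.randn(4, 16, 64))
print(fused.shape)  # torch.Size([4, 64])
```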