Denoising User-aware Memory Network for Recommendation
- URL: http://arxiv.org/abs/2107.05474v1
- Date: Mon, 12 Jul 2021 14:39:36 GMT
- Title: Denoising User-aware Memory Network for Recommendation
- Authors: Zhi Bian, Shaojun Zhou, Hao Fu, Qihong Yang, Zhenqi Sun, Junjie Tang,
Guiquan Liu, Kaikui Liu, Xiaolong Li
- Abstract summary: We propose a novel CTR model named denoising user-aware memory network (DUMN)
DUMN uses the representation of explicit feedback to purify the representation of implicit feedback, and effectively denoise the implicit feedback.
Experiments on two real e-commerce user behavior datasets show that DUMN has a significant improvement over the state-of-the-art baselines.
- Score: 11.145186013006375
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: For better user satisfaction and business effectiveness, increasing
attention has been paid to sequence-based recommendation systems, which infer
the evolution of users' dynamic preferences, and recent studies have noted that
this evolution can be better understood from implicit and explicit feedback
sequences. However, most existing recommendation techniques do not consider the
noise contained in implicit feedback, which leads to biased representations of
user interest and suboptimal recommendation performance. Meanwhile, existing
methods use item sequences to capture the evolution of user interest; their
performance is limited by the sequence length, and they cannot effectively
model long-term interest over a long period of time. Based on these
observations, we propose a novel CTR model named denoising user-aware memory
network (DUMN). Specifically, the framework: (i) proposes a feature
purification module based on orthogonal mapping, which uses the representation
of explicit feedback to purify the representation of implicit feedback and
effectively denoises it; (ii) designs a user memory network that models
long-term interests in a fine-grained way by improving the memory network, an
aspect ignored by existing methods; and (iii) develops a preference-aware
interactive representation component that fuses users' long-term and
short-term interests via gating to capture the evolution of unbiased user
preferences. Extensive experiments on two real e-commerce user behavior
datasets show that DUMN yields a significant improvement over state-of-the-art
baselines. The code of the DUMN model has been uploaded as supplementary
material.
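To make the abstract's three components concrete, here is a minimal PyTorch sketch of (i) an orthogonal-mapping purification that uses an explicit-feedback representation to denoise an implicit-feedback one, (ii) a generic attention-style read over per-user memory slots standing in for the user memory network, and (iii) a gated fusion of long-term and short-term interests. All function names, tensor shapes, and the choice of which projection component is kept are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn


def orthogonal_purify(implicit: torch.Tensor, explicit: torch.Tensor) -> torch.Tensor:
    """Project `implicit` (batch, dim) onto `explicit` (batch, dim) and keep the
    component aligned with the explicit-feedback direction, treating the
    orthogonal remainder as noise (an assumed reading of the abstract)."""
    eps = 1e-8
    coeff = (implicit * explicit).sum(dim=-1, keepdim=True) / (
        (explicit * explicit).sum(dim=-1, keepdim=True) + eps
    )
    return coeff * explicit  # purified implicit-feedback representation


def memory_read(query: torch.Tensor, memory: torch.Tensor) -> torch.Tensor:
    """Generic attention read over per-user memory slots: query (batch, dim),
    memory (batch, slots, dim) -> long-term interest vector (batch, dim)."""
    scores = torch.softmax(torch.einsum("bd,bsd->bs", query, memory), dim=-1)
    return torch.einsum("bs,bsd->bd", scores, memory)


class GatedFusion(nn.Module):
    """Fuse long-term and short-term interest vectors with a learned sigmoid gate."""

    def __init__(self, dim: int) -> None:
        super().__init__()
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, long_term: torch.Tensor, short_term: torch.Tensor) -> torch.Tensor:
        g = torch.sigmoid(self.gate(torch.cat([long_term, short_term], dim=-1)))
        return g * long_term + (1.0 - g) * short_term


if __name__ == "__main__":
    batch, slots, dim = 4, 8, 32
    implicit = torch.randn(batch, dim)            # e.g. click-sequence representation
    explicit = torch.randn(batch, dim)            # e.g. like/dislike-sequence representation
    user_memory = torch.randn(batch, slots, dim)  # per-user long-term memory slots

    purified = orthogonal_purify(implicit, explicit)  # (i) denoised short-term signal
    long_term = memory_read(purified, user_memory)    # (ii) long-term interest read
    fused = GatedFusion(dim)(long_term, purified)     # (iii) gated interest fusion
    print(fused.shape)  # torch.Size([4, 32])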
Related papers
- Multi-granularity Interest Retrieval and Refinement Network for Long-Term User Behavior Modeling in CTR Prediction [68.90783662117936]
Click-through Rate (CTR) prediction is crucial for online personalization platforms.
Recent advancements have shown that modeling rich user behaviors can significantly improve the performance of CTR prediction.
We propose Multi-granularity Interest Retrieval and Refinement Network (MIRRN)
arXiv Detail & Related papers (2024-11-22T15:29:05Z)
- Incorporating Group Prior into Variational Inference for Tail-User Behavior Modeling in CTR Prediction [8.213386595519928]
We propose a novel variational inference approach, namely Group Prior Sampler Variational Inference (GPSVI)
GPSVI introduces group preferences as priors to refine latent user interests for tail users.
Rigorous analysis and extensive experiments demonstrate that GPSVI consistently improves the performance of tail users.
arXiv Detail & Related papers (2024-10-19T13:15:36Z)
- MISSRec: Pre-training and Transferring Multi-modal Interest-aware Sequence Representation for Recommendation [61.45986275328629]
We propose MISSRec, a multi-modal pre-training and transfer learning framework for sequential recommendation.
On the user side, we design a Transformer-based encoder-decoder model, where the contextual encoder learns to capture the sequence-level multi-modal user interests.
On the candidate item side, we adopt a dynamic fusion module to produce user-adaptive item representation.
arXiv Detail & Related papers (2023-08-22T04:06:56Z)
- Latent User Intent Modeling for Sequential Recommenders [92.66888409973495]
Sequential recommender models learn to predict the next items a user is likely to interact with based on his/her interaction history on the platform.
Most sequential recommenders, however, lack a higher-level understanding of user intents, which often drive user behaviors online.
Intent modeling is thus critical for understanding users and optimizing long-term user experience.
arXiv Detail & Related papers (2022-11-17T19:00:24Z)
- Modeling Dynamic User Preference via Dictionary Learning for Sequential Recommendation [133.8758914874593]
Capturing the dynamics in user preference is crucial to better predict user future behaviors because user preferences often drift over time.
Many existing recommendation algorithms -- including both shallow and deep ones -- often model such dynamics independently.
This paper considers the problem of embedding a user's sequential behavior into the latent space of user preferences.
arXiv Detail & Related papers (2022-04-02T03:23:46Z)
- Sequence Adaptation via Reinforcement Learning in Recommender Systems [8.909115457491522]
We propose the SAR model, which learns the sequential patterns and adjusts the sequence length of user-item interactions in a personalized manner.
In addition, we optimize a joint loss function to align the accuracy of the sequential recommendations with the expected cumulative rewards of the critic network.
Our experimental evaluation on four real-world datasets demonstrates the superiority of our proposed model over several baseline approaches.
arXiv Detail & Related papers (2021-07-31T13:56:46Z)
- From Implicit to Explicit feedback: A deep neural network for modeling sequential behaviours and long-short term preferences of online users [3.464871689508835]
Implicit and explicit feedback play different roles in producing useful recommendations.
We start from the hypothesis that a user's preference at a given time is a combination of long-term and short-term interests.
arXiv Detail & Related papers (2021-07-26T16:59:20Z)
- Dynamic Memory based Attention Network for Sequential Recommendation [79.5901228623551]
We propose a novel long sequential recommendation model called Dynamic Memory-based Attention Network (DMAN)
It segments the overall long behavior sequence into a series of sub-sequences, then trains the model and maintains a set of memory blocks to preserve long-term interests of users.
Based on the dynamic memory, the user's short-term and long-term interests can be explicitly extracted and combined for efficient joint recommendation.
arXiv Detail & Related papers (2021-02-18T11:08:54Z)
- MRIF: Multi-resolution Interest Fusion for Recommendation [0.0]
This paper presents a multi-resolution interest fusion model (MRIF) that takes both properties of users' interests into consideration.
The proposed model is capable of capturing the dynamic changes in users' interests at different temporal ranges and provides an effective way to combine a group of multi-resolution user interests to make predictions.
arXiv Detail & Related papers (2020-07-08T02:32:15Z)
- Sequential Recommender via Time-aware Attentive Memory Network [67.26862011527986]
We propose a temporal gating methodology to improve attention mechanism and recurrent units.
We also propose a Multi-hop Time-aware Attentive Memory network to integrate long-term and short-term preferences.
Our approach is scalable for candidate retrieval tasks and can be viewed as a non-linear generalization of latent factorization for dot-product based Top-K recommendation.
arXiv Detail & Related papers (2020-05-18T11:29:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.