From Implicit to Explicit feedback: A deep neural network for modeling
sequential behaviours and long-short term preferences of online users
- URL: http://arxiv.org/abs/2107.12325v1
- Date: Mon, 26 Jul 2021 16:59:20 GMT
- Title: From Implicit to Explicit feedback: A deep neural network for modeling
sequential behaviours and long-short term preferences of online users
- Authors: Quyen Tran, Lam Tran, Linh Chu Hai, Linh Ngo Van, Khoat Than
- Abstract summary: Implicit and explicit feedback play different roles in producing a useful recommendation.
We start from the hypothesis that a user's preference at a given time is a combination of long-term and short-term interests.
- Score: 3.464871689508835
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this work, we examine the advantages of using multiple types of
behaviour in recommendation systems. Intuitively, a user typically performs
some implicit actions (e.g., clicks) before making an explicit decision (e.g.,
a purchase). Previous studies showed that implicit and explicit feedback play
different roles in producing a useful recommendation. However, these studies
either exploit implicit and explicit behaviour separately or ignore the
semantics of the sequential interactions between users and items. In addition,
we start from the hypothesis that a user's preference at a given time is a
combination of long-term and short-term interests. In this paper, we propose
several deep learning architectures. The first, Implicit to Explicit (ITE),
exploits users' interests through the sequence of their actions. The other
two, BERT-ITE and BERT-ITE-Si, are versions of ITE built on a Bidirectional
Encoder Representations from Transformers (BERT) based architecture; they
combine users' long- and short-term preferences, without and with side
information respectively, to enhance the user representation. Experimental
results on two large-scale datasets show that our models outperform previous
state-of-the-art ones and support our view that exploiting the
implicit-to-explicit order and combining long- and short-term preferences are
both effective.
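To make the architecture described in the abstract concrete, the following is a minimal, hypothetical sketch rather than the authors' released code: a recurrent encoder summarises the user's recent implicit actions (short-term interest), which is fused with a long-term user embedding to score a candidate item for an explicit action such as a purchase. The class, method, and dimension names (ITESketch, fuse, dim=64) are illustrative assumptions; the BERT-ITE variants would replace the recurrent encoder with a bidirectional Transformer encoder, and BERT-ITE-Si would additionally incorporate side information.

```python
# Minimal sketch of the implicit-to-explicit idea (an illustration, not the paper's code).
import torch
import torch.nn as nn

class ITESketch(nn.Module):
    """Encode implicit actions (clicks) to predict an explicit action (purchase)."""

    def __init__(self, num_users, num_items, dim=64):
        super().__init__()
        self.user_emb = nn.Embedding(num_users, dim)   # long-term preference
        self.item_emb = nn.Embedding(num_items, dim)
        # Short-term interest from the recent implicit-action sequence.
        # A BERT-ITE-style variant would use a bidirectional Transformer encoder here.
        self.seq_encoder = nn.GRU(dim, dim, batch_first=True)
        self.fuse = nn.Linear(2 * dim, dim)            # combine long- and short-term

    def forward(self, user_ids, clicked_item_seq, candidate_items):
        long_term = self.user_emb(user_ids)                        # (B, dim)
        seq = self.item_emb(clicked_item_seq)                      # (B, L, dim)
        _, short_term = self.seq_encoder(seq)                      # (1, B, dim)
        user_vec = self.fuse(torch.cat([long_term, short_term.squeeze(0)], dim=-1))
        cand = self.item_emb(candidate_items)                      # (B, dim)
        # Probability that the user takes the explicit action on the candidate item.
        return torch.sigmoid((user_vec * cand).sum(-1))

# Example: one user, three clicked items, one candidate item.
model = ITESketch(num_users=1000, num_items=5000)
prob = model(torch.tensor([3]), torch.tensor([[11, 42, 7]]), torch.tensor([99]))
```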
Related papers
- Our Model Achieves Excellent Performance on MovieLens: What Does it Mean? [43.3971105361606]
We conduct a meticulous analysis of the MovieLens dataset.
There are significant differences in user interactions at different stages of a user's engagement with the MovieLens platform.
We discuss the discrepancy between the interaction generation mechanism that is employed by the MovieLens system and that of typical real-world recommendation scenarios.
arXiv Detail & Related papers (2023-07-19T13:44:32Z)
- Latent User Intent Modeling for Sequential Recommenders [92.66888409973495]
Sequential recommender models learn to predict the next items a user is likely to interact with based on his/her interaction history on the platform.
Most sequential recommenders however lack a higher-level understanding of user intents, which often drive user behaviors online.
Intent modeling is thus critical for understanding users and optimizing long-term user experience.
arXiv Detail & Related papers (2022-11-17T19:00:24Z)
- Multi-Behavior Sequential Recommendation with Temporal Graph Transformer [66.10169268762014]
We tackle dynamic user-item relation learning with awareness of multi-behavior interaction patterns.
We propose a new Temporal Graph Transformer (TGT) recommendation framework to jointly capture dynamic short-term and long-range user-item interactive patterns.
arXiv Detail & Related papers (2022-06-06T15:42:54Z)
- Modeling Dynamic User Preference via Dictionary Learning for Sequential Recommendation [133.8758914874593]
Capturing the dynamics in user preference is crucial to better predict user future behaviors because user preferences often drift over time.
Many existing recommendation algorithms -- including both shallow and deep ones -- often model such dynamics independently.
This paper considers the problem of embedding a user's sequential behavior into the latent space of user preferences.
arXiv Detail & Related papers (2022-04-02T03:23:46Z)
- Denoising User-aware Memory Network for Recommendation [11.145186013006375]
We propose a novel CTR model named the denoising user-aware memory network (DUMN).
DUMN uses the representation of explicit feedback to purify the representation of implicit feedback and effectively denoises the implicit feedback.
Experiments on two real e-commerce user behavior datasets show that DUMN has a significant improvement over the state-of-the-art baselines.
arXiv Detail & Related papers (2021-07-12T14:39:36Z)
- Sparse-Interest Network for Sequential Recommendation [78.83064567614656]
We propose a novel Sparse Interest NEtwork (SINE) for sequential recommendation.
Our sparse-interest module can adaptively infer a sparse set of concepts for each user from the large concept pool.
SINE can achieve substantial improvement over state-of-the-art methods.
arXiv Detail & Related papers (2021-02-18T11:03:48Z)
- Multi-Interactive Attention Network for Fine-grained Feature Learning in CTR Prediction [48.267995749975476]
In the Click-Through Rate (CTR) prediction scenario, users' sequential behaviors are widely utilized to capture user interest.
Existing methods mostly utilize attention on the behavior of users, which is not always suitable for CTR prediction.
We propose a Multi-Interactive Attention Network (MIAN) to comprehensively extract the latent relationship among all kinds of fine-grained features.
arXiv Detail & Related papers (2020-12-13T05:46:19Z)
- Dynamic Embeddings for Interaction Prediction [2.5758502140236024]
In recommender systems (RSs), predicting the next item that a user interacts with is critical for user retention.
Recent studies have shown the effectiveness of modeling the mutual interactions between users and items using separate user and item embeddings.
We propose a novel method called DeePRed that addresses some of their limitations.
arXiv Detail & Related papers (2020-11-10T16:04:46Z)
- MRIF: Multi-resolution Interest Fusion for Recommendation [0.0]
This paper presents a multi-resolution interest fusion model (MRIF) that takes both properties of users' interests into consideration.
The proposed model is capable of capturing the dynamic changes in users' interests at different temporal ranges and provides an effective way to combine a group of multi-resolution user interests to make predictions.
arXiv Detail & Related papers (2020-07-08T02:32:15Z)
- Sequential Recommender via Time-aware Attentive Memory Network [67.26862011527986]
We propose a temporal gating methodology to improve the attention mechanism and recurrent units.
We also propose a Multi-hop Time-aware Attentive Memory network to integrate long-term and short-term preferences (a minimal sketch of such a temporal gate follows this list).
Our approach is scalable for candidate retrieval tasks and can be viewed as a non-linear generalization of latent factorization for dot-product based Top-K recommendation.
arXiv Detail & Related papers (2020-05-18T11:29:38Z)
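A recurring theme across these related papers and the main paper is the combination of long-term and short-term preferences; the last entry above does this by gating attention over the interaction history with time information. The snippet below is a minimal illustrative sketch of that general idea, not the implementation of any cited paper; the class name TimeAwareAttention and the exponential-decay gate are assumptions made for clarity.

```python
# Minimal sketch of time-gated attention over a user's interaction history
# (assumed names and shapes; not the cited papers' code).
import torch
import torch.nn as nn

class TimeAwareAttention(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.query = nn.Linear(dim, dim)
        self.key = nn.Linear(dim, dim)
        self.decay = nn.Parameter(torch.tensor(0.1))  # learnable time-decay rate

    def forward(self, target, history, time_gaps):
        # target: (B, dim) current context; history: (B, L, dim) past interactions;
        # time_gaps: (B, L) elapsed time since each past interaction.
        q = self.query(target).unsqueeze(1)                   # (B, 1, dim)
        k = self.key(history)                                 # (B, L, dim)
        scores = (q * k).sum(-1) / k.size(-1) ** 0.5          # scaled dot-product scores
        gate = torch.exp(-torch.abs(self.decay) * time_gaps)  # temporal gate in (0, 1]
        weights = torch.softmax(scores + torch.log(gate + 1e-9), dim=-1)
        return (weights.unsqueeze(-1) * history).sum(dim=1)   # (B, dim) interest vector

# Example: attend over four past interactions with growing time gaps.
attn = TimeAwareAttention()
out = attn(torch.randn(1, 64), torch.randn(1, 4, 64),
           torch.tensor([[1.0, 5.0, 20.0, 90.0]]))
```

Recent interactions receive weights close to the raw attention scores, while older ones are damped by the gate, so short-term interest dominates without discarding long-term behaviour.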