Modeling Personalized Item Frequency Information for Next-basket
Recommendation
- URL: http://arxiv.org/abs/2006.00556v1
- Date: Sun, 31 May 2020 16:42:39 GMT
- Title: Modeling Personalized Item Frequency Information for Next-basket
Recommendation
- Authors: Haoji Hu and Xiangnan He and Jinyang Gao and Zhi-Li Zhang
- Abstract summary: Next-basket recommendation (NBR) is prevalent in the e-commerce and retail industries.
We argue that existing RNNs cannot directly capture item frequency information in the recommendation scenario.
We propose a simple item frequency based k-nearest neighbors (kNN) method to directly utilize these critical signals.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Next-basket recommendation (NBR) is prevalent in the e-commerce and
retail industries. In this scenario, a user purchases a set of items (a basket)
at a time, and NBR performs sequential modeling and recommendation over the
resulting sequence of baskets. NBR is in general more complex than the widely
studied sequential (session-based) recommendation, which recommends the next
item based on a sequence of individual items. Recurrent neural networks (RNNs)
have proved very effective for sequential modeling and have thus been adapted
for NBR. However, we argue that existing RNNs cannot directly capture item
frequency information in the recommendation scenario.
Through careful analysis of real-world datasets, we find that personalized
item frequency (PIF) information (which records the number of times each item
has been purchased by a user) provides two critical signals for NBR. However,
this has been largely ignored by existing methods. Even though existing methods
such as RNN-based methods have strong representation ability, our empirical
results show that they fail to learn and capture PIF. As a result, existing
methods cannot fully exploit the critical signals contained in PIF. Given this
inherent limitation of RNNs, we propose a simple item-frequency-based
k-nearest neighbors (kNN) method to directly utilize these critical signals.
We evaluate our method on four public real-world datasets. Despite its
relative simplicity, our method frequently outperforms state-of-the-art NBR
methods -- including deep learning based methods using RNNs -- when patterns
associated with PIF play an important role in the data.
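The frequency-based kNN idea described in the abstract can be sketched roughly as follows. This is a minimal illustration only, not the authors' exact formulation: the function names, the use of cosine similarity, and the simple blending of a user's own PIF vector with the average PIF of the nearest neighbors are all assumptions.

```python
import numpy as np

def build_pif(baskets, n_items):
    """Personalized item frequency: how often each item appears
    across a single user's sequence of baskets."""
    pif = np.zeros(n_items)
    for basket in baskets:
        for item in basket:
            pif[item] += 1
    return pif

def recommend(user_pif, all_pifs, k=3, top_n=2):
    """Score items by blending the target user's PIF with the mean
    PIF of the k most similar users (cosine similarity over PIF
    vectors), then return the indices of the top-scoring items.
    `all_pifs` is a 2-D array with one PIF vector per user."""
    def cos(a, b):
        na, nb = np.linalg.norm(a), np.linalg.norm(b)
        return (a @ b) / (na * nb) if na and nb else 0.0

    sims = np.array([cos(user_pif, p) for p in all_pifs])
    neighbors = np.argsort(-sims)[:k]
    scores = user_pif + all_pifs[neighbors].mean(axis=0)
    return [int(i) for i in np.argsort(-scores)[:top_n]]
```

The key point of this sketch is that the PIF vector is used directly, rather than being compressed through a recurrent hidden state, which is what the paper argues RNN-based methods fail to do.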
Related papers
- Hypergraph Enhanced Knowledge Tree Prompt Learning for Next-Basket
Recommendation [50.55786122323965]
Next-basket recommendation (NBR) aims to infer the items in the next basket given the corresponding basket sequence.
HEKP4NBR transforms the knowledge graph (KG) into prompts, namely Knowledge Tree Prompt (KTP), to help PLM encode the Out-Of-Vocabulary (OOV) item IDs.
A hypergraph convolutional module is designed to build a hypergraph based on item similarities measured by an MoE model from multiple aspects.
arXiv Detail & Related papers (2023-12-26T02:12:21Z) - Large-scale Pre-trained Models are Surprisingly Strong in Incremental Novel Class Discovery [76.63807209414789]
We challenge the status quo in class-iNCD and propose a learning paradigm where class discovery occurs continuously and in a truly unsupervised manner.
We propose simple baselines, composed of a frozen PTM backbone and a learnable linear classifier, that are not only simple to implement but also resilient under longer learning scenarios.
arXiv Detail & Related papers (2023-03-28T13:47:16Z) - Effective and Efficient Training for Sequential Recommendation using
Recency Sampling [91.02268704681124]
We propose a novel Recency-based Sampling of Sequences training objective.
We show that models enhanced with our method can achieve performance exceeding or very close to the state-of-the-art BERT4Rec.
arXiv Detail & Related papers (2022-07-06T13:06:31Z) - WSLRec: Weakly Supervised Learning for Neural Sequential Recommendation
Models [24.455665093145818]
We propose a novel model-agnostic training approach called WSLRec, which adopts a three-stage framework: pre-training, top-k mining, and fine-tuning.
WSLRec resolves the incompleteness problem by pre-training models on extra weak supervision from model-free methods like BR and ItemCF, while resolving the inaccuracy problem by leveraging top-k mining to screen out reliable user-item relevance from the weak supervision for fine-tuning.
arXiv Detail & Related papers (2022-02-28T08:55:12Z) - Filter-enhanced MLP is All You Need for Sequential Recommendation [89.0974365344997]
In online platforms, logged user behavior data inevitably contains noise.
We borrow the idea of filtering algorithms from signal processing that attenuates the noise in the frequency domain.
We propose FMLP-Rec, an all-MLP model with learnable filters for the sequential recommendation task.
arXiv Detail & Related papers (2022-02-28T05:49:35Z) - A Next Basket Recommendation Reality Check [48.29308926607474]
The goal of a next basket recommendation (NBR) system is to recommend items for the next basket for a user, based on the sequence of their prior baskets.
We provide a novel angle on the evaluation of next basket recommendation methods, centered on the distinction between repetition and exploration.
We propose a set of metrics that measure the repeat/explore ratio and performance of NBR models.
arXiv Detail & Related papers (2021-09-29T07:14:22Z) - Being a Bit Frequentist Improves Bayesian Neural Networks [76.73339435080446]
We show that OOD-trained BNNs are competitive with, if not better than, recent frequentist baselines.
This work provides strong baselines for future work in both Bayesian and frequentist UQ.
arXiv Detail & Related papers (2021-06-18T11:22:42Z) - Counterfactual Explanations for Neural Recommenders [10.880181451789266]
We propose ACCENT, the first general framework for finding counterfactual explanations for neural recommenders.
We use ACCENT to generate counterfactual explanations for two popular neural models.
arXiv Detail & Related papers (2021-05-11T13:16:18Z) - RetaGNN: Relational Temporal Attentive Graph Neural Networks for
Holistic Sequential Recommendation [11.62499965678381]
Sequential recommendation (SR) aims to accurately recommend a list of items for a user based on the items she has recently accessed.
We propose a novel deep learning-based model, Relational Temporal Attentive Graph Neural Networks (RetaGNN), for holistic SR.
arXiv Detail & Related papers (2021-01-29T08:08:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.