Hypergraph Enhanced Knowledge Tree Prompt Learning for Next-Basket
Recommendation
- URL: http://arxiv.org/abs/2312.15851v1
- Date: Tue, 26 Dec 2023 02:12:21 GMT
- Title: Hypergraph Enhanced Knowledge Tree Prompt Learning for Next-Basket
Recommendation
- Authors: Zi-Feng Mai, Chang-Dong Wang, Zhongjie Zeng, Ya Li, Jiaquan Chen,
Philip S. Yu
- Abstract summary: Next-basket recommendation (NBR) aims to infer the items in the next basket given the corresponding basket sequence.
HEKP4NBR transforms the knowledge graph (KG) into prompts, namely the Knowledge Tree Prompt (KTP), to help the PLM encode the Out-Of-Vocabulary (OOV) item IDs.
A hypergraph convolutional module is designed to build a hypergraph based on item similarities measured by an MoE model from multiple aspects.
- Score: 50.55786122323965
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Next-basket recommendation (NBR) aims to infer the items in the next basket
given the corresponding basket sequence. Existing NBR methods are mainly based
on either message passing in a plain graph or transition modelling in a basket
sequence. However, these methods only consider point-to-point binary item
relations, while item dependencies in real-world scenarios are often
higher-order. Additionally, the importance of the same item varies across users
because user preferences differ, and the relations between items usually
involve multiple aspects. As pretrained language models (PLMs) excel at many
tasks in natural language processing (NLP) and computer vision (CV), many
researchers have sought to leverage PLMs to improve recommendation.
However, existing PLM-based recommendation methods degrade when encountering
Out-Of-Vocabulary (OOV) items, i.e., items whose IDs fall outside the PLM's
vocabulary and are therefore unintelligible to it. To address these challenges,
we propose a novel method, HEKP4NBR, which transforms the knowledge graph (KG)
into prompts, namely the Knowledge Tree Prompt (KTP), to help the PLM encode the
OOV item IDs in the user's basket sequence. A hypergraph convolutional module
builds a hypergraph based on item similarities measured by an MoE model from
multiple aspects and then applies convolution on the hypergraph to model
correlations among multiple items. Extensive experiments on two datasets built
from real company data validate the effectiveness of HEKP4NBR against multiple
state-of-the-art methods.
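The abstract does not spell out how the hypergraph is built or convolved, so the following is a minimal, hypothetical sketch rather than the authors' implementation: pairwise item similarities (which in HEKP4NBR would come from the MoE scorer) are thresholded into one hyperedge per anchor item, and an HGNN-style convolution propagates item embeddings over the resulting incidence matrix. The function names, the threshold, and the one-edge-per-anchor scheme are assumptions for illustration only.

```python
import numpy as np

def build_hypergraph(similarity, threshold=0.6):
    """Form one hyperedge per anchor item: the anchor plus every item whose
    (assumed MoE-estimated) similarity to it exceeds `threshold`."""
    n = similarity.shape[0]
    H = np.zeros((n, n))                      # incidence matrix: items x hyperedges
    for e in range(n):
        members = np.where(similarity[e] >= threshold)[0]
        H[members, e] = 1.0
        H[e, e] = 1.0                         # the anchor always joins its own edge
    return H

def hypergraph_conv(H, X, W):
    """One HGNN-style layer with unit hyperedge weights:
    relu(D_v^{-1/2} H D_e^{-1} H^T D_v^{-1/2} X W)."""
    Dv = H.sum(axis=1)                        # vertex degrees
    De = H.sum(axis=0)                        # hyperedge degrees
    Dv_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(Dv, 1e-12)))
    De_inv = np.diag(1.0 / np.maximum(De, 1e-12))
    A = Dv_inv_sqrt @ H @ De_inv @ H.T @ Dv_inv_sqrt
    return np.maximum(A @ X @ W, 0.0)         # ReLU

# Toy usage: 5 items, 8-dim embeddings, symmetric similarities standing in for MoE scores.
rng = np.random.default_rng(0)
sim = rng.uniform(size=(5, 5))
sim = (sim + sim.T) / 2
H = build_hypergraph(sim)
X = rng.normal(size=(5, 8))                   # item embeddings (e.g. from the PLM/KTP side)
W = rng.normal(size=(8, 8))                   # learnable projection
item_repr = hypergraph_conv(H, X, W)          # correlation-aware item representations
```

In the actual model these representations would be learned jointly with the PLM and the KTP prompts; the sketch only shows the general shape of hypergraph propagation.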
Related papers
- Multimodal Label Relevance Ranking via Reinforcement Learning [30.03543589748649]
We introduce a novel method for multimodal label relevance ranking, named Label Relevance Ranking with Proximal Policy Optimization (LR²PPO).
LR²PPO first utilizes partial order pairs in the target domain to train a reward model.
We meticulously design a state representation and a policy loss tailored for ranking tasks, enabling LR²PPO to boost the performance of the label relevance ranking model.
arXiv Detail & Related papers (2024-07-18T07:06:49Z)
- ELCoRec: Enhance Language Understanding with Co-Propagation of Numerical and Categorical Features for Recommendation [38.64175351885443]
Large language models (LLMs) have been flourishing in the natural language processing (NLP) domain.
Despite the intelligence shown by recommendation-oriented fine-tuned models, LLMs struggle to fully understand user behavior patterns.
Existing works only fine-tune a single LLM on the given text data without introducing this important information to it.
arXiv Detail & Related papers (2024-06-27T01:37:57Z)
- FLIP: Fine-grained Alignment between ID-based Models and Pretrained Language Models for CTR Prediction [49.510163437116645]
Click-through rate (CTR) prediction serves as a core function module in personalized online services.
Traditional ID-based models for CTR prediction take as inputs the one-hot encoded ID features of the tabular modality.
Pretrained Language Models (PLMs) have given rise to another paradigm, which takes as inputs the sentences of the textual modality.
We propose to conduct Fine-grained feature-level ALignment between ID-based Models and Pretrained Language Models (FLIP) for CTR prediction.
arXiv Detail & Related papers (2023-10-30T11:25:03Z)
- Masked and Swapped Sequence Modeling for Next Novel Basket Recommendation in Grocery Shopping [59.52585406731807]
Next basket recommendation (NBR) is the task of predicting the next set of items based on a sequence of already purchased baskets.
We formulate the next novel basket recommendation (NNBR) task, i.e., the task of recommending a basket that only consists of novel items.
arXiv Detail & Related papers (2023-08-02T17:52:37Z)
- UniKGQA: Unified Retrieval and Reasoning for Solving Multi-hop Question Answering Over Knowledge Graph [89.98762327725112]
Multi-hop Question Answering over Knowledge Graph (KGQA) aims to find the answer entities that are multiple hops away from the topic entities mentioned in a natural language question.
We propose UniKGQA, a novel approach for multi-hop KGQA task, by unifying retrieval and reasoning in both model architecture and parameter learning.
arXiv Detail & Related papers (2022-12-02T04:08:09Z)
- Virtual Relational Knowledge Graphs for Recommendation [15.978408290522852]
We argue that it is neither efficient nor effective to use every relation type for item encoding.
We first construct virtual relational graphs (VRKGs) by an unsupervised learning scheme.
We also employ the LWS mechanism on a user-item bipartite graph for user representation learning.
arXiv Detail & Related papers (2022-04-03T15:14:20Z)
- Visual Transformer for Task-aware Active Learning [49.903358393660724]
We present a novel pipeline for pool-based Active Learning.
Our method exploits accessible unlabelled examples during training to estimate their correlation with the labelled examples.
Visual Transformer models non-local visual concept dependency between labelled and unlabelled examples.
arXiv Detail & Related papers (2021-06-07T17:13:59Z)
- Deep Indexed Active Learning for Matching Heterogeneous Entity Representations [20.15233789156307]
We propose DIAL, a scalable active learning approach that jointly learns embeddings to maximize recall for blocking and accuracy for matching blocked pairs.
Experiments on five benchmark datasets and a multilingual record matching dataset show the effectiveness of our approach in terms of precision, recall and running time.
arXiv Detail & Related papers (2021-04-08T18:00:19Z)
- Differentiable Reasoning over a Virtual Knowledge Base [156.94984221342716]
We consider the task of answering complex multi-hop questions using a corpus as a virtual knowledge base (KB).
In particular, we describe a neural module, DrKIT, that traverses textual data like a KB, softly following paths of relations between mentions of entities in the corpus.
DrKIT is very efficient, processing 10-100x more queries per second than existing multi-hop systems (a rough, illustrative sketch of the soft path-following step appears after this list).
arXiv Detail & Related papers (2020-02-25T03:13:32Z)
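For intuition only, the "soft path following" that the DrKIT entry refers to can be pictured as a matrix-vector step: a probability distribution over entities is spread onto co-occurring mentions, each mention is reweighted by how well it matches the queried relation, and the mass is aggregated back onto entities. The dense toy matrices and scores below are illustrative assumptions, not the paper's implementation (which relies on large sparse indices).

```python
import numpy as np

def soft_follow(entity_dist, ent2mention, mention2ent, relation_scores):
    """One soft hop: spread entity probability mass onto co-occurring mentions,
    weight each mention by its relevance to the queried relation, then map the
    mass back onto entities and renormalize."""
    mention_mass = entity_dist @ ent2mention        # (n_ent,) -> (n_mention,)
    mention_mass = mention_mass * relation_scores   # per-mention relation relevance
    next_dist = mention_mass @ mention2ent          # (n_mention,) -> (n_ent,)
    total = next_dist.sum()
    return next_dist / total if total > 0 else next_dist

# Toy usage: 4 entities, 6 mentions, binary co-occurrence, made-up relevance scores.
rng = np.random.default_rng(1)
ent2mention = rng.integers(0, 2, size=(4, 6)).astype(float)
mention2ent = ent2mention.T.copy()
start = np.array([1.0, 0.0, 0.0, 0.0])                 # mass on the question's topic entity
rel_scores = np.array([0.9, 0.1, 0.4, 0.0, 0.7, 0.2])  # e.g. from a neural relation matcher
hop1 = soft_follow(start, ent2mention, mention2ent, rel_scores)
hop2 = soft_follow(hop1, ent2mention, mention2ent, rel_scores)  # second hop for multi-hop questions
```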