LKPNR: LLM and KG for Personalized News Recommendation Framework
- URL: http://arxiv.org/abs/2308.12028v1
- Date: Wed, 23 Aug 2023 09:39:18 GMT
- Title: LKPNR: LLM and KG for Personalized News Recommendation Framework
- Authors: Chen Hao, Xie Runfeng, Cui Xiangyang, Yan Zhou, Wang Xin, Xuan
Zhanwei, Zhang Kai
- Abstract summary: This research presents a novel framework that combines Large Language Models (LLMs) and Knowledge Graphs (KGs) with the semantic representations of traditional methods.
Our method incorporates information about news entities and mines high-order structural information through multiple hops in the KG, thus alleviating the long-tail distribution challenge.
- Score: 4.4851420148166
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Accurately recommending candidate news articles to users is a basic challenge
faced by personalized news recommendation systems. Traditional methods usually
struggle to grasp the complex semantic information in news texts, resulting in
unsatisfactory recommendations. Moreover, these traditional methods favor
active users with rich historical behaviors and cannot effectively solve the
"long tail problem" of inactive users. To address these issues, this research
presents a novel general
framework that combines Large Language Models (LLMs) and Knowledge Graphs (KGs)
with the semantic representations of traditional methods. To improve
semantic understanding in complex news texts, we use LLMs' powerful text
understanding ability to generate news representations containing rich semantic
information. In addition, our method incorporates information about news
entities and mines high-order structural information through multiple hops in
the KG, thus alleviating the long-tail distribution challenge. Experimental
results demonstrate that, compared with various traditional models, the
framework significantly improves recommendation performance. The successful
integration of LLM and KG in our framework has established a feasible path for
achieving more accurate personalized recommendations in the news field. Our
code is available at https://github.com/Xuan-ZW/LKPNR.
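The fusion described in the abstract can be illustrated with a toy sketch: an LLM-derived text embedding for each article is concatenated with an entity embedding aggregated over multi-hop KG neighborhoods, and candidates are scored against a user vector built from clicked history. All names, dimensions, and the hop-decay weighting below are hypothetical stand-ins, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 8  # toy embedding size (illustrative only)

# Stand-ins for learned components: LLM text embeddings per article,
# entity embeddings, article-entity links, and KG adjacency.
llm_emb = {"news_a": rng.normal(size=DIM), "news_b": rng.normal(size=DIM)}
entity_emb = {e: rng.normal(size=DIM) for e in ["e1", "e2", "e3", "e4"]}
news_entities = {"news_a": ["e1", "e2"], "news_b": ["e3"]}
kg_neighbors = {"e1": ["e3"], "e2": ["e4"], "e3": ["e1"], "e4": []}

def kg_representation(news_id, hops=2, decay=0.5):
    """Average entity vectors over multi-hop KG neighborhoods,
    down-weighting more distant hops."""
    frontier = set(news_entities[news_id])
    acc, weight = np.zeros(DIM), 1.0
    for _ in range(hops):
        if not frontier:
            break
        acc += weight * np.mean([entity_emb[e] for e in frontier], axis=0)
        frontier = {n for e in frontier for n in kg_neighbors[e]}
        weight *= decay
    return acc

def news_representation(news_id):
    # Fuse the semantic (LLM) and structural (KG) views by concatenation.
    return np.concatenate([llm_emb[news_id], kg_representation(news_id)])

# User vector from clicked history; score a candidate by dot product.
user_vec = np.mean([news_representation("news_a")], axis=0)
score = float(user_vec @ news_representation("news_b"))
```

Because inactive users' sparse click histories still touch KG entities, the multi-hop structural term contributes signal even when the behavioral signal is thin, which is the intuition behind the long-tail claim.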
Related papers
- Personalized News Recommendation System via LLM Embedding and Co-Occurrence Patterns [6.4561443264763625]
In news recommendation (NR), systems must comprehend and process a vast amount of clicked news text to infer the probability of candidate news clicks.
In this paper, we propose a novel NR algorithm to reshape the news model via LLM Embedding and Co-Occurrence Pattern (LECOP).
Extensive experiments demonstrate the superior performance of our proposed novel method.
arXiv Detail & Related papers (2024-11-09T03:01:49Z)
- Comprehending Knowledge Graphs with Large Language Models for Recommender Systems [13.270018897057293]
We propose a novel method called CoLaKG, which leverages large language models for knowledge-aware recommendation.
We first extract subgraphs centered on each item from the KG and convert them into textual inputs for the LLM.
The LLM then outputs its comprehension of these item-centered subgraphs, which are subsequently transformed into semantic embeddings.
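The subgraph-to-text step of this pipeline can be sketched as follows: triples centered on an item are serialized into a prompt an LLM can read. The function name, triple format, and example data are hypothetical, not CoLaKG's actual implementation.

```python
def serialize_subgraph(item, triples):
    """Turn (head, relation, tail) triples centered on `item` into text."""
    facts = [f"{h} {r.replace('_', ' ')} {t}"
             for h, r, t in triples if h == item or t == item]
    return f"Item: {item}. Facts: " + "; ".join(facts) + "."

triples = [
    ("Inception", "directed_by", "Christopher Nolan"),
    ("Inception", "has_genre", "science fiction"),
    ("Interstellar", "directed_by", "Christopher Nolan"),
]
prompt = serialize_subgraph("Inception", triples)
print(prompt)
# prints: Item: Inception. Facts: Inception directed by Christopher Nolan;
#         Inception has genre science fiction.
```

The resulting string would then be fed to the LLM, whose output is pooled into a semantic embedding for the recommender, as the summary above describes.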
arXiv Detail & Related papers (2024-10-16T04:44:34Z)
- LLM-Powered Explanations: Unraveling Recommendations Through Subgraph Reasoning [40.53821858897774]
We introduce a novel recommender that synergizes Large Language Models (LLMs) and Knowledge Graphs (KGs) to enhance recommendation and provide interpretable results.
Our approach significantly enhances both the effectiveness and interpretability of recommender systems.
arXiv Detail & Related papers (2024-06-22T14:14:03Z)
- Knowledge Graph-Enhanced Large Language Models via Path Selection [58.228392005755026]
Large Language Models (LLMs) have shown unprecedented performance in various real-world applications.
LLMs are known to generate factually inaccurate outputs, a.k.a. the hallucination problem.
We propose a principled framework KELP with three stages to handle the above problems.
arXiv Detail & Related papers (2024-06-19T21:45:20Z)
- Robust and Scalable Model Editing for Large Language Models [75.95623066605259]
We propose EREN (Edit models by REading Notes) to improve the scalability and robustness of LLM editing.
Unlike existing techniques, it can integrate knowledge from multiple edits, and correctly respond to syntactically similar but semantically unrelated inputs.
arXiv Detail & Related papers (2024-03-26T06:57:23Z)
- Knowledge Graphs and Pre-trained Language Models enhanced Representation Learning for Conversational Recommender Systems [58.561904356651276]
We introduce the Knowledge-Enhanced Entity Representation Learning (KERL) framework to improve the semantic understanding of entities for Conversational recommender systems.
KERL uses a knowledge graph and a pre-trained language model to improve the semantic understanding of entities.
KERL achieves state-of-the-art results in both recommendation and response generation tasks.
arXiv Detail & Related papers (2023-12-18T06:41:23Z)
- Reformulating Sequential Recommendation: Learning Dynamic User Interest with Content-enriched Language Modeling [18.297332953450514]
We propose LANCER, which leverages the semantic understanding capabilities of pre-trained language models to generate personalized recommendations.
Our approach bridges the gap between language models and recommender systems, resulting in more human-like recommendations.
arXiv Detail & Related papers (2023-09-19T08:54:47Z)
- Exploring Large Language Model for Graph Data Understanding in Online Job Recommendations [63.19448893196642]
We present a novel framework that harnesses the rich contextual information and semantic representations provided by large language models to analyze behavior graphs.
By leveraging this capability, our framework enables personalized and accurate job recommendations for individual users.
arXiv Detail & Related papers (2023-07-10T11:29:41Z)
- KELM: Knowledge Enhanced Pre-Trained Language Representations with Message Passing on Hierarchical Relational Graphs [26.557447199727758]
We propose a novel knowledge-aware language model framework based on the fine-tuning process.
Our model can efficiently incorporate world knowledge from KGs into existing language models such as BERT.
arXiv Detail & Related papers (2021-09-09T12:39:17Z)
- CokeBERT: Contextual Knowledge Selection and Embedding towards Enhanced Pre-Trained Language Models [103.18329049830152]
We propose a novel framework named Coke to dynamically select contextual knowledge and embed knowledge context according to textual context.
Our experimental results show that Coke outperforms various baselines on typical knowledge-driven NLP tasks.
Coke can describe the semantics of text-related knowledge in a more interpretable form than the conventional PLMs.
arXiv Detail & Related papers (2020-09-29T12:29:04Z)
- Improving Conversational Recommender Systems via Knowledge Graph based Semantic Fusion [77.21442487537139]
Conversational recommender systems (CRS) aim to recommend high-quality items to users through interactive conversations.
First, the conversation data itself lacks sufficient contextual information for accurately understanding users' preferences.
Second, there is a semantic gap between natural language expression and item-level user preference.
arXiv Detail & Related papers (2020-07-08T11:14:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences.