LKPNR: LLM and KG for Personalized News Recommendation Framework
- URL: http://arxiv.org/abs/2308.12028v1
- Date: Wed, 23 Aug 2023 09:39:18 GMT
- Title: LKPNR: LLM and KG for Personalized News Recommendation Framework
- Authors: Chen Hao, Xie Runfeng, Cui Xiangyang, Yan Zhou, Wang Xin, Xuan Zhanwei, Zhang Kai
- Abstract summary: This research presents a novel framework that integrates Large Language Models (LLMs) and Knowledge Graphs (KGs) with the semantic representations of traditional methods.
Our method combines news entity information and mines high-order structural information through multiple hops in the KG, thus alleviating the long-tail distribution challenge.
- Score: 4.4851420148166
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Accurately recommending candidate news articles to users is a basic challenge
faced by personalized news recommendation systems. Traditional methods usually
struggle to grasp the complex semantic information in news texts, resulting in
unsatisfactory recommendation results. Moreover, these traditional methods favor
active users with rich historical behaviors and cannot effectively solve the
"long-tail problem" of inactive users. To address these issues, this research
presents a novel general framework that integrates Large Language Models (LLMs)
and Knowledge Graphs (KGs) with the semantic representations of traditional
methods. To improve semantic understanding of complex news texts, we use the
powerful text-understanding ability of LLMs to generate news representations
containing rich semantic information. In addition, our method combines news
entity information and mines high-order structural information through multiple
hops in the KG, thus alleviating the long-tail distribution challenge.
Experimental results demonstrate that, compared with various traditional models,
the framework significantly improves recommendation performance. The successful
integration of LLMs and KGs in our framework establishes a feasible path toward
more accurate personalized recommendations in the news field. Our code is
available at https://github.com/Xuan-ZW/LKPNR.
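To make the abstract's two-part design concrete, the following is a minimal, illustrative sketch of the idea: a news item's final representation fuses a semantic vector from an LLM encoder with a structural vector aggregated over multi-hop KG neighbors of the entities mentioned in the news. All names, shapes, and the toy encoder here are assumptions for illustration, not the paper's actual implementation (see the linked repository for that).

```python
def llm_encode(text):
    """Stand-in for an LLM text encoder: returns a toy 4-dim vector.
    A real system would run a frozen LLM and pool its hidden states."""
    h = [0.0] * 4
    for i, ch in enumerate(text):
        h[i % 4] += ord(ch) / 1000.0
    return h

def multi_hop_aggregate(kg, seed_entities, entity_emb, hops=2):
    """Average the embeddings of all entities reachable within `hops`
    hops of the seed entities; this is the 'high-order structural
    information' component, simplified to a plain neighborhood mean."""
    frontier, visited = set(seed_entities), set(seed_entities)
    for _ in range(hops):
        frontier = {n for e in frontier for n in kg.get(e, [])} - visited
        visited |= frontier
    vecs = [entity_emb[e] for e in visited if e in entity_emb]
    if not vecs:
        return [0.0] * 4
    return [sum(col) / len(vecs) for col in zip(*vecs)]

def news_representation(text, entities, kg, entity_emb, alpha=0.5):
    """Fuse the LLM semantic view with the KG structural view."""
    sem = llm_encode(text)
    struct = multi_hop_aggregate(kg, entities, entity_emb)
    return [alpha * s + (1 - alpha) * t for s, t in zip(sem, struct)]
```

Here `alpha` fixes the balance between the semantic and structural views; the actual framework would learn this fusion rather than use a hand-set scalar.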
Related papers
- Boosting Knowledge Graph-based Recommendations through Confidence-Aware Augmentation with Large Language Models [19.28217321004791]
Large Language Models (LLMs) offer a promising way to improve the quality and relevance of Knowledge Graphs for recommendation tasks.
We propose the Confidence-aware KG-based Recommendation Framework with LLM Augmentation (CKG-LLMA), a novel framework that combines KGs and LLMs for recommendation tasks.
The framework includes: (1) an LLM-based subgraph augmenter for enriching KGs with high-quality information, (2) a confidence-aware message propagation mechanism to filter noisy triplets, and (3) a dual-view contrastive learning method to integrate user-item interactions and KG data.
arXiv Detail & Related papers (2025-02-06T02:06:48Z) - LLM is Knowledge Graph Reasoner: LLM's Intuition-aware Knowledge Graph Reasoning for Cold-start Sequential Recommendation [47.34949656215159]
Large Language Models (LLMs) can be considered databases with a wealth of knowledge learned from web data.
We propose an LLM's Intuition-aware Knowledge graph Reasoning model (LIKR).
Our model outperforms state-of-the-art recommendation methods in cold-start sequential recommendation scenarios.
arXiv Detail & Related papers (2024-12-17T01:52:15Z) - Unveiling User Preferences: A Knowledge Graph and LLM-Driven Approach for Conversational Recommendation [55.5687800992432]
We propose a plug-and-play framework that synergizes Large Language Models (LLMs) and Knowledge Graphs (KGs) to unveil user preferences.
This enables the LLM to transform KG entities into concise natural language descriptions, allowing it to comprehend domain-specific knowledge.
arXiv Detail & Related papers (2024-11-16T11:47:21Z) - Personalized News Recommendation System via LLM Embedding and Co-Occurrence Patterns [6.4561443264763625]
In news recommendation (NR), systems must comprehend and process a vast amount of clicked news text to infer the probability of candidate news clicks.
In this paper, we propose a novel NR algorithm to reshape the news model via LLM Embedding and Co-Occurrence Pattern (LECOP).
Extensive experiments demonstrate the superior performance of our proposed novel method.
arXiv Detail & Related papers (2024-11-09T03:01:49Z) - Comprehending Knowledge Graphs with Large Language Models for Recommender Systems [13.270018897057293]
We propose CoLaKG, a novel method that leverages large language models (LLMs) to comprehend knowledge graphs.
CoLaKG uses this LLM-derived understanding to improve KG-based recommendations.
arXiv Detail & Related papers (2024-10-16T04:44:34Z) - Knowledge Graph-Enhanced Large Language Models via Path Selection [58.228392005755026]
Large Language Models (LLMs) have shown unprecedented performance in various real-world applications.
LLMs are known to generate factually inaccurate outputs, a.k.a. the hallucination problem.
We propose a principled framework KELP with three stages to handle the above problems.
arXiv Detail & Related papers (2024-06-19T21:45:20Z) - Robust and Scalable Model Editing for Large Language Models [75.95623066605259]
We propose EREN (Edit models by REading Notes) to improve the scalability and robustness of LLM editing.
Unlike existing techniques, it can integrate knowledge from multiple edits, and correctly respond to syntactically similar but semantically unrelated inputs.
arXiv Detail & Related papers (2024-03-26T06:57:23Z) - Knowledge Graphs and Pre-trained Language Models enhanced Representation Learning for Conversational Recommender Systems [58.561904356651276]
We introduce the Knowledge-Enhanced Entity Representation Learning (KERL) framework to improve the semantic understanding of entities in conversational recommender systems.
KERL uses a knowledge graph and a pre-trained language model to improve the semantic understanding of entities.
KERL achieves state-of-the-art results in both recommendation and response generation tasks.
arXiv Detail & Related papers (2023-12-18T06:41:23Z) - Exploring Large Language Model for Graph Data Understanding in Online Job Recommendations [63.19448893196642]
We present a novel framework that harnesses the rich contextual information and semantic representations provided by large language models to analyze behavior graphs.
By leveraging this capability, our framework enables personalized and accurate job recommendations for individual users.
arXiv Detail & Related papers (2023-07-10T11:29:41Z) - KELM: Knowledge Enhanced Pre-Trained Language Representations with Message Passing on Hierarchical Relational Graphs [26.557447199727758]
We propose a novel knowledge-aware language model framework based on the fine-tuning process.
Our model can efficiently incorporate world knowledge from KGs into existing language models such as BERT.
arXiv Detail & Related papers (2021-09-09T12:39:17Z) - CokeBERT: Contextual Knowledge Selection and Embedding towards Enhanced Pre-Trained Language Models [103.18329049830152]
We propose a novel framework named Coke to dynamically select contextual knowledge and embed knowledge context according to textual context.
Our experimental results show that Coke outperforms various baselines on typical knowledge-driven NLP tasks.
Coke can describe the semantics of text-related knowledge in a more interpretable form than the conventional PLMs.
arXiv Detail & Related papers (2020-09-29T12:29:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.