Comprehending Knowledge Graphs with Large Language Models for Recommender Systems
- URL: http://arxiv.org/abs/2410.12229v2
- Date: Fri, 07 Feb 2025 09:08:17 GMT
- Title: Comprehending Knowledge Graphs with Large Language Models for Recommender Systems
- Authors: Ziqiang Cui, Yunpeng Weng, Xing Tang, Fuyuan Lyu, Dugang Liu, Xiuqiang He, Chen Ma
- Abstract summary: We propose a novel method called CoLaKG, which leverages large language models (LLMs) to improve KG-based recommendations.
- Score: 13.270018897057293
- Abstract: In recent years, the introduction of knowledge graphs (KGs) has significantly advanced recommender systems by facilitating the discovery of potential associations between items. However, existing methods still face several limitations. First, most KGs suffer from missing facts or limited scopes. Second, existing methods convert textual information in KGs into IDs, resulting in the loss of natural semantic connections between different items. Third, existing methods struggle to capture high-order connections in the global KG. To address these limitations, we propose a novel method called CoLaKG, which leverages large language models (LLMs) to improve KG-based recommendations. The extensive world knowledge and remarkable reasoning capabilities of LLMs enable our method to supplement missing facts in KGs. Additionally, their powerful text understanding abilities allow for better utilization of semantic information. Specifically, CoLaKG extracts useful information from the KG at both local and global levels. By employing item-centered subgraph extraction and prompt engineering, it accurately captures the local KG. Subsequently, through retrieval-based neighbor enhancement, it supplements the current item by capturing related items from the entire KG, thereby effectively utilizing global information. The local and global information extracted by the LLM are effectively integrated into the recommendation model through a representation fusion module and a retrieval-augmented representation learning module, respectively, thereby improving recommendation performance. Extensive experiments on four real-world datasets demonstrate the superiority of our method.
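To make the described pipeline concrete, here is a minimal sketch of the two extraction levels: an item-centered subgraph serialized into an LLM prompt (local level) and retrieval of semantically similar items over the whole KG (global level). Everything in it is hypothetical (the toy triples, `build_item_prompt`, and the hash-based `embed` stand-in for an LLM embedding); CoLaKG's actual prompts, models, and fusion modules are defined in the paper.

```python
import hashlib
import numpy as np

# Toy KG as (head, relation, tail) triples; hypothetical data.
TRIPLES = [
    ("Inception", "directed_by", "Christopher Nolan"),
    ("Inception", "genre", "Sci-Fi"),
    ("Interstellar", "directed_by", "Christopher Nolan"),
    ("Interstellar", "genre", "Sci-Fi"),
    ("Titanic", "genre", "Romance"),
]

def item_subgraph(item, triples):
    """Item-centered subgraph extraction: all triples touching the item (local level)."""
    return [t for t in triples if item in (t[0], t[2])]

def build_item_prompt(item, triples):
    """Serialize the local subgraph into an LLM prompt (prompt engineering step)."""
    facts = "; ".join(f"{h} {r.replace('_', ' ')} {t}"
                      for h, r, t in item_subgraph(item, triples))
    return (f"Here are known facts about the item '{item}': {facts}. "
            "Summarize this item and infer any likely missing attributes.")

def embed(text):
    """Stand-in for an LLM embedding of the item summary (deterministic toy hash)."""
    seed = int(hashlib.md5(text.encode()).hexdigest(), 16) % (2**32)
    v = np.random.default_rng(seed).normal(size=64)
    return v / np.linalg.norm(v)

items = ["Inception", "Interstellar", "Titanic"]
vecs = {i: embed(build_item_prompt(i, TRIPLES)) for i in items}

def neighbor_enhance(item, k=1):
    """Retrieval-based neighbor enhancement: top-k semantically similar items
    from the entire KG supply the global-level signal."""
    sims = sorted(((float(vecs[j] @ vecs[item]), j) for j in items if j != item),
                  reverse=True)
    return [j for _, j in sims[:k]]

print(build_item_prompt("Inception", TRIPLES))
print("Neighbors of Inception:", neighbor_enhance("Inception"))
```

In the real method, the LLM's responses and the retrieved neighbors feed the representation fusion and retrieval-augmented representation learning modules rather than being printed.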
Related papers
- GLTW: Joint Improved Graph Transformer and LLM via Three-Word Language for Knowledge Graph Completion [52.026016846945424]
We propose a new method called GLTW, which encodes the structural information of KGs and integrates it with Large Language Models (LLMs) to improve KG completion.
Specifically, we introduce an improved Graph Transformer (iGT) that effectively encodes subgraphs with both local and global structural information.
Also, we develop a subgraph-based multi-classification training objective, using all entities within the KG as classification objects, to boost learning efficiency.
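One way to read that multi-classification objective is as tail-entity prediction posed as classification over the entire entity vocabulary. The sketch below substitutes a plain embedding scorer for the paper's iGT encoder; all sizes and the random data are illustrative.

```python
import torch
import torch.nn as nn

NUM_ENTITIES, NUM_RELATIONS, DIM = 1000, 20, 64

# Stand-in encoder: the paper's iGT would produce the query representation
# from the (head, relation) subgraph; a simple embedding sum suffices here.
ent = nn.Embedding(NUM_ENTITIES, DIM)
rel = nn.Embedding(NUM_RELATIONS, DIM)

def score_all_entities(heads, relations):
    q = ent(heads) + rel(relations)  # (B, DIM) query from head and relation
    return q @ ent.weight.T          # (B, NUM_ENTITIES) logits, one per entity

# Multi-classification objective: every entity in the KG is a class,
# and the true tail is the target label.
heads = torch.randint(0, NUM_ENTITIES, (32,))
rels = torch.randint(0, NUM_RELATIONS, (32,))
tails = torch.randint(0, NUM_ENTITIES, (32,))
loss = nn.functional.cross_entropy(score_all_entities(heads, rels), tails)
loss.backward()
print(float(loss))
```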
arXiv Detail & Related papers (2025-02-17T06:02:59Z)
- Boosting Knowledge Graph-based Recommendations through Confidence-Aware Augmentation with Large Language Models [19.28217321004791]
Large Language Models (LLMs) offer a promising way to improve the quality and relevance of Knowledge Graphs for recommendation tasks.
We propose the Confidence-aware KG-based Recommendation Framework with LLM Augmentation (CKG-LLMA), a novel framework that combines KGs and LLMs for recommendation tasks.
The framework includes: (1) an LLM-based subgraph augmenter for enriching KGs with high-quality information, (2) a confidence-aware message propagation mechanism to filter noisy triplets, and (3) a dual-view contrastive learning method to integrate user-item interactions and KG data.
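As a rough illustration of component (2), the sketch below weights each neighbor's message by a per-triplet confidence and drops triplets below a threshold. The toy graph and scores are invented; CKG-LLMA derives its confidences and propagation rule differently.

```python
import numpy as np

# Hypothetical toy graph: entity embeddings and confidence-scored triplets.
emb = {e: np.ones(4) * i for i, e in enumerate(["u", "v", "w"], start=1)}
triplets = [("u", "v", 0.9), ("u", "w", 0.2)]  # (head, tail, confidence)

def propagate(entity, emb, triplets, threshold=0.5):
    """Confidence-aware propagation: neighbor messages are scaled by their
    confidence, and triplets below the threshold are filtered out as noise."""
    msgs = [c * emb[t] for h, t, c in triplets if h == entity and c >= threshold]
    return emb[entity] + np.mean(msgs, axis=0) if msgs else emb[entity]

print(propagate("u", emb, triplets))  # only the 0.9-confidence edge contributes
```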
arXiv Detail & Related papers (2025-02-06T02:06:48Z)
- Leveraging Large Language Models for Semantic Query Processing in a Scholarly Knowledge Graph [1.7418328181959968]
The proposed research aims to develop an innovative semantic query processing system.
It enables users to obtain comprehensive information about research works produced by Computer Science (CS) researchers at the Australian National University.
arXiv Detail & Related papers (2024-05-24T09:19:45Z)
- Generate-on-Graph: Treat LLM as both Agent and KG in Incomplete Knowledge Graph Question Answering [87.67177556994525]
We propose a training-free method called Generate-on-Graph (GoG) to generate new factual triples while exploring Knowledge Graphs (KGs).
GoG performs reasoning through a Thinking-Searching-Generating framework, which treats the LLM as both agent and KG in incomplete KG question answering (IKGQA).
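A toy version of that loop, with the KG, questions, and `llm_generate` stub all invented, might look like this; GoG itself drives the loop with a real LLM acting as both agent and KG.

```python
# Minimal sketch of a Thinking-Searching-Generating loop over an incomplete KG.
KG = {("Paris", "capital_of"): "France"}

def search(head, relation):
    """Searching: look the fact up in the (possibly incomplete) KG."""
    return KG.get((head, relation))

def llm_generate(head, relation):
    """Generating: fall back to the LLM's internal knowledge to propose a new
    factual triple when the KG has a gap (stubbed with a lookup table here)."""
    internal = {("Berlin", "capital_of"): "Germany"}
    return internal.get((head, relation))

def answer(head, relation):
    # Thinking: prefer retrieved facts; generate only when retrieval fails.
    return search(head, relation) or llm_generate(head, relation)

print(answer("Paris", "capital_of"))   # found in the KG
print(answer("Berlin", "capital_of"))  # generated to fill the gap
```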
arXiv Detail & Related papers (2024-04-23T04:47:22Z)
- Contextualization Distillation from Large Language Model for Knowledge Graph Completion [51.126166442122546]
We introduce the Contextualization Distillation strategy, a plug-in-and-play approach compatible with both discriminative and generative KGC frameworks.
Our method begins by instructing large language models to transform compact, structural triplets into context-rich segments.
Comprehensive evaluations across diverse datasets and KGC techniques highlight the efficacy and adaptability of our approach.
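The first step can be pictured as a prompt transformation. The wording below is an invented stand-in for the paper's actual instructions.

```python
# Sketch of the triplet-to-context step: instruct an LLM to expand a compact
# triplet into a context-rich passage, which then serves as an auxiliary
# training signal for a KGC model.
def contextualize_prompt(head, relation, tail):
    return (f"Given the knowledge triplet ({head}, {relation}, {tail}), "
            "write a short descriptive passage that explains this fact "
            "with relevant background context.")

print(contextualize_prompt("Marie Curie", "award_received", "Nobel Prize in Physics"))
```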
arXiv Detail & Related papers (2024-01-28T08:56:49Z)
- FedMKGC: Privacy-Preserving Federated Multilingual Knowledge Graph Completion [21.4302940596294]
Knowledge graph completion (KGC) aims to predict missing facts in knowledge graphs (KGs).
Previous methods that rely on transferring raw data among KGs raise privacy concerns.
We propose a new federated learning framework that implicitly aggregates knowledge from multiple KGs without demanding raw data exchange and entity alignment.
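As a loose analogy for aggregation without raw data exchange, the sketch below runs plain federated averaging over client model weights; the noise-based "training" and the averaging rule are stand-ins, since FedMKGC's implicit aggregation scheme is more involved.

```python
import numpy as np

def local_update(global_weights, rng):
    # Fake local training: each client perturbs the global model using only
    # its own (private) data; raw triples never leave the client.
    return global_weights + rng.normal(scale=0.1, size=global_weights.shape)

rng = np.random.default_rng(0)
global_weights = np.zeros(8)
for _ in range(3):
    client_weights = [local_update(global_weights, rng) for _ in range(4)]
    # The server aggregates knowledge by averaging weights only.
    global_weights = np.mean(client_weights, axis=0)
print(global_weights)
```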
arXiv Detail & Related papers (2023-12-17T08:09:27Z)
- On the Sweet Spot of Contrastive Views for Knowledge-enhanced Recommendation [49.18304766331156]
We propose a new contrastive learning framework for KG-enhanced recommendation.
We construct two separate contrastive views for the KG and the user-item interaction graph (IG), and maximize their mutual information.
Extensive experimental results on three real-world datasets demonstrate the effectiveness and efficiency of our method.
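A standard way to maximize agreement between two such views is an InfoNCE objective over paired item representations, sketched below with random tensors in place of the KG and IG encoders; the paper's exact loss may differ.

```python
import torch
import torch.nn.functional as F

def info_nce(view_kg, view_ig, temperature=0.2):
    """InfoNCE between paired representations from a KG view and an
    interaction-graph (IG) view: matching rows are positives, the rest of
    the batch are negatives. Maximizing this agreement is a standard proxy
    for maximizing mutual information between the two views."""
    z1 = F.normalize(view_kg, dim=1)
    z2 = F.normalize(view_ig, dim=1)
    logits = z1 @ z2.T / temperature
    labels = torch.arange(z1.size(0))
    return F.cross_entropy(logits, labels)

# Hypothetical encoder outputs for a batch of 16 items.
loss = info_nce(torch.randn(16, 32), torch.randn(16, 32))
print(float(loss))
```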
arXiv Detail & Related papers (2023-09-23T14:05:55Z)
- Unifying Large Language Models and Knowledge Graphs: A Roadmap [61.824618473293725]
Large language models (LLMs) are making new waves in the field of natural language processing and artificial intelligence.
Knowledge Graphs (KGs), such as Wikipedia and Huapu, are structured knowledge models that explicitly store rich factual knowledge.
arXiv Detail & Related papers (2023-06-14T07:15:26Z)
- Collective Knowledge Graph Completion with Mutual Knowledge Distillation [11.922522192224145]
We study the problem of multi-KG completion, where we focus on maximizing the collective knowledge from different KGs.
We propose a novel method called CKGC-CKD that uses relation-aware graph convolutional network encoder models on both individual KGs and a large fused KG.
Experimental results on multilingual datasets have shown that our method outperforms all state-of-the-art models in the KGC task.
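Mutual knowledge distillation between the individual-KG and fused-KG models can be sketched generically as two softened KL terms pulling the models toward each other; the loss below is a common formulation, not necessarily the paper's exact one.

```python
import torch
import torch.nn.functional as F

def mutual_distillation(logits_individual, logits_fused, temperature=2.0):
    """Mutual distillation: each model softens its entity predictions and
    the two KL terms push the individual-KG and fused-KG models toward
    each other's distributions."""
    p_ind = F.log_softmax(logits_individual / temperature, dim=1)
    p_fus = F.log_softmax(logits_fused / temperature, dim=1)
    kl_a = F.kl_div(p_ind, p_fus.exp(), reduction="batchmean")
    kl_b = F.kl_div(p_fus, p_ind.exp(), reduction="batchmean")
    return kl_a + kl_b

# Hypothetical prediction logits over 100 entities from the two models.
loss = mutual_distillation(torch.randn(8, 100), torch.randn(8, 100))
print(float(loss))
```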
arXiv Detail & Related papers (2023-05-25T09:49:40Z)
- BertNet: Harvesting Knowledge Graphs with Arbitrary Relations from Pretrained Language Models [65.51390418485207]
We propose a new approach of harvesting massive KGs of arbitrary relations from pretrained LMs.
With minimal input of a relation definition, the approach efficiently searches the vast entity-pair space to extract diverse and accurate knowledge.
We deploy the approach to harvest KGs of over 400 new relations from different LMs.
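The search can be caricatured as filling a relation template with candidate entity pairs and keeping the ones the LM scores as plausible. The template, candidates, and `lm_score` stub below are invented; BertNet searches a far larger space with a real pretrained LM.

```python
# Sketch of relation harvesting from an LM.
def lm_score(sentence):
    """Stand-in for a pretrained LM's plausibility score for a sentence."""
    plausible = {"Paris is the capital of France.",
                 "Tokyo is the capital of Japan."}
    return 1.0 if sentence in plausible else 0.1

def harvest(relation_template, candidates, threshold=0.5):
    """Fill the relation prompt with each candidate pair and keep the pairs
    the LM finds plausible, yielding KG triples for an arbitrary relation."""
    return [(head, tail) for head, tail in candidates
            if lm_score(relation_template.format(head=head, tail=tail)) >= threshold]

pairs = [("Paris", "France"), ("Tokyo", "Japan"), ("Lyon", "Japan")]
print(harvest("{head} is the capital of {tail}.", pairs))
```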
arXiv Detail & Related papers (2022-06-28T19:46:29Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.