Enriching Semantic Profiles into Knowledge Graph for Recommender Systems Using Large Language Models
- URL: http://arxiv.org/abs/2601.08148v1
- Date: Tue, 13 Jan 2026 02:23:32 GMT
- Title: Enriching Semantic Profiles into Knowledge Graph for Recommender Systems Using Large Language Models
- Authors: Seokho Ahn, Sungbok Shin, Young-Duk Seo
- Abstract summary: We argue that large language models (LLMs) are effective at extracting compressed rationales from diverse knowledge sources, while knowledge graphs (KGs) are better suited for propagating these profiles to extend their reach. Building on this insight, we propose a new recommendation model, called SPiKE. In experiments, we demonstrate that SPiKE consistently outperforms state-of-the-art KG- and LLM-based recommenders in real-world settings.
- Score: 7.054093620465398
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Rich and informative profiling to capture user preferences is essential for improving recommendation quality. However, there is still no consensus on how best to construct and utilize such profiles. To address this, we revisit recent profiling-based approaches in recommender systems along four dimensions: 1) knowledge base, 2) preference indicator, 3) impact range, and 4) subject. We argue that large language models (LLMs) are effective at extracting compressed rationales from diverse knowledge sources, while knowledge graphs (KGs) are better suited for propagating these profiles to extend their reach. Building on this insight, we propose a new recommendation model, called SPiKE. SPiKE consists of three core components: i) Entity profile generation, which uses LLMs to generate semantic profiles for all KG entities; ii) Profile-aware KG aggregation, which integrates these profiles into the KG; and iii) Pairwise profile preference matching, which aligns LLM- and KG-based representations during training. In experiments, we demonstrate that SPiKE consistently outperforms state-of-the-art KG- and LLM-based recommenders in real-world settings.
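The three components described in the abstract can be sketched in a minimal, hypothetical form. Nothing below comes from the paper itself: the KG structure, embedding dimensions, similarity-weighted aggregation, and BPR-style matching loss are all assumptions, and the LLM profile-generation step is stood in by fixed embeddings.

```python
import numpy as np

# Toy KG: entity -> neighbor list (hypothetical; the paper's KG schema is not
# given in the abstract).
neighbors = {0: [1, 2], 1: [0], 2: [0, 1]}

# i) Entity profile generation: in SPiKE an LLM produces semantic profiles for
# all KG entities; here we substitute fixed random "profile embeddings".
rng = np.random.default_rng(0)
profile_emb = rng.normal(size=(3, 4))  # 3 entities, 4-dim profiles

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

# ii) Profile-aware KG aggregation: weight each neighbor's message by its
# profile similarity to the target entity (one plausible reading of
# "integrating profiles into the KG"; the paper's exact operator may differ).
def aggregate(entity_id):
    nbrs = neighbors[entity_id]
    sims = np.array([cosine(profile_emb[entity_id], profile_emb[n]) for n in nbrs])
    weights = np.exp(sims) / np.exp(sims).sum()  # softmax over neighbors
    return (weights[:, None] * profile_emb[nbrs]).sum(axis=0)

# iii) Pairwise profile preference matching: align the LLM-side profile with
# the KG-side aggregate while pushing a mismatched (negative) pair apart.
def pairwise_matching_loss(entity_id, neg_id):
    kg_repr = aggregate(entity_id)
    pos = cosine(profile_emb[entity_id], kg_repr)
    neg = cosine(profile_emb[neg_id], kg_repr)
    return float(np.log1p(np.exp(-(pos - neg))))  # BPR-style softplus loss

loss = pairwise_matching_loss(0, neg_id=2)
```

A real implementation would replace the random profiles with LLM-generated text encoded by a text encoder and run the aggregation inside a trainable GNN; this sketch only shows how the three stages compose.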
Related papers
- LettinGo: Explore User Profile Generation for Recommendation System [45.40232561275015]
We introduce LettinGo, a novel framework for generating diverse and adaptive user profiles. Our framework significantly enhances recommendation accuracy, flexibility, and contextual awareness.
arXiv Detail & Related papers (2025-06-23T05:51:52Z)
- AIR: A Systematic Analysis of Annotations, Instructions, and Response Pairs in Preference Dataset [89.37514696019484]
Preference learning is critical for aligning large language models with human values. Our work shifts preference dataset design from ad hoc scaling to component-aware optimization.
arXiv Detail & Related papers (2025-04-04T17:33:07Z)
- Boosting Knowledge Graph-based Recommendations through Confidence-Aware Augmentation with Large Language Models [19.28217321004791]
Large Language Models (LLMs) offer a promising way to improve the quality and relevance of Knowledge Graphs for recommendation tasks. We propose the Confidence-aware KG-based Recommendation Framework with LLM Augmentation (CKG-LLMA), a novel framework that combines KGs and LLMs for recommendation tasks. The framework includes: (1) an LLM-based subgraph augmenter for enriching KGs with high-quality information, (2) a confidence-aware message propagation mechanism to filter noisy triplets, and (3) a dual-view contrastive learning method to integrate user-item interactions and KG data.
arXiv Detail & Related papers (2025-02-06T02:06:48Z)
- Reasoning over User Preferences: Knowledge Graph-Augmented LLMs for Explainable Conversational Recommendations [58.61021630938566]
Conversational Recommender Systems (CRSs) aim to provide personalized recommendations by capturing user preferences through interactive dialogues. Current CRSs often leverage knowledge graphs (KGs) or language models to extract and represent user preferences as latent vectors, which limits their explainability. We propose a plug-and-play framework that synergizes LLMs and KGs to reason over user preferences, enhancing the performance and explainability of existing CRSs.
arXiv Detail & Related papers (2024-11-16T11:47:21Z)
- Knowledge Graph Enhanced Language Agents for Recommendation [29.20755410537792]
We propose a framework that unifies language agents and Knowledge Graphs for recommendation systems. In the simulated recommendation scenario, we position the user and item within the KG and integrate KG paths as natural language descriptions. This allows language agents to interact with each other and discover sufficient rationale behind their interactions.
arXiv Detail & Related papers (2024-10-25T15:25:36Z)
- A Prompting-Based Representation Learning Method for Recommendation with Large Language Models [2.1161973970603998]
We introduce the Prompting-Based Representation Learning Method for Recommendation (P4R) to boost the linguistic abilities of Large Language Models (LLMs) in Recommender Systems.
In our P4R framework, we utilize the LLM prompting strategy to create personalized item profiles.
In our evaluation, we compare P4R with state-of-the-art Recommender models and assess the quality of prompt-based profile generation.
arXiv Detail & Related papers (2024-09-25T07:06:14Z)
- Bridging LLMs and KGs without Fine-Tuning: Intermediate Probing Meets Subgraph-Aware Entity Descriptions [49.36683223327633]
Large Language Models (LLMs) encapsulate extensive world knowledge and exhibit powerful context modeling capabilities. We propose a novel framework that synergizes the strengths of LLMs with robust knowledge representation to enable effective and efficient KGC. We achieve a 47% relative improvement over previous methods based on non-fine-tuned LLMs and, to our knowledge, are the first to achieve classification performance comparable to fine-tuned LLMs.
arXiv Detail & Related papers (2024-08-13T10:15:55Z)
- Spread Preference Annotation: Direct Preference Judgment for Efficient LLM Alignment [72.99676237703099]
We propose a new framework that boosts the alignment of large language models with human preferences. Our key idea is leveraging the human prior knowledge within the small (seed) data. We introduce a noise-aware preference learning algorithm to mitigate the risk of low-quality generated preference data.
arXiv Detail & Related papers (2024-06-06T18:01:02Z)
- On the Sweet Spot of Contrastive Views for Knowledge-enhanced Recommendation [49.18304766331156]
We propose a new contrastive learning framework for KG-enhanced recommendation.
We construct two separate contrastive views for KG and IG, and maximize their mutual information.
Extensive experimental results on three real-world datasets demonstrate the effectiveness and efficiency of our method.
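Maximizing the mutual information between two views is commonly estimated with an InfoNCE-style contrastive loss. The sketch below is a generic version of that idea, not this paper's exact objective; the encoders, embeddings, and temperature are all assumptions.

```python
import numpy as np

# Two views of the same 5 nodes: a KG view and an interaction-graph (IG) view
# (random stand-ins for the outputs of two view-specific encoders).
rng = np.random.default_rng(1)
kg_view = rng.normal(size=(5, 8))
ig_view = rng.normal(size=(5, 8))

def l2_normalize(x):
    return x / np.linalg.norm(x, axis=1, keepdims=True)

def info_nce(a, b, tau=0.2):
    """InfoNCE lower-bound estimator of mutual information between views."""
    a, b = l2_normalize(a), l2_normalize(b)
    logits = a @ b.T / tau  # (i, j) = similarity of node i in view a to node j in view b
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_probs)))  # positives lie on the diagonal

loss = info_nce(kg_view, ig_view)
```

Minimizing this loss pulls the two views of the same node together while pushing apart views of different nodes, which is one standard way to realize "maximize their mutual information".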
arXiv Detail & Related papers (2023-09-23T14:05:55Z)
- DSKReG: Differentiable Sampling on Knowledge Graph for Recommendation with Relational GNN [59.160401038969795]
We propose differentiable sampling on Knowledge Graph for Recommendation with GNN (DSKReG).
We devise a differentiable sampling strategy, which enables the selection of relevant items to be jointly optimized with the model training procedure.
The experimental results demonstrate that our model outperforms state-of-the-art KG-based recommender systems.
arXiv Detail & Related papers (2021-08-26T16:19:59Z)
- Knowledge-Enhanced Top-K Recommendation in Poincaré Ball [33.90069123451581]
We propose a recommendation model in the hyperbolic space, which facilitates the learning of the hierarchical structure of knowledge graphs.
A hyperbolic attention network is employed to determine the relative importance of the neighboring entities of a given item.
We show that the proposed model outperforms the best existing models by 2-16% in terms of NDCG@K on Top-K recommendation.
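For reference, NDCG@K, the metric quoted above, is the discounted cumulative gain of the ranked list divided by the ideal DCG. A minimal computation with binary relevance (the standard definition, not code from the paper):

```python
import math

def dcg_at_k(rels, k):
    # Gain of each hit, discounted logarithmically by rank (ranks are 0-based).
    return sum(r / math.log2(i + 2) for i, r in enumerate(rels[:k]))

def ndcg_at_k(ranked_rels, k):
    ideal = sorted(ranked_rels, reverse=True)  # best possible ordering
    idcg = dcg_at_k(ideal, k)
    return dcg_at_k(ranked_rels, k) / idcg if idcg > 0 else 0.0

# The single relevant item ranked 2nd of 3 recommendations:
print(round(ndcg_at_k([0, 1, 0], k=3), 4))  # → 0.6309
```

A "2-16% improvement in NDCG@K" therefore means the model places relevant items noticeably higher in the top-K list than the baselines do.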
arXiv Detail & Related papers (2021-01-13T03:16:50Z)
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of the information and is not responsible for any consequences arising from its use.