Enhancing LLM-based Recommendation with Preference Hint Discovery from Knowledge Graph
- URL: http://arxiv.org/abs/2601.18096v1
- Date: Mon, 26 Jan 2026 03:20:42 GMT
- Title: Enhancing LLM-based Recommendation with Preference Hint Discovery from Knowledge Graph
- Authors: Yuting Zhang, Ziliang Pei, Chao Wang, Ying Sun, Fuzhen Zhuang,
- Abstract summary: We propose a preference hint discovery model based on the interaction-integrated knowledge graph. We develop an instance-wise dual-attention mechanism to quantify the preference credibility of candidate attributes.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: LLMs have garnered substantial attention in recommendation systems, yet they fall short of traditional recommenders at capturing complex preference patterns. Recent works have tried integrating traditional recommendation embeddings into LLMs to resolve this issue, but a core gap persists between their continuous embedding and discrete semantic spaces. Intuitively, textual attributes derived from interactions can serve as critical preference rationales for LLMs' recommendation logic. However, directly inputting such attribute knowledge presents two core challenges: (1) sparse interactions are insufficient to reveal preference hints for unseen items; (2) treating all attributes as hints introduces substantial noise. To this end, we propose a preference hint discovery model based on an interaction-integrated knowledge graph to enhance LLM-based recommendation. It applies traditional recommendation principles to selectively extract crucial attributes as hints. Specifically, we design a collaborative preference hint extraction schema that uses semantic knowledge from similar users' explicit interactions as hints for unseen items. Furthermore, we develop an instance-wise dual-attention mechanism that quantifies the preference credibility of candidate attributes, identifying hints specific to each unseen item. Using these item- and user-based hints, we adopt a flattened hint organization method to shorten the input length and feed the textual hint information to the LLM for commonsense reasoning. Extensive experiments on both pair-wise and list-wise recommendation tasks verify the effectiveness of the proposed framework, which achieves an average relative improvement of over 3.02% against baselines.
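The abstract's pipeline (score candidate attributes with a dual attention over user and item views, keep the most credible ones as hints, then flatten them into a compact prompt line) can be sketched as a toy example. This is an illustrative sketch only, not the authors' implementation: all names (`score_attributes`, `select_hints`, `flatten_hints`) and the random vectors are hypothetical.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def score_attributes(user_vec, item_vec, attr_vecs):
    """Dual attention: weight each candidate attribute from both the
    user view and the item view, then combine the two credibilities."""
    user_att = softmax(attr_vecs @ user_vec)  # user-side credibility
    item_att = softmax(attr_vecs @ item_vec)  # item-side credibility
    return user_att * item_att                # instance-wise combination

def select_hints(attributes, scores, top_k=2):
    """Keep only the top-k most credible attributes as textual hints."""
    order = np.argsort(scores)[::-1][:top_k]
    return [attributes[i] for i in order]

def flatten_hints(item_hints, user_hints):
    """Flatten item- and user-based hints into one compact line,
    shortening the LLM input."""
    return ("item hints: " + "; ".join(item_hints)
            + " | user hints: " + "; ".join(user_hints))

rng = np.random.default_rng(0)
attrs = ["director: Nolan", "genre: sci-fi", "runtime: 148 min", "year: 2010"]
user_vec, item_vec = rng.normal(size=8), rng.normal(size=8)
attr_vecs = rng.normal(size=(4, 8))

scores = score_attributes(user_vec, item_vec, attr_vecs)
hints = select_hints(attrs, scores)
flat = flatten_hints(hints, ["similar users liked sci-fi films"])
print(flat)
```

In the paper's actual framework the attribute embeddings come from the interaction-integrated knowledge graph and the attention is learned; here random vectors merely show the shape of the computation.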
Related papers
- MR.Rec: Synergizing Memory and Reasoning for Personalized Recommendation Assistant with LLMs
  MR.Rec is a novel framework that synergizes memory and reasoning for Large Language Model (LLM)-based recommendation. To achieve personalization, it develops a comprehensive Retrieval-Augmented Generation (RAG) system that efficiently indexes and retrieves relevant external memory. By combining dynamic memory retrieval with adaptive reasoning, the approach yields more accurate, context-aware, and highly personalized recommendations.
  arXiv Detail & Related papers (2025-10-16T12:40:48Z)
- AgentDR: Dynamic Recommendation with Implicit Item-Item Relations via LLM-based Agents
  We propose a novel LLM-agent framework, AgentDR, which bridges LLM reasoning with scalable recommendation tools. The approach delegates full-ranking tasks to traditional models while using LLMs to integrate multiple recommendation outputs. The framework achieves superior full-ranking performance, yielding on average a twofold improvement over its underlying tools.
  arXiv Detail & Related papers (2025-10-07T05:48:05Z)
- Towards Comprehensible Recommendation with Large Language Model Fine-tuning
  We propose a novel Content Understanding from a Collaborative Perspective framework (CURec) for recommendation systems. CURec generates collaborative-aligned content features for more comprehensive recommendations. Experiments on public benchmarks demonstrate the superiority of CURec over existing methods.
  arXiv Detail & Related papers (2025-08-11T03:55:31Z)
- KERAG_R: Knowledge-Enhanced Retrieval-Augmented Generation for Recommendation
  Large Language Models (LLMs) have shown strong potential in recommender systems due to their contextual learning and generalisation capabilities. We propose a novel model called Knowledge-Enhanced Retrieval-Augmented Generation for Recommendation (KERAG_R), which leverages a graph retrieval-augmented generation (GraphRAG) component to integrate additional information from a knowledge graph into instructions. Experiments on three public datasets show that KERAG_R significantly outperforms ten existing state-of-the-art recommendation methods.
  arXiv Detail & Related papers (2025-07-08T10:44:27Z)
- R$^2$ec: Towards Large Recommender Models with Reasoning
  We propose R$^2$ec, a unified large recommender model with intrinsic reasoning capability. R$^2$ec introduces a dual-head architecture that supports both reasoning-chain generation and efficient item prediction in a single model. To overcome the lack of annotated reasoning data, we design RecPO, a reinforcement learning framework.
  arXiv Detail & Related papers (2025-05-22T17:55:43Z)
- Graph Retrieval-Augmented LLM for Conversational Recommendation Systems
  G-CRS (Graph Retrieval-Augmented Large Language Model for Conversational Recommender Systems) is a training-free framework that combines graph retrieval-augmented generation and in-context learning. G-CRS achieves superior recommendation performance compared to existing methods without requiring task-specific training.
  arXiv Detail & Related papers (2025-03-09T03:56:22Z)
- EAGER-LLM: Enhancing Large Language Models as Recommenders through Exogenous Behavior-Semantic Integration
  Large language models (LLMs) are increasingly leveraged as foundational backbones in advanced recommender systems. LLMs are pre-trained on linguistic semantics but must learn collaborative semantics from scratch through the LLM backbone. We propose EAGER-LLM, a decoder-only generative recommendation framework that integrates endogenous and exogenous behavioral and semantic information in a non-intrusive manner.
  arXiv Detail & Related papers (2025-02-20T17:01:57Z)
- Improving LLM-powered Recommendations with Personalized Information
  We propose a pipeline called CoT-Rec, which integrates two key Chain-of-Thought processes into LLM-powered recommendations. CoT-Rec consists of two stages: (1) personalized information extraction and (2) personalized information utilization. Experimental results demonstrate that CoT-Rec shows potential for improving LLM-powered recommendations.
  arXiv Detail & Related papers (2025-02-19T16:08:17Z)
- Reasoning over User Preferences: Knowledge Graph-Augmented LLMs for Explainable Conversational Recommendations
  Conversational Recommender Systems (CRSs) aim to provide personalized recommendations by capturing user preferences through interactive dialogues. Current CRSs often leverage knowledge graphs (KGs) or language models to extract and represent user preferences as latent vectors, which limits their explainability. We propose a plug-and-play framework that synergizes LLMs and KGs to reason over user preferences, enhancing the performance and explainability of existing CRSs.
  arXiv Detail & Related papers (2024-11-16T11:47:21Z)
- Laser: Parameter-Efficient LLM Bi-Tuning for Sequential Recommendation with Collaborative Information
  We propose a parameter-efficient Large Language Model Bi-Tuning framework for sequential recommendation with collaborative information (Laser). In Laser, the prefix is used to incorporate user-item collaborative information and adapt the LLM to the recommendation task, while the suffix converts the output embeddings of the LLM from the language space to the recommendation space for the follow-up item recommendation. M-Former is a lightweight MoE-based querying transformer that uses a set of query experts to integrate diverse user-specific collaborative information encoded by frozen ID-based sequential recommender systems.
  arXiv Detail & Related papers (2024-09-03T04:55:03Z)
- Spread Preference Annotation: Direct Preference Judgment for Efficient LLM Alignment
  We propose a new framework that boosts the alignment of large language models with human preferences. Our key idea is leveraging the human prior knowledge within the small (seed) data. We introduce a noise-aware preference learning algorithm to mitigate the risk of low quality within the generated preference data.
  arXiv Detail & Related papers (2024-06-06T18:01:02Z)
- Relative Preference Optimization: Enhancing LLM Alignment through Contrasting Responses across Identical and Diverse Prompts
  Relative Preference Optimization (RPO) is designed to discern between more and less preferred responses derived from both identical and related prompts. RPO has demonstrated a superior ability to align large language models with user preferences and to improve their adaptability during training.
  arXiv Detail & Related papers (2024-02-12T22:47:57Z)