CORONA: A Coarse-to-Fine Framework for Graph-based Recommendation with Large Language Models
- URL: http://arxiv.org/abs/2506.17281v1
- Date: Sat, 14 Jun 2025 08:20:15 GMT
- Title: CORONA: A Coarse-to-Fine Framework for Graph-based Recommendation with Large Language Models
- Authors: Junze Chen, Xinjie Yang, Cheng Yang, Junfei Bao, Zeyuan Guo, Yawen Li, Chuan Shi
- Abstract summary: Large language models (LLMs) have shown strong capabilities across domains. We propose to leverage LLMs' reasoning abilities during the candidate filtering process. We introduce Chain Of Retrieval ON grAphs (CORONA) to progressively narrow down the range of candidate items.
- Score: 37.31002764910533
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recommender systems (RSs) are designed to retrieve candidate items a user might be interested in from a large pool. A common approach is using graph neural networks (GNNs) to capture high-order interaction relationships. As large language models (LLMs) have shown strong capabilities across domains, researchers are exploring their use to enhance recommendation. However, prior work limits LLMs to re-ranking results or dataset augmentation, failing to utilize their power during candidate filtering, which may lead to suboptimal performance. Instead, we propose to leverage LLMs' reasoning abilities during the candidate filtering process, and introduce Chain Of Retrieval ON grAphs (CORONA) to progressively narrow down the range of candidate items on interaction graphs with the help of LLMs: (1) First, the LLM performs preference reasoning based on the user profile, with the response serving as a query to extract relevant users and items from the interaction graph as preference-assisted retrieval; (2) Then, using the information retrieved in the previous step along with the purchase history of the target user, the LLM conducts intent reasoning to help refine an even smaller interaction subgraph as intent-assisted retrieval; (3) Finally, we employ a GNN to capture high-order collaborative filtering information from the extracted subgraph, performing GNN-enhanced retrieval to generate the final recommendation results. The proposed framework leverages the reasoning capabilities of LLMs during the retrieval process, while seamlessly integrating GNNs to enhance overall recommendation performance. Extensive experiments on various datasets and settings demonstrate that our proposed CORONA achieves state-of-the-art performance with an 18.6% relative improvement in recall and an 18.4% relative improvement in NDCG on average.
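The three-stage coarse-to-fine flow described in the abstract can be sketched in a few lines of Python. This is a toy illustration, not the paper's implementation: "LLM reasoning" is mocked as keyword matching against item tags, and the "GNN" as a two-hop co-interaction score; all function and variable names are invented for this sketch.

```python
from typing import Dict, List, Set

def preference_retrieval(profile_kw: Set[str],
                         item_tags: Dict[str, Set[str]]) -> Set[str]:
    """Stage 1 (coarse): keep items overlapping the preference query."""
    return {i for i, tags in item_tags.items() if tags & profile_kw}

def intent_retrieval(candidates: Set[str], history: List[str],
                     item_tags: Dict[str, Set[str]]) -> Set[str]:
    """Stage 2 (refine): keep items sharing tags with recent purchases."""
    recent = set().union(*(item_tags[i] for i in history))
    return {i for i in candidates if item_tags[i] & recent}

def gnn_retrieval(candidates: Set[str], user_items: Dict[str, Set[str]],
                  target: str, k: int = 2) -> List[str]:
    """Stage 3 (rank): score unseen candidates by a 2-hop collaborative
    signal -- how much the users who liked them overlap with the target."""
    unseen = candidates - user_items[target]
    def score(i: str) -> int:
        return sum(len(user_items[u] & user_items[target])
                   for u in user_items if u != target and i in user_items[u])
    return sorted(unseen, key=score, reverse=True)[:k]

# Toy data
item_tags = {"i1": {"scifi"}, "i2": {"scifi", "space"},
             "i3": {"romance"}, "i4": {"space"}}
user_items = {"alice": {"i4"}, "bob": {"i2", "i4"}, "carol": {"i2"}}

s1 = preference_retrieval({"scifi", "space"}, item_tags)       # coarse pool
s2 = intent_retrieval(s1, history=["i4"], item_tags=item_tags)  # narrowed
recs = gnn_retrieval(s2, user_items, target="alice")            # final ranking
```

Each stage shrinks the candidate pool (here 4 → 3 → 2 → 1), which is the point of the chain: the expensive reasoning steps only ever see a progressively smaller subgraph.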
Related papers
- End-to-End Personalization: Unifying Recommender Systems with Large Language Models [0.0]
We propose a novel hybrid recommendation framework that combines Graph Attention Networks (GATs) with Large Language Models (LLMs). LLMs are first used to enrich user and item representations by generating semantically meaningful profiles based on metadata such as titles, genres, and overviews. We evaluate our model on benchmark datasets, including MovieLens 100k and 1M, where it consistently outperforms strong baselines.
arXiv Detail & Related papers (2025-08-02T22:46:50Z) - Distilling a Small Utility-Based Passage Selector to Enhance Retrieval-Augmented Generation [77.07879255360342]
Retrieval-augmented generation (RAG) enhances large language models (LLMs) by incorporating retrieved information. In RAG, the emphasis has shifted to utility, which considers the usefulness of passages for generating accurate answers. Our approach focuses on utility-based selection rather than ranking, enabling dynamic passage selection tailored to specific queries without the need for fixed thresholds. Our experiments demonstrate that utility-based selection provides a flexible and cost-effective solution for RAG, significantly reducing computational costs while improving answer quality.
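The distinction between fixed-threshold ranking and dynamic utility-based selection can be made concrete with a small sketch. This is an illustrative heuristic only, not the paper's distilled selector: it keeps every passage whose utility is within a relative margin of the best one, so the number selected adapts per query instead of being a fixed top-k.

```python
from typing import Dict, List

def select_by_utility(scores: Dict[str, float],
                      rel_cut: float = 0.5) -> List[str]:
    """Keep passages scoring at least rel_cut * best, sorted by utility.
    The selection size varies with the score distribution of each query."""
    if not scores:
        return []
    best = max(scores.values())
    keep = [p for p, s in scores.items() if s >= rel_cut * best]
    return sorted(keep, key=lambda p: -scores[p])

# One query may yield several useful passages, another only one:
q1 = select_by_utility({"p1": 0.9, "p2": 0.8, "p3": 0.2})
q2 = select_by_utility({"p1": 0.9, "p2": 0.1, "p3": 0.2})
```

A fixed top-2 would pass a near-useless passage to the generator for the second query; the utility cut avoids that at no extra cost.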
arXiv Detail & Related papers (2025-07-25T09:32:29Z) - KERAG_R: Knowledge-Enhanced Retrieval-Augmented Generation for Recommendation [8.64897967325355]
Large Language Models (LLMs) have shown strong potential in recommender systems due to their contextual learning and generalisation capabilities. We propose a novel model called Knowledge-Enhanced Retrieval-Augmented Generation for Recommendation (KERAG_R). Specifically, we leverage a graph retrieval-augmented generation (GraphRAG) component to integrate additional information from a knowledge graph into instructions. Our experiments on three public datasets show that our proposed KERAG_R model significantly outperforms ten existing state-of-the-art recommendation methods.
arXiv Detail & Related papers (2025-07-08T10:44:27Z) - DeepRec: Towards a Deep Dive Into the Item Space with Large Language Model Based Recommendation [83.21140655248624]
Large language models (LLMs) have been introduced into recommender systems (RSs). We propose DeepRec, a novel LLM-based RS that enables autonomous multi-turn interactions between LLMs and traditional recommendation models (TRMs) for deep exploration of the item space. Experiments on public datasets demonstrate that DeepRec significantly outperforms both traditional and LLM-based baselines.
arXiv Detail & Related papers (2025-05-22T15:49:38Z) - LLM-Augmented Graph Neural Recommenders: Integrating User Reviews [2.087411180679868]
We propose a framework that employs a Graph Neural Network (GNN)-based model and a large language model (LLM) to produce review-aware representations. Our approach balances user-item interactions against text-derived features, ensuring that both the behavioral and the linguistic signals of users are effectively captured.
arXiv Detail & Related papers (2025-04-03T00:40:09Z) - Training Large Recommendation Models via Graph-Language Token Alignment [53.3142545812349]
We propose a novel framework to train Large Recommendation models via Graph-Language Token Alignment (GLTA). By aligning item and user nodes from the interaction graph with pretrained LLM tokens, GLTA effectively leverages the reasoning abilities of LLMs. Furthermore, we introduce Graph-Language Logits Matching (GLLM) to optimize token alignment for end-to-end item prediction.
arXiv Detail & Related papers (2025-02-26T02:19:10Z) - RecLM: Recommendation Instruction Tuning [17.780484832381994]
We propose a model-agnostic recommendation instruction-tuning paradigm that seamlessly integrates large language models with collaborative filtering. Our proposed RecLM enhances the capture of user preference diversity through a carefully designed reinforcement learning reward function.
arXiv Detail & Related papers (2024-12-26T17:51:54Z) - All Against Some: Efficient Integration of Large Language Models for Message Passing in Graph Neural Networks [51.19110891434727]
Large Language Models (LLMs) with pretrained knowledge and powerful semantic comprehension abilities have recently shown a remarkable ability to benefit applications using vision and text data.
E-LLaGNN is a framework with an on-demand LLM service that enriches message passing procedure of graph learning by enhancing a limited fraction of nodes from the graph.
arXiv Detail & Related papers (2024-07-20T22:09:42Z) - Collaboration-Aware Graph Convolutional Networks for Recommendation Systems [14.893579746643814]
Graph Neural Networks (GNNs) have been successfully adopted in recommendation systems.
Message-passing implicitly injects collaborative effect into the embedding process.
No study has comprehensively scrutinized how message-passing captures collaborative effect.
We propose a recommendation-tailored GNN, Augmented Collaboration-Aware Graph Convolutional Network (CAGCN*).
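The claim that message passing implicitly injects a collaborative effect can be seen in a toy one-hop propagation (illustrative Python, not the CAGCN* implementation): users who interact with the same items receive identical aggregated embeddings, so similarity emerges without any explicit similarity computation.

```python
from typing import Dict, List

def propagate(user_items: Dict[str, List[str]],
              item_emb: Dict[str, List[float]]) -> Dict[str, List[float]]:
    """One round of message passing: each user's embedding becomes the
    mean of its interacted items' embeddings."""
    dims = len(next(iter(item_emb.values())))
    return {u: [sum(item_emb[i][d] for i in items) / len(items)
                for d in range(dims)]
            for u, items in user_items.items()}

item_emb = {"i1": [1.0, 0.0], "i2": [0.0, 1.0]}
users = {"a": ["i1", "i2"], "b": ["i1", "i2"], "c": ["i2"]}
emb = propagate(users, item_emb)
# a and b share the same interactions, so their propagated embeddings match
```

Stacking more rounds spreads this effect to higher-order neighbors, which is the collaborative signal the entry above says had not been comprehensively scrutinized.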
arXiv Detail & Related papers (2022-07-03T18:03:46Z) - Broad Recommender System: An Efficient Nonlinear Collaborative Filtering Approach [56.12815715932561]
We propose a new broad recommender system called Broad Collaborative Filtering (BroadCF).
Instead of Deep Neural Networks (DNNs), Broad Learning System (BLS) is used as a mapping function to learn the complex nonlinear relationships between users and items.
Extensive experiments conducted on seven benchmark datasets have confirmed the effectiveness of the proposed BroadCF algorithm.
arXiv Detail & Related papers (2022-04-20T01:25:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.