LLM-Augmented Graph Neural Recommenders: Integrating User Reviews
- URL: http://arxiv.org/abs/2504.02195v1
- Date: Thu, 03 Apr 2025 00:40:09 GMT
- Title: LLM-Augmented Graph Neural Recommenders: Integrating User Reviews
- Authors: Hiroki Kanezashi, Toyotaro Suzumura, Cade Reid, Md Mostafizur Rahman, Yu Hirate,
- Abstract summary: We propose a framework that employs a Graph Neural Network (GNN)-based model and a large language model (LLM) to produce review-aware representations. Our approach balances user-item interactions against text-derived features, ensuring that both users' behavioral and linguistic signals are effectively captured.
- Score: 2.087411180679868
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recommender systems increasingly aim to combine signals from both user reviews and purchase (or other interaction) behaviors. While user-written comments provide explicit insights about preferences, merging these textual representations from large language models (LLMs) with graph-based embeddings of user actions remains a challenging task. In this work, we propose a framework that employs both a Graph Neural Network (GNN)-based model and an LLM to produce review-aware representations, preserving review semantics while mitigating textual noise. Our approach utilizes a hybrid objective that balances user-item interactions against text-derived features, ensuring that both users' behavioral and linguistic signals are effectively captured. We evaluate this method on multiple datasets from diverse application domains, demonstrating consistent improvements over a baseline GNN-based recommender model. Notably, our model achieves significant gains in recommendation accuracy when review data is sparse or unevenly distributed. These findings highlight the importance of integrating LLM-driven textual feedback with GNN-derived user behavioral patterns to develop robust, context-aware recommender systems.
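The abstract describes the hybrid objective only at a high level. Below is a minimal sketch of one way such an objective could look, assuming an embedding-table recommender (graph propagation omitted for brevity), precomputed LLM review embeddings passed in as `review_emb`, and a balance weight `alpha`; the class, parameter names, and exact loss form are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch (not the authors' code): a "hybrid objective" that balances a
# BPR interaction loss against an alignment term tying item embeddings to
# precomputed LLM review embeddings. `review_emb`, `alpha`, and the projection
# layer are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ReviewAwareGNNRec(nn.Module):
    def __init__(self, n_users, n_items, dim=64, llm_dim=768, alpha=0.1):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, dim)
        self.item_emb = nn.Embedding(n_items, dim)
        self.text_proj = nn.Linear(llm_dim, dim)  # maps LLM review embeddings into the recommender space
        self.alpha = alpha                        # weight balancing behavioral vs. text-derived signals
        nn.init.normal_(self.user_emb.weight, std=0.01)
        nn.init.normal_(self.item_emb.weight, std=0.01)

    def hybrid_loss(self, users, pos_items, neg_items, review_emb):
        """BPR loss on interactions plus cosine alignment to text-derived features."""
        u = self.user_emb(users)
        pi = self.item_emb(pos_items)
        ni = self.item_emb(neg_items)
        # Behavioral term: Bayesian Personalized Ranking over (user, positive, negative) triples.
        bpr = -F.logsigmoid((u * pi).sum(-1) - (u * ni).sum(-1)).mean()
        # Textual term: pull positive-item embeddings toward projected LLM review embeddings.
        text = 1.0 - F.cosine_similarity(pi, self.text_proj(review_emb), dim=-1).mean()
        return bpr + self.alpha * text

# Toy usage with random tensors; in practice review_emb comes from an LLM encoder.
model = ReviewAwareGNNRec(n_users=100, n_items=500)
users = torch.randint(0, 100, (32,))
pos, neg = torch.randint(0, 500, (32,)), torch.randint(0, 500, (32,))
review_emb = torch.randn(32, 768)
loss = model.hybrid_loss(users, pos, neg, review_emb)
loss.backward()
```

In such a setup, raising `alpha` would shift the model toward the review-derived signal, which is consistent with the paper's claim that text helps most when review data is sparse or unevenly distributed.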
Related papers
- Training Large Recommendation Models via Graph-Language Token Alignment [53.3142545812349]
We propose a novel framework to train Large Recommendation models via Graph-Language Token Alignment.
By aligning item and user nodes from the interaction graph with pretrained LLM tokens, GLTA effectively leverages the reasoning abilities of LLMs.
Furthermore, we introduce Graph-Language Logits Matching (GLLM) to optimize token alignment for end-to-end item prediction.
arXiv Detail & Related papers (2025-02-26T02:19:10Z) - RecLM: Recommendation Instruction Tuning [17.780484832381994]
We propose a model-agnostic recommendation instruction-tuning paradigm that seamlessly integrates large language models with collaborative filtering.
Our proposed RecLM enhances the capture of user preference diversity through a carefully designed reinforcement learning reward function.
arXiv Detail & Related papers (2024-12-26T17:51:54Z) - Enhancing Collaborative Semantics of Language Model-Driven Recommendations via Graph-Aware Learning [10.907949155931474]
Large Language Models (LLMs) are increasingly prominent in the recommendation systems domain.
Gal-Rec enhances the understanding of user-item collaborative semantics by imitating the intent of Graph Neural Networks (GNNs).
Gal-Rec significantly enhances the comprehension of collaborative semantics, and improves recommendation performance.
arXiv Detail & Related papers (2024-06-19T05:50:15Z) - RecExplainer: Aligning Large Language Models for Explaining Recommendation Models [50.74181089742969]
Large language models (LLMs) have demonstrated remarkable intelligence in understanding, reasoning, and instruction following.
This paper presents the initial exploration of using LLMs as surrogate models to explain black-box recommender models.
To facilitate an effective alignment, we introduce three methods: behavior alignment, intention alignment, and hybrid alignment.
arXiv Detail & Related papers (2023-11-18T03:05:43Z) - Representation Learning with Large Language Models for Recommendation [33.040389989173825]
We propose a model-agnostic framework, RLMRec, to enhance recommenders with LLM-empowered representation learning.
RLMRec incorporates auxiliary textual signals, develops a user/item profiling paradigm empowered by LLMs, and aligns the semantic space of LLMs with the representation space of collaborative relational signals.
arXiv Detail & Related papers (2023-10-24T15:51:13Z) - Graph Neural Bandits [49.85090929163639]
We propose a framework named Graph Neural Bandits (GNB) to leverage the collaborative nature among users empowered by graph neural networks (GNNs).
To refine the recommendation strategy, we utilize separate GNN-based models on estimated user graphs for exploitation and adaptive exploration.
arXiv Detail & Related papers (2023-08-21T15:57:57Z) - GUESR: A Global Unsupervised Data-Enhancement with Bucket-Cluster Sampling for Sequential Recommendation [58.6450834556133]
We propose graph contrastive learning to enhance item representations with complex associations from the global view.
We extend the CapsNet module with the elaborately introduced target-attention mechanism to derive users' dynamic preferences.
Our proposed GUESR could not only achieve significant improvements but also could be regarded as a general enhancement strategy.
arXiv Detail & Related papers (2023-03-01T05:46:36Z) - Ordinal Graph Gamma Belief Network for Social Recommender Systems [54.9487910312535]
We develop a hierarchical Bayesian model termed ordinal graph factor analysis (OGFA), which jointly models user-item and user-user interactions.
OGFA not only achieves good recommendation performance, but also extracts interpretable latent factors corresponding to representative user preferences.
We extend OGFA to ordinal graph gamma belief network, which is a multi-stochastic-layer deep probabilistic model.
arXiv Detail & Related papers (2022-09-12T09:19:22Z) - Self-Supervised Hypergraph Transformer for Recommender Systems [25.07482350586435]
We propose the Self-Supervised Hypergraph Transformer (SHT).
A cross-view generative self-supervised learning component is proposed for data augmentation over the user-item interaction graph.
arXiv Detail & Related papers (2022-07-28T18:40:30Z) - Learning Intents behind Interactions with Knowledge Graph for Recommendation [93.08709357435991]
Knowledge graph (KG) plays an increasingly important role in recommender systems.
Existing GNN-based models fail to identify user-item relation at a fine-grained level of intents.
We propose a new model, Knowledge Graph-based Intent Network (KGIN).
arXiv Detail & Related papers (2021-02-14T03:21:36Z)