Towards Comprehensible Recommendation with Large Language Model Fine-tuning
- URL: http://arxiv.org/abs/2508.07595v1
- Date: Mon, 11 Aug 2025 03:55:31 GMT
- Title: Towards Comprehensible Recommendation with Large Language Model Fine-tuning
- Authors: Yunze Luo, Yinjie Jiang, Gaode Chen, Xinghua Zhang, Jun Zhang, Jian Liang, Kaigui Bian,
- Abstract summary: We propose a novel Content Understanding from a Collaborative Perspective framework (CURec) for recommendation systems. CURec generates collaborative-aligned content features for more comprehensive recommendations. Experiments on public benchmarks demonstrate the superiority of CURec over existing methods.
- Score: 41.218487308635126
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recommender systems have become increasingly ubiquitous in daily life. While traditional recommendation approaches primarily rely on ID-based representations or item-side content features, they often fall short in capturing the underlying semantics aligned with user preferences (e.g., recommendation reasons for items), leading to a semantic-collaborative gap. Recently emerged LLM-based feature extraction approaches also face a key challenge: how to ensure that LLMs possess recommendation-aligned reasoning capabilities and can generate accurate, personalized reasons to mitigate the semantic-collaborative gap. To address these issues, we propose a novel Content Understanding from a Collaborative Perspective framework (CURec), which generates collaborative-aligned content features for more comprehensive recommendations. CURec first aligns the LLM with recommendation objectives through pretraining, equipping it with instruction-following and chain-of-thought reasoning capabilities. Next, we design a reward model inspired by traditional recommendation architectures to evaluate the quality of the recommendation reasons generated by the LLM. Finally, using the reward signals, CURec fine-tunes the LLM through reinforcement learning and corrects the generated reasons to ensure their accuracy. The corrected reasons are then integrated into a downstream recommender model to enhance comprehensibility and recommendation performance. Extensive experiments on public benchmarks demonstrate the superiority of CURec over existing methods.
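The generate-score-correct loop described in the abstract can be sketched as follows. This is a toy illustration only: `generate_reason`, `reward_model`, and the keyword-overlap scoring are hypothetical stand-ins for the fine-tuned LLM, the collaborative reward model, and the correction step; none of these names or signatures come from the paper.

```python
# Hypothetical sketch of a CURec-style loop: an LLM proposes a
# recommendation reason, a reward model scores it against user
# behavior, and low-scoring reasons are corrected before being
# passed to the downstream recommender. All components are toys.

def generate_reason(user, item):
    # Stand-in for the fine-tuned LLM's generated reason.
    return f"{user} may like {item} because of matching genre"

def reward_model(reason, interactions):
    # Stand-in for the collaborative reward model: here we just check
    # whether the stated reason overlaps the user's interaction terms.
    return 1.0 if any(term in reason for term in interactions) else 0.0

def corrected_reasons(user_item_pairs, interactions, threshold=0.5):
    out = []
    for user, item in user_item_pairs:
        reason = generate_reason(user, item)
        score = reward_model(reason, interactions[user])
        if score < threshold:
            # Correction step: fall back to a behavior-grounded reason.
            reason = f"{user} interacted with items similar to {item}"
        out.append((user, item, reason, score))
    return out

pairs = [("u1", "Dune"), ("u2", "Dune")]
history = {"u1": ["genre"], "u2": ["sci-fi"]}
for user, item, reason, score in corrected_reasons(pairs, history):
    print(user, item, score, reason)
```

In the actual framework the reward signal also drives RL fine-tuning of the LLM; this sketch only shows the inference-time correction that feeds the downstream recommender.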
Related papers
- Reasoning to Rank: An End-to-End Solution for Exploiting Large Language Models for Recommendation [44.51582748617213]
Reasoning to Rank is an end-to-end training framework that internalizes recommendation utility optimization into the learning of step-by-step reasoning in language models. Our framework performs reasoning at the user-item level and employs reinforcement learning for end-to-end training of the language model.
arXiv Detail & Related papers (2026-02-13T02:22:48Z) - CARE: Contextual Adaptation of Recommenders for LLM-based Conversational Recommendation [66.51329063956538]
We introduce the CARE (Contextual Adaptation of Recommenders) framework. CARE customizes large language models for CRS tasks and synergizes them with external recommendation systems. Our results demonstrate that incorporating external recommender systems with entity-level information significantly enhances the recommendation accuracy of CRS.
arXiv Detail & Related papers (2025-08-19T14:53:30Z) - $\text{R}^2\text{ec}$: Towards Large Recommender Models with Reasoning [50.291998724376654]
We propose $\text{R}^2\text{ec}$, a unified large recommender model with intrinsic reasoning capabilities. RecPO is a corresponding reinforcement learning framework that optimizes both the reasoning and recommendation capabilities of $\text{R}^2\text{ec}$ in a single policy update. Experiments on three datasets with various baselines verify the effectiveness of $\text{R}^2\text{ec}$, showing relative improvements of 68.67% in Hit@5 and 45.21% in NDCG@20.
arXiv Detail & Related papers (2025-05-22T17:55:43Z) - Improving LLM Interpretability and Performance via Guided Embedding Refinement for Sequential Recommendation [18.13513199455587]
We propose guided embedding refinement to enhance the embeddings associated with the base recommendation system. We generate guided embeddings that capture domain-relevant semantic information on interpretable attributes. The refined embeddings achieve approximately 10% to 50% gains in Mean Reciprocal Rank (MRR), Recall, and Normalized Discounted Cumulative Gain (NDCG).
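For readers unfamiliar with the ranking metrics cited above, here are minimal reference implementations of MRR and binary-relevance NDCG@k. These are the standard textbook definitions, not code from the paper.

```python
import math

# MRR: reciprocal rank of the first relevant item in the ranked list.
def mrr(ranked, relevant):
    for i, item in enumerate(ranked, start=1):
        if item in relevant:
            return 1.0 / i
    return 0.0

# Binary-relevance NDCG@k: DCG of the ranking divided by the DCG of
# an ideal ranking that places all relevant items first.
def ndcg_at_k(ranked, relevant, k):
    dcg = sum(1.0 / math.log2(i + 1)
              for i, item in enumerate(ranked[:k], start=1)
              if item in relevant)
    ideal = sum(1.0 / math.log2(i + 1)
                for i in range(1, min(len(relevant), k) + 1))
    return dcg / ideal if ideal > 0 else 0.0

print(mrr(["a", "b", "c"], {"b"}))           # 0.5
print(ndcg_at_k(["a", "b", "c"], {"b"}, 3))  # ~0.631
```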
arXiv Detail & Related papers (2025-04-15T23:03:53Z) - Reason4Rec: Large Language Models for Recommendation with Deliberative User Preference Alignment [69.11529841118671]
We propose a new Deliberative Recommendation task, which incorporates explicit reasoning about user preferences as an additional alignment goal. We then introduce the Reasoning-powered Recommender framework for deliberative user preference alignment.
arXiv Detail & Related papers (2025-02-04T07:17:54Z) - Semantic Convergence: Harmonizing Recommender Systems via Two-Stage Alignment and Behavioral Semantic Tokenization [10.47505806629852]
Large language models (LLMs) are adept at discerning profound user interests from historical behaviors. We propose a novel framework that harmoniously merges traditional recommendation models with the prowess of LLMs. We design a series of specialized supervised learning tasks aimed at aligning collaborative signals with the subtleties of natural language semantics.
arXiv Detail & Related papers (2024-12-18T12:07:58Z) - Direct Preference Optimization for LLM-Enhanced Recommendation Systems [33.54698201942643]
Large Language Models (LLMs) have exhibited remarkable performance across a wide range of domains. We propose DPO4Rec, a framework that integrates DPO into LLM-enhanced recommendation systems. Extensive experiments show that DPO4Rec significantly improves re-ranking performance over strong baselines.
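The DPO objective that DPO4Rec builds on can be illustrated numerically. This sketch uses scalar log-probabilities for a chosen and a rejected item; the real method operates on LLM sequence log-likelihoods over re-ranking preference pairs, and the function below is an illustrative stand-in, not the paper's implementation.

```python
import math

# Minimal numeric sketch of the Direct Preference Optimization loss:
# -log sigmoid(beta * implicit reward margin), where the margin is the
# policy's preference for the chosen over the rejected response,
# measured relative to a frozen reference model.
def dpo_loss(logp_chosen, logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    margin = beta * ((logp_chosen - ref_logp_chosen)
                     - (logp_rejected - ref_logp_rejected))
    # Negative log-sigmoid of the margin; small when the policy already
    # ranks the chosen item well above the rejected one.
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# When policy and reference agree, the margin is 0 and the loss is log 2.
print(dpo_loss(-1.0, -2.0, -1.0, -2.0))  # ≈ 0.693
```

Note how the loss falls as the policy widens its preference for the chosen item relative to the reference, which is what drives the re-ranking improvement.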
arXiv Detail & Related papers (2024-10-08T11:42:37Z) - LLMRec: Benchmarking Large Language Models on Recommendation Task [54.48899723591296]
The application of Large Language Models (LLMs) in the recommendation domain has not been thoroughly investigated.
We benchmark several popular off-the-shelf LLMs on five recommendation tasks, including rating prediction, sequential recommendation, direct recommendation, explanation generation, and review summarization.
The benchmark results indicate that LLMs displayed only moderate proficiency in accuracy-based tasks such as sequential and direct recommendation.
arXiv Detail & Related papers (2023-08-23T16:32:54Z) - A Survey on Large Language Models for Recommendation [77.91673633328148]
Large Language Models (LLMs) have emerged as powerful tools in the field of Natural Language Processing (NLP).
This survey presents a taxonomy that categorizes these models into two major paradigms: Discriminative LLM for Recommendation (DLLM4Rec) and Generative LLM for Recommendation (GLLM4Rec).
arXiv Detail & Related papers (2023-05-31T13:51:26Z)