Enhance Large Language Models as Recommendation Systems with Collaborative Filtering
- URL: http://arxiv.org/abs/2510.15647v1
- Date: Fri, 17 Oct 2025 13:35:14 GMT
- Title: Enhance Large Language Models as Recommendation Systems with Collaborative Filtering
- Authors: Zhisheng Yang, Xiaofei Xu, Ke Deng, Li Li
- Abstract summary: This study proposes critique-based Large Language Models (LLMs) as recommendation systems (Critic-LLM-RS). Critic-LLM-RS implements collaborative filtering for recommendations by learning from the interactions between many users and items. Experiments have verified the effectiveness of Critic-LLM-RS on real datasets.
- Score: 9.697791766151958
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As powerful tools in Natural Language Processing (NLP), Large Language Models (LLMs) have been leveraged for crafting recommendations to achieve precise alignment with user preferences and elevate the quality of the recommendations. The existing approaches implement both non-tuning and tuning strategies. Compared to following the tuning strategy, the approaches following the non-tuning strategy avoid the relatively costly, time-consuming, and expertise-requiring process of further training pre-trained LLMs on task-specific datasets, but they suffer the issue of not having the task-specific business or local enterprise knowledge. To the best of our knowledge, none of the existing approaches following the non-tuning strategy explicitly integrates collaborative filtering, one of the most successful recommendation techniques. This study aims to fill the gap by proposing critique-based LLMs as recommendation systems (Critic-LLM-RS). For our purpose, we train a separate machine-learning model called Critic that implements collaborative filtering for recommendations by learning from the interactions between many users and items. The Critic provides critiques to LLMs to significantly refine the recommendations. Extensive experiments have verified the effectiveness of Critic-LLM-RS on real datasets.
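The abstract's core idea can be sketched in code. The following is a minimal, hypothetical illustration (the function names, the cosine-similarity critic, and the 0.3 threshold are our assumptions, not the authors' implementation): a lightweight "Critic" learned from user-item interactions scores each item an LLM proposes, and low-scoring items are returned with a critique so the LLM can refine its recommendations.

```python
# Hypothetical sketch: an item-based collaborative-filtering "Critic" that
# vets LLM-proposed items. Nothing here is from the paper's actual code.
import numpy as np

# Toy implicit-feedback matrix: rows = users, columns = items (1 = interacted).
interactions = np.array([
    [1, 1, 0, 0, 1],
    [1, 0, 1, 0, 0],
    [0, 1, 1, 1, 0],
    [1, 1, 0, 1, 0],
], dtype=float)

def item_similarity(m: np.ndarray) -> np.ndarray:
    """Cosine similarity between item columns (item-based CF)."""
    norms = np.linalg.norm(m, axis=0, keepdims=True)
    normed = m / np.clip(norms, 1e-9, None)
    return normed.T @ normed

def critic_score(user: int, item: int, m: np.ndarray, sim: np.ndarray) -> float:
    """Similarity-weighted affinity of `user` for `item`, averaged over history."""
    history = m[user]                 # items the user already interacted with
    weights = sim[item] * history
    denom = history.sum()
    return float(weights.sum() / denom) if denom else 0.0

def critique(user: int, llm_candidates: list[int], m: np.ndarray,
             threshold: float = 0.3) -> dict[int, str]:
    """Return a critique string for each LLM-proposed item below threshold."""
    sim = item_similarity(m)
    notes = {}
    for item in llm_candidates:
        s = critic_score(user, item, m, sim)
        if s < threshold:
            notes[item] = f"low CF affinity ({s:.2f}); reconsider this item"
    return notes

# An LLM (not shown) proposes the unseen items 2 and 3 for user 0; the Critic
# flags the weakly supported one so the LLM can revise its recommendation.
feedback = critique(user=0, llm_candidates=[2, 3], m=interactions)
print(feedback)
```

In the paper's framing, the critiques would be fed back to the LLM as natural-language feedback in a refinement loop; this sketch only shows the collaborative-filtering side that a non-tuning pipeline would otherwise lack.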
Related papers
- Reinforced Strategy Optimization for Conversational Recommender Systems via Network-of-Experts [63.412646471177645]
We propose a novel Reinforced Strategy Optimization (RSO) method for Conversational Recommender Systems (CRSs). RSO decomposes the process of generating strategy-driven response decisions into macro-level strategy planning and micro-level strategy adaptation. Experiments show that RSO significantly improves interaction performance compared to state-of-the-art baselines.
arXiv Detail & Related papers (2025-09-30T11:12:01Z)
- CARE: Contextual Adaptation of Recommenders for LLM-based Conversational Recommendation [66.51329063956538]
We introduce the CARE (Contextual Adaptation of Recommenders) framework. CARE customizes large language models for CRS tasks and synergizes them with external recommendation systems. Our results demonstrate that incorporating external recommender systems with entity-level information significantly enhances the recommendation accuracy of CRS.
arXiv Detail & Related papers (2025-08-19T14:53:30Z)
- Large Language Model-Enhanced Reinforcement Learning for Diverse and Novel Recommendations [6.949170757786365]
We propose LAAC (LLM-guided Adversarial Actor Critic), a novel method that leverages large language models to suggest novel items. We show that LAAC outperforms existing baselines in diversity, novelty, and accuracy, while remaining robust on imbalanced data.
arXiv Detail & Related papers (2025-07-28T19:00:40Z)
- DeepRec: Towards a Deep Dive Into the Item Space with Large Language Model Based Recommendation [83.21140655248624]
Large language models (LLMs) have been introduced into recommender systems (RSs). We propose DeepRec, a novel LLM-based RS that enables autonomous multi-turn interactions between LLMs and traditional recommendation models (TRMs) for deep exploration of the item space. Experiments on public datasets demonstrate that DeepRec significantly outperforms both traditional and LLM-based baselines.
arXiv Detail & Related papers (2025-05-22T15:49:38Z)
- Reason4Rec: Large Language Models for Recommendation with Deliberative User Preference Alignment [69.11529841118671]
We propose a new Deliberative Recommendation task, which incorporates explicit reasoning about user preferences as an additional alignment goal. We then introduce the Reasoning-powered Recommender framework for deliberative user preference alignment.
arXiv Detail & Related papers (2025-02-04T07:17:54Z)
- Enhanced Recommendation Combining Collaborative Filtering and Large Language Models [0.0]
Large Language Models (LLMs) provide a new breakthrough for recommendation systems. This paper proposes an enhanced recommendation method that combines collaborative filtering and LLMs. The results show that the hybrid model based on collaborative filtering and LLMs significantly improves precision, recall, and user satisfaction.
arXiv Detail & Related papers (2024-12-25T00:23:53Z)
- Real-Time Personalization for LLM-based Recommendation with Customized In-Context Learning [57.28766250993726]
This work explores adapting to dynamic user interests without any model updates.
Existing Large Language Model (LLM)-based recommenders often lose the in-context learning ability during recommendation tuning.
We propose RecICL, which customizes recommendation-specific in-context learning for real-time recommendations.
arXiv Detail & Related papers (2024-10-30T15:48:36Z)
- STAR: A Simple Training-free Approach for Recommendations using Large Language Models [36.18841135511487]
Current state-of-the-art methods rely on fine-tuning large language models (LLMs) to achieve optimal results. We propose a framework that utilizes LLMs and can be applied to various recommendation tasks without fine-tuning. Our method achieves Hits@10 improvements of +23.8% on Beauty, +37.5% on Toys & Games, and -1.8% on Sports & Outdoors.
arXiv Detail & Related papers (2024-10-21T19:34:40Z)
- LANE: Logic Alignment of Non-tuning Large Language Models and Online Recommendation Systems for Explainable Reason Generation [15.972926854420619]
Leveraging large language models (LLMs) offers new opportunities for comprehensive recommendation logic generation.
Fine-tuning LLMs for recommendation tasks incurs high computational costs and alignment issues with existing systems.
In this work, our proposed strategy, LANE, aligns LLMs with online recommendation systems without additional LLM tuning.
arXiv Detail & Related papers (2024-07-03T06:20:31Z)
- A Survey on Large Language Models for Recommendation [77.91673633328148]
Large Language Models (LLMs) have emerged as powerful tools in the field of Natural Language Processing (NLP).
This survey presents a taxonomy that categorizes these models into two major paradigms: Discriminative LLMs for Recommendation (DLLM4Rec) and Generative LLMs for Recommendation (GLLM4Rec).
arXiv Detail & Related papers (2023-05-31T13:51:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.