Empowering Few-Shot Recommender Systems with Large Language Models --
Enhanced Representations
- URL: http://arxiv.org/abs/2312.13557v1
- Date: Thu, 21 Dec 2023 03:50:09 GMT
- Title: Empowering Few-Shot Recommender Systems with Large Language Models --
Enhanced Representations
- Authors: Zhoumeng Wang
- Abstract summary: Large language models (LLMs) offer novel insights into tackling the few-shot scenarios encountered by explicit feedback-based recommender systems.
Our study can inspire researchers to delve deeper into the multifaceted dimensions of LLMs' involvement in recommender systems.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recommender systems utilizing explicit feedback have witnessed significant
advancements and widespread applications in recent years. However,
generating recommendations in few-shot scenarios remains a persistent
challenge. Recently, large language models (LLMs) have emerged as a promising
solution for addressing natural language processing (NLP) tasks, thereby
offering novel insights into tackling the few-shot scenarios encountered by
explicit feedback-based recommender systems. To bridge recommender systems and
LLMs, we devise a prompting template that generates user and item
representations based on explicit feedback. Subsequently, we integrate these
LLM-processed representations into various recommendation models to evaluate
their significance across diverse recommendation tasks. Our ablation
experiments and case study analysis collectively demonstrate the effectiveness
of LLMs in processing explicit feedback, highlighting that LLMs equipped with
generative and logical reasoning capabilities can effectively serve as a
component of recommender systems to enhance their performance in few-shot
scenarios. Furthermore, the broad adaptability of LLMs augments the
generalization potential of recommender models, despite certain inherent
constraints. We anticipate that our study can inspire researchers to delve
deeper into the multifaceted dimensions of LLMs' involvement in recommender
systems and contribute to the advancement of the explicit feedback-based
recommender systems field.
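The abstract describes a pipeline in which a prompting template turns a user's explicit feedback (ratings and reviews) into an LLM-generated representation that is then fed into downstream recommendation models. The paper's actual template and models are not reproduced here; the sketch below is only a minimal illustration of that idea, assuming a caller-supplied summarization call and text-embedding call. The function names, prompt wording, and cosine-similarity scorer are hypothetical stand-ins, not the authors' method.

```python
import numpy as np


def build_user_prompt(user_reviews):
    """Assemble a prompt from explicit feedback (item, rating, review).
    The wording is illustrative, not the paper's actual template."""
    lines = ["Summarize this user's preferences from their ratings and reviews:"]
    for item, rating, review in user_reviews:
        lines.append(f"- {item} (rated {rating}/5): {review}")
    return "\n".join(lines)


def llm_user_representation(user_reviews, llm_summarize, embed):
    """Ask the LLM for a natural-language preference summary, then encode it
    into a dense vector usable by a downstream recommendation model."""
    summary = llm_summarize(build_user_prompt(user_reviews))
    return embed(summary)


def rank_items(user_vec, item_vecs):
    """Toy scorer: cosine similarity between the LLM-derived user vector and
    LLM-derived item vectors (a stand-in for a real recommendation model)."""
    u = user_vec / (np.linalg.norm(user_vec) + 1e-8)
    m = item_vecs / (np.linalg.norm(item_vecs, axis=1, keepdims=True) + 1e-8)
    return m @ u


if __name__ == "__main__":
    # Stand-ins for real LLM calls so the sketch runs end to end.
    rng = np.random.default_rng(0)
    fake_summarize = lambda prompt: "Prefers character-driven science fiction."
    fake_embed = lambda text: rng.standard_normal(16)

    feedback = [("Solaris", 5, "Loved the slow, thoughtful pacing."),
                ("Generic action flick", 2, "Too loud, no plot.")]
    user_vec = llm_user_representation(feedback, fake_summarize, fake_embed)
    item_vecs = np.stack([fake_embed("Blade Runner"), fake_embed("Monster Truck Rally")])
    print(rank_items(user_vec, item_vecs))  # higher score = stronger recommendation
```

In practice, the LLM-derived vectors would be consumed by a trained recommender (for example, as side features in a matrix-factorization or neural model) rather than scored directly by cosine similarity.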
Related papers
- Generative Large Recommendation Models: Emerging Trends in LLMs for Recommendation [85.52251362906418]
This tutorial explores two primary approaches for integrating large language models (LLMs) into recommender systems.
It provides a comprehensive overview of generative large recommendation models, including their recent advancements, challenges, and potential research directions.
Key topics include data quality, scaling laws, user behavior mining, and efficiency in training and inference.
arXiv Detail & Related papers (2025-02-19T14:48:25Z)
- Reason4Rec: Large Language Models for Recommendation with Deliberative User Preference Alignment [69.11529841118671]
We propose a new Deliberative Recommendation task, which incorporates explicit reasoning about user preferences as an additional alignment goal.
We then introduce the Reasoning-powered Recommender framework for deliberative user preference alignment.
arXiv Detail & Related papers (2025-02-04T07:17:54Z)
- Towards Next-Generation LLM-based Recommender Systems: A Survey and Beyond [41.08716571288641]
We introduce a novel taxonomy that originates from the intrinsic essence of recommendation.
We propose a three-tier structure that more accurately reflects the developmental progression of recommendation systems.
arXiv Detail & Related papers (2024-10-10T08:22:04Z)
- All Roads Lead to Rome: Unveiling the Trajectory of Recommender Systems Across the LLM Era [63.649070507815715]
We aim to integrate recommender systems into a broader picture, and pave the way for more comprehensive solutions for future research.
We identify two evolution paths of modern recommender systems -- via list-wise recommendation and conversational recommendation.
We point out that the information effectiveness of recommendations increases, while the user's acquisition cost decreases.
arXiv Detail & Related papers (2024-07-14T05:02:21Z)
- LANE: Logic Alignment of Non-tuning Large Language Models and Online Recommendation Systems for Explainable Reason Generation [15.972926854420619]
Leveraging large language models (LLMs) offers new opportunities for comprehensive recommendation logic generation.
Fine-tuning LLMs for recommendation tasks incurs high computational costs and alignment issues with existing systems.
In this work, we propose LANE, an effective strategy that aligns LLMs with online recommendation systems without additional LLM tuning.
arXiv Detail & Related papers (2024-07-03T06:20:31Z)
- Exploring the Impact of Large Language Models on Recommender Systems: An Extensive Review [2.780460221321639]
The paper underscores the significance of Large Language Models in reshaping recommender systems.
LLMs exhibit exceptional proficiency in recommending items, showcasing their adeptness in comprehending the intricacies of language.
Despite their transformative potential, challenges persist, including sensitivity to input prompts, occasional misinterpretations, and unforeseen recommendations.
arXiv Detail & Related papers (2024-02-11T00:24:17Z)
- Tapping the Potential of Large Language Models as Recommender Systems: A Comprehensive Framework and Empirical Analysis [91.5632751731927]
Large Language Models such as ChatGPT have showcased remarkable abilities in solving general tasks.
We propose a general framework for utilizing LLMs in recommendation tasks, focusing on the capabilities of LLMs as recommenders.
We analyze the impact of public availability, tuning strategies, model architecture, parameter scale, and context length on recommendation results.
arXiv Detail & Related papers (2024-01-10T08:28:56Z)
- Recommender Systems in the Era of Large Language Models (LLMs) [62.0129013439038]
Large Language Models (LLMs) have revolutionized the fields of Natural Language Processing (NLP) and Artificial Intelligence (AI).
We conduct a comprehensive review of LLM-empowered recommender systems from various aspects including Pre-training, Fine-tuning, and Prompting.
arXiv Detail & Related papers (2023-07-05T06:03:40Z)
- A Survey on Large Language Models for Recommendation [77.91673633328148]
Large Language Models (LLMs) have emerged as powerful tools in the field of Natural Language Processing (NLP).
This survey presents a taxonomy that categorizes these models into two major paradigms: Discriminative LLM for Recommendation (DLLM4Rec) and Generative LLM for Recommendation (GLLM4Rec).
arXiv Detail & Related papers (2023-05-31T13:51:26Z)