Empowering Few-Shot Recommender Systems with Large Language Models --
Enhanced Representations
- URL: http://arxiv.org/abs/2312.13557v1
- Date: Thu, 21 Dec 2023 03:50:09 GMT
- Authors: Zhoumeng Wang
- Abstract summary: Large language models (LLMs) offer novel insights into tackling the few-shot scenarios encountered by explicit feedback-based recommender systems.
Our study can inspire researchers to delve deeper into the multifaceted dimensions of LLMs's involvement in recommender systems.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recommender systems utilizing explicit feedback have witnessed significant
advancements and widespread applications over the past years. However,
generating recommendations in few-shot scenarios remains a persistent
challenge. Recently, large language models (LLMs) have emerged as a promising
solution for addressing natural language processing (NLP) tasks, thereby
offering novel insights into tackling the few-shot scenarios encountered by
explicit feedback-based recommender systems. To bridge recommender systems and
LLMs, we devise a prompting template that generates user and item
representations based on explicit feedback. Subsequently, we integrate these
LLM-processed representations into various recommendation models to evaluate
their significance across diverse recommendation tasks. Our ablation
experiments and case study analysis collectively demonstrate the effectiveness
of LLMs in processing explicit feedback, highlighting that LLMs equipped with
generative and logical reasoning capabilities can effectively serve as a
component of recommender systems to enhance their performance in few-shot
scenarios. Furthermore, the broad adaptability of LLMs augments the
generalization potential of recommender models, despite certain inherent
constraints. We anticipate that our study can inspire researchers to delve
deeper into the multifaceted dimensions of LLMs' involvement in recommender
systems and contribute to the advancement of the explicit feedback-based
recommender systems field.
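The paper's exact prompting template is not reproduced here, but the described pipeline (format a user's explicit feedback as a prompt, have an LLM generate a natural-language user representation, then feed that representation to a downstream recommender) can be sketched as follows. This is a minimal illustrative sketch; the function name, prompt wording, and feedback fields are assumptions, not the authors' implementation.

```python
# Illustrative sketch (not the paper's exact template): turn a user's
# explicit feedback (item, rating, review text) into a prompt asking an
# LLM to produce a natural-language user representation.

def build_user_prompt(feedback):
    """Format (item, rating, review) triples into an LLM prompt."""
    lines = [
        f'- "{item}": rated {rating}/5. Review: {review}'
        for item, rating, review in feedback
    ]
    return (
        "The user gave the following explicit feedback:\n"
        + "\n".join(lines)
        + "\nSummarize this user's preferences in one short paragraph, "
          "suitable for use as a user representation."
    )

feedback = [
    ("The Matrix", 5, "Loved the concept and action."),
    ("Titanic", 2, "Too long and sentimental for me."),
]
prompt = build_user_prompt(feedback)
print(prompt)
```

In the approach the abstract describes, the LLM's generated profile text would then be encoded (e.g., with any off-the-shelf text encoder) and the resulting vector used as a user representation inside conventional recommendation models, which is what the ablation experiments evaluate.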
Related papers
- Towards Scalable Semantic Representation for Recommendation [65.06144407288127]
Mixture-of-Codes is proposed to construct semantic IDs based on large language models (LLMs)
Our method achieves superior discriminability and dimension robustness scalability, leading to the best scale-up performance in recommendations.
arXiv Detail & Related papers (2024-10-12T15:10:56Z)
- Towards Next-Generation LLM-based Recommender Systems: A Survey and Beyond [41.08716571288641]
We introduce a novel taxonomy that originates from the intrinsic essence of recommendation.
We propose a three-tier structure that more accurately reflects the developmental progression of recommendation systems.
arXiv Detail & Related papers (2024-10-10T08:22:04Z)
- Large Language Model Empowered Embedding Generator for Sequential Recommendation [57.49045064294086]
Large Language Model (LLM) has the potential to understand the semantic connections between items, regardless of their popularity.
We present LLMEmb, an innovative technique that harnesses LLMs to create item embeddings that bolster the performance of Sequential Recommender Systems.
arXiv Detail & Related papers (2024-09-30T03:59:06Z)
- All Roads Lead to Rome: Unveiling the Trajectory of Recommender Systems Across the LLM Era [63.649070507815715]
We aim to integrate recommender systems into a broader picture, and pave the way for more comprehensive solutions for future research.
We identify two evolution paths of modern recommender systems -- via list-wise recommendation and conversational recommendation.
We point out that these paths increase the information effectiveness of recommendations while decreasing users' information acquisition cost.
arXiv Detail & Related papers (2024-07-14T05:02:21Z)
- LANE: Logic Alignment of Non-tuning Large Language Models and Online Recommendation Systems for Explainable Reason Generation [15.972926854420619]
Leveraging large language models (LLMs) offers new opportunities for comprehensive recommendation logic generation.
Fine-tuning LLM models for recommendation tasks incurs high computational costs and alignment issues with existing systems.
In this work, the proposed strategy LANE aligns LLMs with online recommendation systems without additional LLM tuning.
arXiv Detail & Related papers (2024-07-03T06:20:31Z)
- Exploring the Impact of Large Language Models on Recommender Systems: An Extensive Review [2.780460221321639]
The paper underscores the significance of Large Language Models in reshaping recommender systems.
LLMs exhibit exceptional proficiency in recommending items, showcasing their adeptness in comprehending intricacies of language.
Despite their transformative potential, challenges persist, including sensitivity to input prompts, occasional misinterpretations, and unforeseen recommendations.
arXiv Detail & Related papers (2024-02-11T00:24:17Z)
- LLMRec: Benchmarking Large Language Models on Recommendation Task [54.48899723591296]
The application of Large Language Models (LLMs) in the recommendation domain has not been thoroughly investigated.
We benchmark several popular off-the-shelf LLMs on five recommendation tasks, including rating prediction, sequential recommendation, direct recommendation, explanation generation, and review summarization.
The benchmark results indicate that LLMs displayed only moderate proficiency in accuracy-based tasks such as sequential and direct recommendation.
arXiv Detail & Related papers (2023-08-23T16:32:54Z)
- Recommender Systems in the Era of Large Language Models (LLMs) [62.0129013439038]
Large Language Models (LLMs) have revolutionized the fields of Natural Language Processing (NLP) and Artificial Intelligence (AI).
We conduct a comprehensive review of LLM-empowered recommender systems from various aspects including Pre-training, Fine-tuning, and Prompting.
arXiv Detail & Related papers (2023-07-05T06:03:40Z)
- A Survey on Large Language Models for Recommendation [77.91673633328148]
Large Language Models (LLMs) have emerged as powerful tools in the field of Natural Language Processing (NLP)
This survey presents a taxonomy that categorizes these models into two major paradigms: Discriminative LLM for Recommendation (DLLM4Rec) and Generative LLM for Recommendation (GLLM4Rec).
arXiv Detail & Related papers (2023-05-31T13:51:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.