Large Language Models are Competitive Near Cold-start Recommenders for Language- and Item-based Preferences
- URL: http://arxiv.org/abs/2307.14225v1
- Date: Wed, 26 Jul 2023 14:47:15 GMT
- Title: Large Language Models are Competitive Near Cold-start Recommenders for Language- and Item-based Preferences
- Authors: Scott Sanner and Krisztian Balog and Filip Radlinski and Ben Wedin and Lucas Dixon
- Abstract summary: Dialog interfaces that allow users to express language-based preferences offer a fundamentally different modality for preference input.
Inspired by recent successes of prompting paradigms for large language models (LLMs), we study their use for making recommendations.
- Score: 33.81337282939615
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Traditional recommender systems leverage users' item preference history to
recommend novel content that users may like. However, modern dialog interfaces
that allow users to express language-based preferences offer a fundamentally
different modality for preference input. Inspired by recent successes of
prompting paradigms for large language models (LLMs), we study their use for
making recommendations from both item-based and language-based preferences in
comparison to state-of-the-art item-based collaborative filtering (CF) methods.
To support this investigation, we collect a new dataset consisting of both
item-based and language-based preferences elicited from users along with their
ratings on a variety of (biased) recommended items and (unbiased) random items.
Among numerous experimental results, we find that LLMs provide competitive
recommendation performance for pure language-based preferences (no item
preferences) in the near cold-start case in comparison to item-based CF
methods, despite having no supervised training for this specific task
(zero-shot) or only a few labels (few-shot). This is particularly promising as
language-based preference representations are more explainable and scrutable
than item-based or vector-based representations.
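To make the prompting setup concrete, here is a minimal sketch of zero-shot recommendation from a purely language-based preference. The prompt wording, item names, and the completion function are illustrative assumptions, not the paper's actual templates:
```python
from typing import Callable, Sequence

def build_recommendation_prompt(
    language_preference: str,
    candidate_items: Sequence[str],
) -> str:
    """Compose a zero-shot prompt that asks an LLM to rank candidates
    against a purely language-based preference (no item history)."""
    items = "\n".join(f"{i + 1}. {name}" for i, name in enumerate(candidate_items))
    return (
        "A user describes their taste as follows:\n"
        f'"{language_preference}"\n\n'
        "Rank the following items from most to least likely to be enjoyed, "
        "answering with the numbers only:\n"
        f"{items}\n"
    )

def recommend(llm: Callable[[str], str], preference: str, candidates: Sequence[str]) -> str:
    # `llm` is any text-completion function (API call or local model).
    return llm(build_recommendation_prompt(preference, candidates))

# Example prompt for a hypothetical preference and candidate list:
print(build_recommendation_prompt(
    "I love slow-burn sci-fi with strong world-building, but not horror.",
    ["Dune", "Alien", "The Left Hand of Darkness"],
))
```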
Related papers
- Preference Discerning with LLM-Enhanced Generative Retrieval [28.309905847867178]
We propose a new paradigm, which we term preference discerning.
In preference discerning, we explicitly condition a generative sequential recommendation system on user preferences within its context.
We generate user preferences using Large Language Models (LLMs) based on user reviews and item-specific data.
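As a rough sketch of this conditioning step, assuming the preference summary has already been generated by an LLM from reviews (the tags and item IDs below are hypothetical, not the paper's format):
```python
def build_conditioned_input(preference_text: str, item_history: list[str]) -> str:
    """Prepend an LLM-generated preference summary to the interaction
    sequence so the generative recommender is explicitly conditioned on it."""
    history = " ".join(f"<item_{item_id}>" for item_id in item_history)
    return f"<pref> {preference_text} </pref> {history} <next>"

# A hypothetical preference summary distilled from a user's reviews:
pref = "Prefers wireless peripherals; dislikes bulky packaging."
print(build_conditioned_input(pref, ["B00X4WHP5E", "B01KBIMW8W"]))
```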
arXiv Detail & Related papers (2024-12-11T18:26:55Z)
- ULMRec: User-centric Large Language Model for Sequential Recommendation [16.494996929730927]
We propose ULMRec, a framework that integrates user personalized preferences into Large Language Models.
Extensive experiments on two public datasets demonstrate that ULMRec significantly outperforms existing methods.
arXiv Detail & Related papers (2024-12-07T05:37:00Z)
- Align-SLM: Textless Spoken Language Models with Reinforcement Learning from AI Feedback [50.84142264245052]
This work introduces the Align-SLM framework to enhance the semantic understanding of textless Spoken Language Models (SLMs).
Our approach generates multiple speech continuations from a given prompt and uses semantic metrics to create preference data for Direct Preference Optimization (DPO).
We evaluate the framework using ZeroSpeech 2021 benchmarks for lexical and syntactic modeling, the spoken version of the StoryCloze dataset for semantic coherence, and other speech generation metrics, including the GPT-4o score and human evaluation.
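A schematic of this preference-data construction, with generation and scoring left as abstract callables rather than Align-SLM's actual speech model and metric:
```python
from typing import Callable

def make_dpo_pair(
    prompt: str,
    generate: Callable[[str], str],
    score: Callable[[str, str], float],
    num_samples: int = 4,
) -> dict:
    """Sample several continuations for one prompt and keep the
    semantically best and worst as a (chosen, rejected) pair for DPO."""
    continuations = [generate(prompt) for _ in range(num_samples)]
    ranked = sorted(continuations, key=lambda c: score(prompt, c))
    return {"prompt": prompt, "chosen": ranked[-1], "rejected": ranked[0]}
```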
arXiv Detail & Related papers (2024-11-04T06:07:53Z)
- ComPO: Community Preferences for Language Model Personalization [122.54846260663922]
ComPO is a method to personalize preference optimization in language models.
We collect and release ComPRed, a question answering dataset with community-level preferences from Reddit.
arXiv Detail & Related papers (2024-10-21T14:02:40Z)
- Language Representations Can be What Recommenders Need: Findings and Potentials [57.90679739598295]
We show that item representations, when linearly mapped from advanced LM representations, yield superior recommendation performance.
This outcome suggests the possible homomorphism between the advanced language representation space and an effective item representation space for recommendation.
Our findings highlight the connection between language modeling and behavior modeling, which can inspire both natural language processing and recommender system communities.
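A minimal sketch of this linear-mapping idea, with random stand-ins for the frozen LM embeddings and user vectors (the dimensions and the CF training of W are illustrative assumptions):
```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins: frozen LM embeddings for n items (d_lm-dim) and learned
# user vectors (d_rec-dim); in practice the LM embeddings come from an
# advanced language model run over item text.
n_items, d_lm, d_rec = 1000, 768, 64
lm_item_embeddings = rng.normal(size=(n_items, d_lm))
user_vectors = rng.normal(size=(500, d_rec))

# The single learned component: a linear map from LM space to item space
# (here random; in practice trained with a collaborative-filtering loss).
W = rng.normal(scale=0.02, size=(d_lm, d_rec))

item_vectors = lm_item_embeddings @ W        # (n_items, d_rec)
scores = user_vectors @ item_vectors.T       # (n_users, n_items)
top10 = np.argsort(-scores, axis=1)[:, :10]  # top-10 items per user
```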
arXiv Detail & Related papers (2024-07-07T17:05:24Z)
- Comparing Bad Apples to Good Oranges: Aligning Large Language Models via Joint Preference Optimization [105.3612692153615]
We propose a new axis based on eliciting preferences jointly over instruction-response pairs.
Joint preferences over instruction and response pairs can significantly enhance the alignment of large language models.
arXiv Detail & Related papers (2024-03-31T02:05:40Z)
- Parameter-Efficient Conversational Recommender System as a Language Processing Task [52.47087212618396]
Conversational recommender systems (CRS) aim to recommend relevant items to users by eliciting user preference through natural language conversation.
Prior work often utilizes external knowledge graphs for items' semantic information, a language model for dialogue generation, and a recommendation module for ranking relevant items.
In this paper, we represent items in natural language and formulate CRS as a natural language processing task.
arXiv Detail & Related papers (2024-01-25T14:07:34Z)
- Recommendation as Instruction Following: A Large Language Model Empowered Recommendation Approach [83.62750225073341]
We consider recommendation as instruction following by large language models (LLMs).
We first design a general instruction format for describing the preference, intention, task form and context of a user in natural language.
Then we manually design 39 instruction templates and automatically generate a large amount of user-personalized instruction data.
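One hypothetical template in the spirit of this design (the actual 39 templates are given in the paper); it fills a user's preference, intention, and task form into a single natural-language instruction:
```python
TEMPLATE = (
    "The user has enjoyed: {liked}. "
    "They are currently looking for: {intention}. "
    "Task: {task_form} from the candidates: {candidates}."
)

def fill_instruction(liked, intention, task_form, candidates):
    """Render one user-personalized instruction from a template."""
    return TEMPLATE.format(
        liked=", ".join(liked),
        intention=intention,
        task_form=task_form,
        candidates=", ".join(candidates),
    )

print(fill_instruction(
    liked=["The Martian", "Project Hail Mary"],
    intention="a hard sci-fi novel with humor",
    task_form="select the single best item",
    candidates=["Dark Matter", "Redshirts", "Blindsight"],
))
```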
arXiv Detail & Related papers (2023-05-11T17:39:07Z)
- Beyond Single Items: Exploring User Preferences in Item Sets with the Conversational Playlist Curation Dataset [20.42354123651454]
We introduce the task of conversational item set curation.
We present a novel data collection methodology that efficiently collects realistic preferences about item sets in a conversational setting.
We show that it leads raters to express preferences that would not otherwise be expressed.
arXiv Detail & Related papers (2023-03-13T00:39:04Z)
- Discovering Personalized Semantics for Soft Attributes in Recommender Systems using Concept Activation Vectors [34.56323846959459]
Interactive recommender systems allow users to express intent, preferences, constraints, and contexts in a richer fashion.
One challenge is inferring a user's semantic intent from the open-ended terms or attributes often used to describe a desired item.
We develop a framework to learn a representation that captures the semantics of such attributes and connects them to user preferences and behaviors in recommender systems.
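A compact sketch of the concept-activation-vector approach on synthetic data: fit a linear classifier separating items that match a soft attribute from those that do not, then score items by projection onto the classifier's normal vector. The attribute, labels, and embeddings here are illustrative assumptions:
```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Item embeddings from a recommender, with binary labels for a soft
# attribute (e.g., whether users described the item as "cozy").
X = rng.normal(size=(400, 32))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=400)) > 0

clf = LogisticRegression().fit(X, y)
cav = clf.coef_[0] / np.linalg.norm(clf.coef_[0])  # concept activation vector

# Score any item by projecting its embedding onto the CAV:
attribute_scores = X @ cav
```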
arXiv Detail & Related papers (2022-02-06T18:45:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.