Large Language Models are Competitive Near Cold-start Recommenders for
Language- and Item-based Preferences
- URL: http://arxiv.org/abs/2307.14225v1
- Date: Wed, 26 Jul 2023 14:47:15 GMT
- Title: Large Language Models are Competitive Near Cold-start Recommenders for
Language- and Item-based Preferences
- Authors: Scott Sanner and Krisztian Balog and Filip Radlinski and Ben Wedin and
Lucas Dixon
- Abstract summary: Modern dialog interfaces that allow users to express language-based preferences offer a fundamentally different modality for preference input.
Inspired by recent successes of prompting paradigms for large language models (LLMs), we study their use for making recommendations.
- Score: 33.81337282939615
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Traditional recommender systems leverage users' item preference history to
recommend novel content that users may like. However, modern dialog interfaces
that allow users to express language-based preferences offer a fundamentally
different modality for preference input. Inspired by recent successes of
prompting paradigms for large language models (LLMs), we study their use for
making recommendations from both item-based and language-based preferences in
comparison to state-of-the-art item-based collaborative filtering (CF) methods.
To support this investigation, we collect a new dataset consisting of both
item-based and language-based preferences elicited from users along with their
ratings on a variety of (biased) recommended items and (unbiased) random items.
Among numerous experimental results, we find that LLMs provide competitive
recommendation performance for pure language-based preferences (no item
preferences) in the near cold-start case in comparison to item-based CF
methods, despite having no supervised training for this specific task
(zero-shot) or only a few labels (few-shot). This is particularly promising as
language-based preference representations are more explainable and scrutable
than item-based or vector-based representations.
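The abstract's core idea, zero-shot LLM recommendation from a purely language-based preference, can be sketched as a prompt-then-parse loop. This is an illustrative reconstruction, not the paper's exact prompt: `call_llm` is a stub standing in for a real LLM API so the example runs standalone.

```python
# Hypothetical sketch of zero-shot, language-based preference recommendation
# via LLM prompting. `call_llm` is a stub for a real LLM API.

def build_prompt(preference: str, candidates: list[str]) -> str:
    """Format a natural-language preference and candidate items into a
    single zero-shot ranking prompt (no task-specific training)."""
    items = "\n".join(f"{i + 1}. {title}" for i, title in enumerate(candidates))
    return (
        "A user describes their taste as follows:\n"
        f'"{preference}"\n\n'
        "Rank the following items from most to least likely to be enjoyed:\n"
        f"{items}\n"
        "Answer with the item numbers in order, comma-separated."
    )

def call_llm(prompt: str) -> str:
    # Stub: a real system would send `prompt` to an LLM and return its reply.
    return "2, 1, 3"

def recommend(preference: str, candidates: list[str]) -> list[str]:
    """Build the prompt, query the (stubbed) LLM, and parse the ranking."""
    reply = call_llm(build_prompt(preference, candidates))
    order = [int(tok) - 1 for tok in reply.split(",")]
    return [candidates[i] for i in order]

ranking = recommend(
    "I love slow-burn sci-fi with strong world-building",
    ["Top Gun: Maverick", "Dune", "Paddington 2"],
)
print(ranking)  # stubbed reply yields ['Dune', 'Top Gun: Maverick', 'Paddington 2']
```

The few-shot variant would simply prepend a handful of labeled preference-to-ranking examples to the same prompt.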
Related papers
- Language Models Encode Collaborative Signals in Recommendation [57.90679739598295]
We show that item representations, when linearly mapped from advanced LM representations, yield superior recommendation performance.
Motivated by these findings, we propose a simple yet effective collaborative filtering (CF) model named AlphaRec.
AlphaRec is comprised of three main components: a multilayer perceptron (MLP), graph convolution, and contrastive learning (CL) loss function.
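The linear-mapping claim above can be illustrated with a toy least-squares fit: learn a matrix that projects frozen LM item embeddings into a collaborative-filtering embedding space. Dimensions and data are synthetic assumptions, not AlphaRec's actual setup.

```python
# Toy sketch (assumed dimensions/data): linearly map LM item embeddings
# into a CF embedding space, then score items by dot product.
import numpy as np

rng = np.random.default_rng(0)
n_items, lm_dim, cf_dim = 100, 32, 8

lm_emb = rng.normal(size=(n_items, lm_dim))  # frozen LM item embeddings
true_w = rng.normal(size=(lm_dim, cf_dim))
cf_emb = lm_emb @ true_w                     # target CF embeddings (toy)

# Least-squares fit of the linear map: minimize ||lm_emb @ W - cf_emb||_F
w, *_ = np.linalg.lstsq(lm_emb, cf_emb, rcond=None)
mapped = lm_emb @ w

# A user's recommendation scores are dot products in the mapped CF space
user = rng.normal(size=cf_dim)
scores = mapped @ user
print(np.allclose(mapped, cf_emb))  # True: the toy targets are exactly linear
```

In the real model the CF-side targets come from interaction data rather than a known linear transform, and the MLP, graph convolution, and contrastive loss refine the mapped representations further.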
arXiv Detail & Related papers (2024-07-07T17:05:24Z)
- Reindex-Then-Adapt: Improving Large Language Models for Conversational Recommendation
Large language models (LLMs) are revolutionizing conversational recommender systems.
We propose a Reindex-Then-Adapt (RTA) framework, which converts multi-token item titles into single tokens within LLMs.
Our framework demonstrates improved accuracy metrics across three different conversational recommendation datasets.
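The reindexing step can be pictured with a small embedding-table sketch. The details here (mean-of-subwords initialization, toy vocabulary) are illustrative assumptions about the general idea, not the paper's exact procedure.

```python
# Illustrative sketch (assumed details) of reindexing: collapse a
# multi-token item title into one new token whose embedding is
# initialized from the mean of the title's original token embeddings.
import numpy as np

rng = np.random.default_rng(1)
vocab = {"the": 0, "dark": 1, "knight": 2}
emb = rng.normal(size=(len(vocab), 4))  # toy token-embedding table

def reindex_title(title, vocab, emb):
    """Add one token per item title, so the LLM can score the whole
    item in a single generation step instead of token by token."""
    ids = [vocab[w] for w in title.lower().split()]
    new_vec = emb[ids].mean(axis=0)        # mean-pool the subword vectors
    vocab[f"<item:{title}>"] = len(vocab)  # register the new single token
    return np.vstack([emb, new_vec]), len(vocab) - 1

emb, item_id = reindex_title("The Dark Knight", vocab, emb)
print(item_id, emb.shape)  # 3 (4, 4)
```

The "adapt" phase would then fine-tune these new item-token embeddings on conversational recommendation data.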
arXiv Detail & Related papers (2024-05-20T15:37:55Z)
- Bayesian Optimization with LLM-Based Acquisition Functions for Natural Language Preference Elicitation [18.550311424902358]
Large language models (LLMs) enable fully natural language (NL) preference elicitation (PE) dialogues.
We propose a novel NL-PE algorithm, PEBOL, which uses Natural Language Inference (NLI) between user preference utterances and NL item descriptions.
We numerically evaluate our methods in controlled experiments, finding that PEBOL achieves up to 131% improvement in MAP@10 after 10 turns of cold start NL-PE dialogue.
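A greatly simplified sketch of the idea: score items by NLI between the user's preference utterance and each item's description, and fold those scores into per-item Beta beliefs. Both the word-overlap "NLI" scorer and the belief update are stand-ins so the example runs standalone; PEBOL's actual model and acquisition functions differ.

```python
# Simplified, hypothetical sketch of NLI-driven preference elicitation.
# The NLI model is stubbed with word overlap; a real system would use
# an actual NLI model over utterances and item descriptions.

def nli_entailment(premise: str, hypothesis: str) -> float:
    """Stub NLI score in [0, 1]: fraction of hypothesis words that
    appear in the premise (a crude proxy for entailment)."""
    p, h = set(premise.lower().split()), set(hypothesis.lower().split())
    return len(p & h) / len(h) if h else 0.0

def update_beliefs(beliefs, utterance, descriptions):
    """One dialogue turn: treat each NLI score as soft Bernoulli
    evidence and update per-item Beta(alpha, beta) beliefs."""
    for item, desc in descriptions.items():
        s = nli_entailment(utterance, desc)
        a, b = beliefs[item]
        beliefs[item] = (a + s, b + (1.0 - s))
    return beliefs

descriptions = {
    "Dune": "epic science fiction desert world building",
    "Paddington 2": "family comedy about a polite bear",
}
beliefs = {item: (1.0, 1.0) for item in descriptions}  # uniform priors
beliefs = update_beliefs(
    beliefs, "I want epic science fiction world building", descriptions
)

# Rank items by posterior mean alpha / (alpha + beta)
ranked = sorted(beliefs, key=lambda i: beliefs[i][0] / sum(beliefs[i]), reverse=True)
print(ranked[0])  # "Dune" scores higher under the stub NLI
```

Over multiple turns, such beliefs could drive which clarifying question to ask next, which is where the Bayesian-optimization acquisition functions come in.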
arXiv Detail & Related papers (2024-05-02T03:35:21Z)
- Comparing Bad Apples to Good Oranges: Aligning Large Language Models via Joint Preference Optimization [105.3612692153615]
A common technique for aligning large language models (LLMs) relies on acquiring human preferences.
We propose a new axis that is based on eliciting preferences jointly over the instruction-response pairs.
We find that joint preferences over instruction and response pairs can significantly enhance the alignment of LLMs.
arXiv Detail & Related papers (2024-03-31T02:05:40Z)
- Parameter-Efficient Conversational Recommender System as a Language Processing Task [52.47087212618396]
Conversational recommender systems (CRS) aim to recommend relevant items to users by eliciting user preference through natural language conversation.
Prior work often utilizes external knowledge graphs for items' semantic information, a language model for dialogue generation, and a recommendation module for ranking relevant items.
In this paper, we represent items in natural language and formulate CRS as a natural language processing task.
arXiv Detail & Related papers (2024-01-25T14:07:34Z)
- Recommendation as Instruction Following: A Large Language Model Empowered Recommendation Approach [83.62750225073341]
We consider recommendation as instruction following by large language models (LLMs).
We first design a general instruction format for describing the preference, intention, task form and context of a user in natural language.
Then we manually design 39 instruction templates and automatically generate a large amount of user-personalized instruction data.
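A miniature of the template idea above: a general instruction format covering preference history, intention, and task form, filled with per-user fields to generate instruction data. The field names and template text are illustrative, not the paper's actual format.

```python
# Hypothetical miniature of an instruction template for recommendation.
# Field names and wording are illustrative assumptions.
TEMPLATE = (
    "The user has enjoyed: {history}. "
    "The user now wants: {intention}. "
    "Task: {task}."
)

def make_instruction(history, intention, task):
    """Fill the general template with one user's personalized fields."""
    return TEMPLATE.format(
        history=", ".join(history), intention=intention, task=task
    )

example = make_instruction(
    ["The Matrix", "Blade Runner"],
    "a mind-bending thriller",
    "recommend one movie",
)
print(example)
```

Iterating such filling over many users and many templates is what scales a handful of hand-written templates into a large instruction-tuning corpus.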
arXiv Detail & Related papers (2023-05-11T17:39:07Z)
- Beyond Single Items: Exploring User Preferences in Item Sets with the Conversational Playlist Curation Dataset [20.42354123651454]
We call this task conversational item set curation.
We present a novel data collection methodology that efficiently collects realistic preferences about item sets in a conversational setting.
We show that it leads raters to express preferences that would not be otherwise expressed.
arXiv Detail & Related papers (2023-03-13T00:39:04Z)
- Talk the Walk: Synthetic Data Generation for Conversational Music Recommendation [62.019437228000776]
We present TalkWalk, which generates realistic high-quality conversational data by leveraging encoded expertise in widely available item collections.
We generate over one million diverse conversations in a human-collected dataset.
arXiv Detail & Related papers (2023-01-27T01:54:16Z)
- COLA: Improving Conversational Recommender Systems by Collaborative Augmentation [9.99763097964222]
We propose a collaborative augmentation (COLA) method to improve both item representation learning and user preference modeling.
We construct an interactive user-item graph from all conversations, which augments item representations with user-aware information.
To improve user preference modeling, we retrieve similar conversations from the training corpus, where the involved items and attributes that reflect the user's potential interests are used to augment the user representation.
arXiv Detail & Related papers (2022-12-15T12:37:28Z)
- Discovering Personalized Semantics for Soft Attributes in Recommender Systems using Concept Activation Vectors [34.56323846959459]
Interactive recommender systems allow users to express intent, preferences, constraints, and contexts in a richer fashion.
One challenge is inferring a user's semantic intent from the open-ended terms or attributes often used to describe a desired item.
We develop a framework to learn a representation that captures the semantics of such attributes and connects them to user preferences and behaviors in recommender systems.
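A toy sketch of the concept-activation-vector-style idea (an assumed simplification): derive a direction for a soft attribute as the difference of class means between items users described with the attribute and items they did not, then score items by projection onto that direction.

```python
# Toy sketch (assumed data/dimensions) of a concept direction for a
# soft attribute such as "cozy", using a difference-of-means stand-in
# for a trained linear probe.
import numpy as np

rng = np.random.default_rng(2)
item_emb = rng.normal(size=(8, 6))                       # toy item embeddings
has_attr = np.array([1, 1, 1, 0, 0, 0, 0, 0], dtype=bool)  # attribute labels

# Concept direction: difference of class means, normalized
cav = item_emb[has_attr].mean(axis=0) - item_emb[~has_attr].mean(axis=0)
cav /= np.linalg.norm(cav)

# Each item's attribute score is its projection onto the concept direction
scores = item_emb @ cav
print(scores.shape)  # (8,)
```

Personalization would then mean learning a per-user version of such directions, since "cozy" can denote different semantics for different users.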
arXiv Detail & Related papers (2022-02-06T18:45:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.