ChatGPT for Conversational Recommendation: Refining Recommendations by
Reprompting with Feedback
- URL: http://arxiv.org/abs/2401.03605v1
- Date: Sun, 7 Jan 2024 23:17:42 GMT
- Title: ChatGPT for Conversational Recommendation: Refining Recommendations by
Reprompting with Feedback
- Authors: Kyle Dylan Spurlock, Cagla Acun, Esin Saka and Olfa Nasraoui
- Abstract summary: Large Language Models (LLMs) like ChatGPT have gained popularity due to their ease of use and their ability to adapt dynamically to various tasks while responding to feedback.
We build a rigorous pipeline around ChatGPT to simulate how a user might realistically probe the model for recommendations.
We explore the effect of popularity bias in ChatGPT's recommendations, and compare its performance to baseline models.
- Score: 1.3654846342364308
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Recommendation algorithms have been pivotal in handling the overwhelming
volume of online content. However, these algorithms seldom consider direct user
input, resulting in only superficial interaction between the user and the system. Efforts have been
made to include the user directly in the recommendation process through
conversation, but these systems too have had limited interactivity. Recently,
Large Language Models (LLMs) like ChatGPT have gained popularity due to their
ease of use and their ability to adapt dynamically to various tasks while
responding to feedback. In this paper, we investigate the effectiveness of
ChatGPT as a top-n conversational recommendation system. We build a rigorous
pipeline around ChatGPT to simulate how a user might realistically probe the
model for recommendations: by first instructing and then reprompting with
feedback to refine a set of recommendations. We further explore the effect of
popularity bias in ChatGPT's recommendations, and compare its performance to
baseline models. We find that reprompting ChatGPT with feedback is an effective
strategy to improve recommendation relevancy, and that popularity bias can be
mitigated through prompt engineering.
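The instruct-then-reprompt pipeline described above can be sketched as a short loop. This is an illustrative simplification, not the paper's actual code: `ask_llm` stands in for a real LLM call, and here it replays canned responses so the example runs offline.

```python
# Minimal sketch of the instruct-then-reprompt loop from the abstract.
# All function names and prompts are illustrative assumptions.

def make_stub_llm(responses):
    """Return a fake LLM that replays `responses` in order."""
    it = iter(responses)
    return lambda prompt: next(it)

def parse_recs(text):
    """One recommendation per line of the model's reply."""
    return [line.strip() for line in text.splitlines() if line.strip()]

def recommend_with_reprompting(ask_llm, liked, n=3, rounds=2):
    """Instruct first, then reprompt with feedback to refine the list."""
    recs = parse_recs(ask_llm(
        f"I enjoyed: {', '.join(liked)}. Recommend {n} items, one per line."))
    for _ in range(rounds - 1):
        # Simulated user feedback: reject the first item, ask for a refresh.
        recs = parse_recs(ask_llm(
            f"I already know {recs[0]}. Give {n} different items, one per line."))
    return recs

llm = make_stub_llm(["Movie A\nMovie B\nMovie C",
                     "Movie B\nMovie C\nMovie D"])
print(recommend_with_reprompting(llm, ["Movie X"]))  # ['Movie B', 'Movie C', 'Movie D']
```

In the paper's setting the stub would be replaced by a call to ChatGPT, and the feedback prompt would come from a simulated user probing the model.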
Related papers
- Prompt Optimization with Human Feedback [69.95991134172282]
We study the problem of prompt optimization with human feedback (POHF).
We introduce an algorithm named automated POHF (APOHF).
The results demonstrate that our APOHF can efficiently find a good prompt using a small number of preference feedback instances.
arXiv Detail & Related papers (2024-05-27T16:49:29Z)
- "Close...but not as good as an educator." -- Using ChatGPT to provide formative feedback in large-class collaborative learning [0.0]
We employed ChatGPT to provide personalised formative feedback in a one-hour Zoom break-out room activity.
Half of the 44 survey respondents had never used ChatGPT before.
Only three groups used the feedback loop to improve their evaluation plans.
arXiv Detail & Related papers (2023-11-02T23:00:38Z)
- Primacy Effect of ChatGPT [69.49920102917598]
We study the primacy effect of ChatGPT: the tendency of selecting the labels at earlier positions as the answer.
We hope that our experiments and analyses provide additional insights into building more reliable ChatGPT-based solutions.
arXiv Detail & Related papers (2023-10-20T00:37:28Z)
- Evaluating ChatGPT as a Recommender System: A Rigorous Approach [12.458752059072706]
We propose a robust evaluation pipeline to assess ChatGPT's ability as an RS and post-process ChatGPT recommendations.
We analyze the model's functionality in three settings: the Top-N Recommendation, the cold-start recommendation, and the re-ranking of a list of recommendations.
arXiv Detail & Related papers (2023-09-07T10:13:09Z)
- Rethinking the Evaluation for Conversational Recommendation in the Era of Large Language Models [115.7508325840751]
The recent success of large language models (LLMs) has shown great potential to develop more powerful conversational recommender systems (CRSs).
In this paper, we embark on an investigation into the utilization of ChatGPT for conversational recommendation, revealing the inadequacy of the existing evaluation protocol.
We propose an interactive Evaluation approach based on LLMs named iEvaLM that harnesses LLM-based user simulators.
arXiv Detail & Related papers (2023-05-22T15:12:43Z)
- Chat-REC: Towards Interactive and Explainable LLMs-Augmented Recommender System [11.404192885921498]
Chat-Rec is a new paradigm for building conversational recommender systems.
Chat-Rec is effective in learning user preferences and establishing connections between users and products.
In experiments, Chat-Rec effectively improves the results of top-k recommendations and performs better on the zero-shot rating prediction task.
arXiv Detail & Related papers (2023-03-25T17:37:43Z)
- Aligning Recommendation and Conversation via Dual Imitation [56.236932446280825]
We propose DICR (Dual Imitation for Conversational Recommendation), which designs a dual imitation to explicitly align the recommendation paths and user interest shift paths.
By exchanging alignment signals, DICR achieves bidirectional promotion between recommendation and conversation modules.
Experiments demonstrate that DICR outperforms the state-of-the-art models on recommendation and conversation performance with automatic, human, and novel explainability metrics.
arXiv Detail & Related papers (2022-11-05T08:13:46Z)
- Comparison-based Conversational Recommender System with Relative Bandit Feedback [15.680698037463488]
We propose a novel comparison-based conversational recommender system.
We propose a new bandit algorithm, which we call RelativeConUCB.
The experiments on both synthetic and real-world datasets validate the advantage of our proposed method.
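The idea of relative (comparison-based) feedback can be illustrated with a generic pairwise-preference loop. This is a deliberately simplified tally over all pairs, not the paper's RelativeConUCB bandit algorithm; the `prefer` callback and item names are assumptions for the example.

```python
from itertools import combinations

def comparison_recommender(items, prefer):
    """Tally pairwise comparison feedback and return the overall winner.

    `prefer(a, b)` plays the role of the user's relative feedback,
    returning whichever of the two items they like more."""
    wins = {item: 0 for item in items}
    for a, b in combinations(items, 2):
        wins[prefer(a, b)] += 1
    return max(items, key=wins.get)

# Simulated user with latent utilities over three items.
utility = {"x": 1, "y": 3, "z": 2}
best = comparison_recommender(["x", "y", "z"],
                              lambda a, b: a if utility[a] > utility[b] else b)
print(best)  # y
```

A bandit method such as RelativeConUCB would instead choose which pair to query adaptively, trading off exploration against exploitation rather than exhaustively comparing every pair.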
arXiv Detail & Related papers (2022-08-21T08:05:46Z)
- Breaking Feedback Loops in Recommender Systems with Causal Inference [99.22185950608838]
Recent work has shown that feedback loops may compromise recommendation quality and homogenize user behavior.
We propose the Causal Adjustment for Feedback Loops (CAFL), an algorithm that provably breaks feedback loops using causal inference.
We show that CAFL improves recommendation quality when compared to prior correction methods.
arXiv Detail & Related papers (2022-07-04T17:58:39Z)
- Self-Supervised Bot Play for Conversational Recommendation with Justifications [3.015622397986615]
We develop a new two-part framework for training conversational recommender systems.
First, we train a recommender system to jointly suggest items and justify its reasoning with subjective aspects.
We then fine-tune this model to incorporate iterative user feedback via self-supervised bot-play.
arXiv Detail & Related papers (2021-12-09T20:07:41Z)
- Reward Constrained Interactive Recommendation with Natural Language Feedback [158.8095688415973]
We propose a novel constraint-augmented reinforcement learning (RL) framework to efficiently incorporate user preferences over time.
Specifically, we leverage a discriminator to detect recommendations violating user historical preference.
Our proposed framework is general and is further extended to the task of constrained text generation.
arXiv Detail & Related papers (2020-05-04T16:23:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.