Large Language Models as Recommender Systems: A Study of Popularity Bias
- URL: http://arxiv.org/abs/2406.01285v1
- Date: Mon, 3 Jun 2024 12:53:37 GMT
- Title: Large Language Models as Recommender Systems: A Study of Popularity Bias
- Authors: Jan Malte Lichtenberg, Alexander Buchholz, Pola Schwöbel,
- Abstract summary: Popular items are disproportionately recommended, overshadowing less popular but potentially relevant items.
Recent advancements have seen the integration of general-purpose Large Language Models into recommender systems.
Our study explores whether LLMs contribute to or can alleviate popularity bias in recommender systems.
- Score: 46.17953988777199
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The issue of popularity bias -- where popular items are disproportionately recommended, overshadowing less popular but potentially relevant items -- remains a significant challenge in recommender systems. Recent advancements have seen the integration of general-purpose Large Language Models (LLMs) into the architecture of such systems. This integration raises concerns that it might exacerbate popularity bias, given that the LLM's training data is likely dominated by popular items. However, it simultaneously presents a novel opportunity to address the bias via prompt tuning. Our study explores this dichotomy, examining whether LLMs contribute to or can alleviate popularity bias in recommender systems. We introduce a principled way to measure popularity bias by discussing existing metrics and proposing a novel metric that fulfills a series of desiderata. Based on our new metric, we compare a simple LLM-based recommender to traditional recommender systems on a movie recommendation task. We find that the LLM recommender exhibits less popularity bias, even without any explicit mitigation.
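As a hedged illustration of what "measuring popularity bias" can mean in code, the sketch below computes a toy popularity-lift statistic: the mean popularity of recommended items divided by the mean popularity across the catalog. All names and data are hypothetical, and this is deliberately simpler than the metric the paper actually proposes.

```python
from collections import Counter

def popularity_lift(recommendations, interactions):
    """Toy popularity-bias measure: mean popularity of recommended items
    divided by mean popularity over the whole catalog. Values > 1 indicate
    that recommendations skew towards popular items."""
    pop = Counter(interactions)  # item -> interaction count
    catalog_mean = sum(pop.values()) / len(pop)
    rec_mean = sum(pop[i] for i in recommendations) / len(recommendations)
    return rec_mean / catalog_mean

# Hypothetical data: a user-item interaction log and one recommendation list.
interactions = ["matrix", "matrix", "matrix", "inception", "inception", "stalker"]
recs = ["matrix", "inception"]
print(popularity_lift(recs, interactions))  # 1.25 => popularity-biased
```

A value above 1.0 means the recommender over-serves popular items; a list that is unbiased by this (simplistic) measure would hover around 1.0.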
Related papers
- Cognitive Biases in Large Language Models for News Recommendation [68.90354828533535]
This paper explores the potential impact of cognitive biases on large language models (LLMs) based news recommender systems.
We discuss strategies to mitigate these biases through data augmentation, prompt engineering and learning algorithms aspects.
arXiv Detail & Related papers (2024-10-03T18:42:07Z)
- GPTBIAS: A Comprehensive Framework for Evaluating Bias in Large Language Models [83.30078426829627]
Large language models (LLMs) have gained popularity and are being widely adopted by a large user community.
Existing evaluation methods have many constraints, and their results exhibit limited interpretability.
We propose a bias evaluation framework named GPTBIAS that leverages the high performance of LLMs to assess bias in models.
arXiv Detail & Related papers (2023-12-11T12:02:14Z)
- Metrics for popularity bias in dynamic recommender systems [0.0]
Biased recommendations may lead to decisions with adverse effects on individuals, sensitive user groups, and society.
This paper focuses on quantifying popularity bias that stems directly from the output of RecSys models.
We propose four metrics to quantify popularity bias in RecSys over time, in a dynamic setting, across different sensitive user groups (a minimal illustrative computation follows this entry).
arXiv Detail & Related papers (2023-10-12T16:15:30Z)
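The summary above does not spell out the four metrics, so the following is only a generic sketch of the idea under stated assumptions: track the mean popularity of recommended items per time window and per sensitive user group, so that drift in popularity bias becomes visible over time. The function and data are hypothetical.

```python
import statistics
from collections import defaultdict

def popularity_over_time(rec_log, item_popularity):
    """Hypothetical dynamic-setting metric: for each (time window, user group),
    average the popularity of recommended items so drift can be tracked."""
    buckets = defaultdict(list)
    for window, group, item in rec_log:  # e.g. (week, group, item_id)
        buckets[(window, group)].append(item_popularity[item])
    return {key: statistics.mean(vals) for key, vals in buckets.items()}

# Hypothetical log: (time window, sensitive user group, recommended item).
log = [(1, "A", "i1"), (1, "B", "i2"), (2, "A", "i1"), (2, "A", "i3")]
pop = {"i1": 100, "i2": 10, "i3": 1}
print(popularity_over_time(log, pop))
# {(1, 'A'): 100, (1, 'B'): 10, (2, 'A'): 50.5}
```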
- Test Time Embedding Normalization for Popularity Bias Mitigation [6.145760252113906]
Popularity bias is a widespread problem in the field of recommender systems.
We propose 'Test Time Embedding Normalization' as a simple yet effective strategy for mitigating popularity bias (sketched after this entry).
arXiv Detail & Related papers (2023-08-22T08:57:44Z)
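The entry names the technique but not its exact formulation; one plausible reading, sketched below with NumPy, is to L2-normalize item embeddings at scoring time, since embedding norms often correlate with popularity and inflate dot-product scores. The paper's precise recipe may differ.

```python
import numpy as np

def scores_with_test_time_norm(user_vec, item_embs):
    """Score items by dot product after L2-normalizing item embeddings
    at inference time; embedding norms often correlate with popularity,
    so dropping them damps popularity-driven score inflation."""
    norms = np.linalg.norm(item_embs, axis=1, keepdims=True)
    return (item_embs / norms) @ user_vec

# Hypothetical 2-D embeddings: item 0 is "popular" (large norm) but points
# in the same direction as item 1.
items = np.array([[4.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
user = np.array([1.0, 0.2])
print(items @ user)                              # raw scores favour item 0
print(scores_with_test_time_norm(user, items))   # items 0 and 1 now tie
```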
- A Survey on Popularity Bias in Recommender Systems [5.952279576277445]
We discuss the potential reasons for popularity bias and review existing approaches to detect, mitigate, and quantify popularity bias in recommender systems.
We critically discuss today's literature, observing that the research is almost entirely based on computational experiments and on certain assumptions regarding the practical effects of including long-tail items in recommendations.
arXiv Detail & Related papers (2023-08-02T12:58:11Z)
- A Survey on Large Language Models for Recommendation [77.91673633328148]
Large Language Models (LLMs) have emerged as powerful tools in the field of Natural Language Processing (NLP).
This survey presents a taxonomy that categorizes these models into two major paradigms: Discriminative LLMs for Recommendation (DLLM4Rec) and Generative LLMs for Recommendation (GLLM4Rec).
arXiv Detail & Related papers (2023-05-31T13:51:26Z)
- Large Language Models are Zero-Shot Rankers for Recommender Systems [76.02500186203929]
This work aims to investigate the capacity of large language models (LLMs) to act as the ranking model for recommender systems.
We show that LLMs have promising zero-shot ranking abilities but struggle to perceive the order of historical interactions.
We demonstrate that these issues can be alleviated using specially designed prompting and bootstrapping strategies (a prompt-construction sketch follows this entry).
arXiv Detail & Related papers (2023-05-15T17:57:39Z)
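To make the zero-shot ranking setup concrete, here is a minimal prompt-construction sketch. The template, wording, and function name are assumptions for illustration, not the paper's actual prompts.

```python
def build_ranking_prompt(history, candidates):
    """Assemble a zero-shot ranking prompt: recent interactions as
    conditioning, plus a candidate list for the LLM to reorder."""
    lines = ["I've watched the following movies, most recent last:"]
    lines += [f"{i + 1}. {title}" for i, title in enumerate(history)]
    lines.append("Rank these candidate movies by how likely I am to watch them next:")
    lines += [f"- {title}" for title in candidates]
    lines.append("Answer with the candidate titles in ranked order, one per line.")
    return "\n".join(lines)

# Hypothetical watch history and candidate set.
print(build_ranking_prompt(["Alien", "Blade Runner"], ["Dune", "Notting Hill"]))
```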
- The Unfairness of Popularity Bias in Book Recommendation [0.0]
Popularity bias refers to the problem that popular items are recommended frequently while less popular items are recommended rarely or not at all.
We analyze the well-known Book-Crossing dataset and define three user groups based on their tendency towards popular items.
Our results indicate that most state-of-the-art recommendation algorithms suffer from popularity bias in the book domain.
arXiv Detail & Related papers (2022-02-27T20:21:46Z)
- An Adaptive Boosting Technique to Mitigate Popularity Bias in Recommender System [1.5800354337004194]
A typical accuracy measure is biased towards popular items, i.e., it rewards accuracy on popular items more than on non-popular items.
This paper considers a metric that measures popularity bias as the difference in error between popular and non-popular items (computed in the sketch after this entry).
Motivated by fair boosting algorithms for classification, we propose an algorithm that reduces the popularity bias present in the data.
arXiv Detail & Related papers (2021-09-13T03:04:55Z)
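The metric described above is simple enough to compute directly: the difference between mean error on non-popular items and mean error on popular items. The head/tail threshold below is an arbitrary assumption, as are the per-item errors.

```python
import statistics

def popularity_bias_gap(errors, counts, head_threshold=10):
    """Popularity bias as the entry describes it: mean error on non-popular
    (tail) items minus mean error on popular (head) items. The head/tail
    threshold is an arbitrary choice for illustration."""
    head = [e for item, e in errors.items() if counts[item] >= head_threshold]
    tail = [e for item, e in errors.items() if counts[item] < head_threshold]
    return statistics.mean(tail) - statistics.mean(head)

# Hypothetical per-item prediction errors and interaction counts.
errors = {"i1": 0.2, "i2": 0.3, "i3": 0.9, "i4": 0.7}
counts = {"i1": 50, "i2": 20, "i3": 3, "i4": 1}
print(popularity_bias_gap(errors, counts))  # positive => tail items served worse
```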
- Contrastive Learning for Debiased Candidate Generation in Large-Scale Recommender Systems [84.3996727203154]
We show that a popular choice of contrastive loss is equivalent to reducing exposure bias via inverse propensity weighting (a minimal weighted-loss sketch follows this entry).
We further improve upon CLRec and propose Multi-CLRec for accurate multi-intention-aware bias reduction.
Our methods have been successfully deployed in Taobao, where at least four months of online A/B tests and offline analyses demonstrate substantial improvements.
arXiv Detail & Related papers (2020-05-20T08:15:23Z)
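The CLRec abstract above claims an equivalence between a contrastive loss and inverse propensity weighting; without reproducing the CLRec loss itself, the sketch below shows only the inverse-propensity-weighting side of that claim: a negative log-likelihood in which each observed interaction is weighted by the inverse of its item's exposure propensity. All numbers and names are hypothetical.

```python
import math

def ipw_loss(model_probs, observed_items, exposure_probs):
    """Inverse-propensity-weighted negative log-likelihood: interactions
    with heavily exposed (popular) items get small weights 1/propensity,
    so the fit is less dominated by exposure bias."""
    total, weight_sum = 0.0, 0.0
    for item in observed_items:
        w = 1.0 / exposure_probs[item]
        total += -w * math.log(model_probs[item])
        weight_sum += w
    return total / weight_sum

# Hypothetical model probabilities and exposure propensities per item.
probs = {"i1": 0.6, "i2": 0.3}
exposure = {"i1": 0.8, "i2": 0.1}  # i1 heavily exposed, i2 rarely shown
print(ipw_loss(probs, ["i1", "i2", "i2"], exposure))
```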