Test Time Embedding Normalization for Popularity Bias Mitigation
- URL: http://arxiv.org/abs/2308.11288v2
- Date: Fri, 1 Sep 2023 07:17:54 GMT
- Title: Test Time Embedding Normalization for Popularity Bias Mitigation
- Authors: Dain Kim, Jinhyeok Park, Dongwoo Kim
- Abstract summary: Popularity bias is a widespread problem in the field of recommender systems.
We propose 'Test Time Embedding Normalization' as a simple yet effective strategy for mitigating popularity bias.
- Score: 6.145760252113906
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Popularity bias is a widespread problem in the field of recommender systems,
where popular items tend to dominate recommendation results. In this work, we
propose 'Test Time Embedding Normalization' as a simple yet effective strategy
for mitigating popularity bias, which surpasses the performance of the previous
mitigation approaches by a significant margin. Our approach utilizes the
normalized item embedding during the inference stage to control the influence
of embedding magnitude, which is highly correlated with item popularity.
Through extensive experiments, we show that our method combined with the
sampled softmax loss effectively reduces popularity bias compared to previous
approaches for bias mitigation. We further investigate the relationship between
user and item embeddings and find that the angular similarity between
embeddings distinguishes preferable and non-preferable items regardless of
their popularity. The analysis explains the mechanism behind the success of our
approach in eliminating the impact of popularity bias. Our code is available at
https://github.com/ml-postech/TTEN.
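The core idea described above, dividing each item embedding by its L2 norm at inference time so that ranking depends only on angular similarity rather than popularity-correlated magnitude, can be sketched as follows. This is a minimal NumPy illustration with random embeddings, not the authors' implementation; the popularity-correlated norms are simulated by scaling.

```python
import numpy as np

# Toy embeddings (hypothetical): 4 users and 10 items in an 8-dim space.
rng = np.random.default_rng(0)
user_emb = rng.normal(size=(4, 8))
item_emb = rng.normal(size=(10, 8))

# Simulate the observation that embedding magnitude correlates with
# item popularity: later items are "popular" and get larger norms.
item_emb *= np.linspace(0.5, 3.0, 10)[:, None]

# Standard inference: raw dot product, biased toward large-norm items.
scores_biased = user_emb @ item_emb.T

# Test-time embedding normalization: L2-normalize item embeddings at
# inference so scores reflect only the angle between user and item.
item_norm = item_emb / np.linalg.norm(item_emb, axis=1, keepdims=True)
scores_tten = user_emb @ item_norm.T

print("biased top-1 items:", scores_biased.argmax(axis=1))
print("TTEN   top-1 items:", scores_tten.argmax(axis=1))
```

Note that training is unchanged; only the inference-time scoring is modified, which is what makes the strategy cheap to apply on top of an existing model.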
Related papers
- Large Language Models as Recommender Systems: A Study of Popularity Bias [46.17953988777199]
Popular items are disproportionately recommended, overshadowing less popular but potentially relevant items.
Recent advancements have seen the integration of general-purpose Large Language Models into recommender systems.
Our study explores whether LLMs contribute to or can alleviate popularity bias in recommender systems.
arXiv Detail & Related papers (2024-06-03T12:53:37Z)
- Popularity-Aware Alignment and Contrast for Mitigating Popularity Bias [34.006766098392525]
Collaborative Filtering (CF) typically suffers from the challenge of popularity bias due to the uneven distribution of items in real-world datasets.
This bias leads to a significant accuracy gap between popular and unpopular items.
We propose Popularity-Aware Alignment and Contrast (PAAC) to address two challenges.
arXiv Detail & Related papers (2024-05-31T09:14:48Z)
- Take Care of Your Prompt Bias! Investigating and Mitigating Prompt Bias in Factual Knowledge Extraction [56.17020601803071]
Recent research shows that pre-trained language models (PLMs) suffer from "prompt bias" in factual knowledge extraction.
This paper aims to improve the reliability of existing benchmarks by thoroughly investigating and mitigating prompt bias.
arXiv Detail & Related papers (2024-03-15T02:04:35Z)
- Robust Collaborative Filtering to Popularity Distribution Shift [56.78171423428719]
We present a simple yet effective debiasing strategy, PopGo, which quantifies and reduces the interaction-wise popularity shortcut without assumptions on the test data.
On both ID and OOD test sets, PopGo achieves significant gains over the state-of-the-art debiasing strategies.
arXiv Detail & Related papers (2023-10-16T04:20:52Z)
- Ranking with Popularity Bias: User Welfare under Self-Amplification Dynamics [19.59766711993837]
We propose and theoretically analyze a general mechanism by which item popularity, item quality, and position bias jointly impact user choice.
We show that naive popularity-biased recommenders induce linear regret by conflating item quality and popularity.
arXiv Detail & Related papers (2023-05-24T22:38:19Z)
- Cross Pairwise Ranking for Unbiased Item Recommendation [57.71258289870123]
We develop a new learning paradigm named Cross Pairwise Ranking (CPR).
CPR achieves unbiased recommendation without knowing the exposure mechanism.
We prove in theory that this way offsets the influence of user/item propensity on the learning.
arXiv Detail & Related papers (2022-04-26T09:20:27Z)
- The Unfairness of Popularity Bias in Book Recommendation [0.0]
Popularity bias refers to the problem that popular items are recommended frequently while less popular items are recommended rarely or not at all.
We analyze the well-known Book-Crossing dataset and define three user groups based on their tendency towards popular items.
Our results indicate that most state-of-the-art recommendation algorithms suffer from popularity bias in the book domain.
arXiv Detail & Related papers (2022-02-27T20:21:46Z)
- Information-Theoretic Bias Reduction via Causal View of Spurious Correlation [71.9123886505321]
We propose an information-theoretic bias measurement technique through a causal interpretation of spurious correlation.
We present a novel debiasing framework against the algorithmic bias, which incorporates a bias regularization loss.
The proposed bias measurement and debiasing approaches are validated in diverse realistic scenarios.
arXiv Detail & Related papers (2022-01-10T01:19:31Z)
- An Adaptive Boosting Technique to Mitigate Popularity Bias in Recommender System [1.5800354337004194]
A typical accuracy measure is biased towards popular items, i.e., it promotes better accuracy for popular items compared to non-popular items.
This paper considers a metric that measures the popularity bias as the difference in error on popular items and non-popular items.
Motivated by the fair boosting algorithm on classification, we propose an algorithm that reduces the popularity bias present in the data.
arXiv Detail & Related papers (2021-09-13T03:04:55Z)
- User-centered Evaluation of Popularity Bias in Recommender Systems [4.30484058393522]
Recommendation and ranking systems suffer from popularity bias: the tendency of the algorithm to favor a few popular items while under-representing the majority of other items.
In this paper, we show the limitations of the existing metrics to evaluate popularity bias mitigation when we want to assess these algorithms from the users' perspective.
We present an effective approach that mitigates popularity bias from the user-centered point of view.
arXiv Detail & Related papers (2021-03-10T22:12:51Z)
- Towards Debiasing NLU Models from Unknown Biases [70.31427277842239]
NLU models often exploit biases to achieve high dataset-specific performance without properly learning the intended task.
We present a self-debiasing framework that prevents models from mainly utilizing biases without knowing them in advance.
arXiv Detail & Related papers (2020-09-25T15:49:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.