Embedding Cultural Diversity in Prototype-based Recommender Systems
- URL: http://arxiv.org/abs/2412.14329v1
- Date: Wed, 18 Dec 2024 20:57:33 GMT
- Title: Embedding Cultural Diversity in Prototype-based Recommender Systems
- Authors: Armin Moradi, Nicola Neophytou, Florian Carichon, Golnoosh Farnadi
- Abstract summary: Popularity bias in recommender systems can increase cultural overrepresentation by favoring norms from dominant cultures.
In this work, we address popularity bias by identifying demographic biases within prototype-based matrix factorization methods.
Our results demonstrate a 27% reduction in the average rank of long-tail items and a 2% reduction in the average rank of items from underrepresented countries.
- Score: 3.9148550258086843
- Abstract: Popularity bias in recommender systems can increase cultural overrepresentation by favoring norms from dominant cultures and marginalizing underrepresented groups. This issue is critical for platforms offering cultural products, as they influence consumption patterns and human perceptions. In this work, we address popularity bias by identifying demographic biases within prototype-based matrix factorization methods. Using the country of origin as a proxy for cultural identity, we link this demographic attribute to popularity bias by refining the embedding space learning process. First, we propose filtering out irrelevant prototypes to improve representativity. Second, we introduce a regularization technique to enforce a uniform distribution of prototypes within the embedding space. Across four datasets, our results demonstrate a 27% reduction in the average rank of long-tail items and a 2% reduction in the average rank of items from underrepresented countries. Additionally, our model achieves a 2% improvement in HitRatio@10 compared to the state-of-the-art, highlighting that fairness is enhanced without compromising recommendation quality. Moreover, the distribution of prototypes leads to more inclusive explanations by better aligning items with diverse prototypes.
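The abstract's second contribution, a regularizer that spreads prototypes uniformly over the embedding space, can be sketched with a hypersphere-uniformity penalty. This is an illustrative reading of the idea, not the authors' code: the function name, the numpy implementation, and the weight `lam` in the usage line are all assumptions.

```python
import numpy as np

def uniformity_loss(prototypes: np.ndarray, t: float = 2.0) -> float:
    """Penalize clustered prototype embeddings; lower means more uniform.

    prototypes: (K, d) matrix of prototype vectors.
    """
    # Project prototypes onto the unit hypersphere.
    z = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    # Pairwise squared Euclidean distances between prototypes.
    sq = ((z[:, None, :] - z[None, :, :]) ** 2).sum(-1)
    K = z.shape[0]
    off = sq[~np.eye(K, dtype=bool)]  # drop zero self-distances
    # Gaussian-potential uniformity: minimized when prototypes spread apart.
    return float(np.log(np.mean(np.exp(-t * off))))

# Hypothetical usage inside a training loop:
#   total_loss = reconstruction_loss + lam * uniformity_loss(P)
```

Collapsed prototypes (all identical) score 0, while mutually orthogonal prototypes score roughly -2t, so minimizing this term alongside the recommendation loss pushes prototypes apart.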
Related papers
- ComPO: Community Preferences for Language Model Personalization [122.54846260663922]
ComPO is a method to personalize preference optimization in language models.
We collect and release ComPRed, a question answering dataset with community-level preferences from Reddit.
arXiv Detail & Related papers (2024-10-21T14:02:40Z)
- Score Normalization for Demographic Fairness in Face Recognition [16.421833444307232]
Well-known sample-centered score normalization techniques, Z-norm and T-norm, do not improve fairness for high-security operating points.
We extend the standard Z/T-norm to integrate demographic information in normalization.
We show that our techniques generally improve the overall fairness of five state-of-the-art pre-trained face recognition networks.
arXiv Detail & Related papers (2024-07-19T07:51:51Z)
- Advancing Cultural Inclusivity: Optimizing Embedding Spaces for Balanced Music Recommendations [4.276697874428501]
Popularity bias in music recommendation systems can propagate along demographic and cultural axes.
We identify these biases in recommendations for artists from underrepresented cultural groups in prototype-based matrix factorization methods.
Our results demonstrate significant improvements in reducing popularity bias and enhancing demographic and cultural fairness in music recommendations.
arXiv Detail & Related papers (2024-05-27T19:12:53Z)
- Leveraging Prototypical Representations for Mitigating Social Bias without Demographic Information [50.29934517930506]
DAFair is a novel approach to address social bias in language models.
We leverage prototypical demographic texts and incorporate a regularization term during the fine-tuning process to mitigate bias.
arXiv Detail & Related papers (2024-03-14T15:58:36Z)
- Fairness-Aware Structured Pruning in Transformers [14.439885480035324]
We investigate how attention heads impact fairness and performance in pre-trained language models.
We propose a novel method to prune the attention heads that negatively impact fairness while retaining the heads critical for performance.
Our findings demonstrate a reduction in gender bias by 19%, 19.5%, 39.5%, 34.7%, 23%, and 8% across DistilGPT-2, GPT-2, GPT-Neo, and Llama 2 model variants.
arXiv Detail & Related papers (2023-12-24T03:57:52Z)
- Debiasing Vision-Language Models via Biased Prompts [79.04467131711775]
We propose a general approach for debiasing vision-language foundation models by projecting out biased directions in the text embedding.
We show that debiasing only the text embedding with a calibrated projection matrix suffices to yield robust classifiers and fair generative models.
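The core operation behind "projecting out biased directions in the text embedding" is an orthogonal projection that removes the span of the bias directions. The paper uses a calibrated projection matrix; the sketch below shows only the plain orthogonal projection it builds on, with hypothetical names throughout.

```python
import numpy as np

def debias_projection(bias_dirs: np.ndarray) -> np.ndarray:
    """Return P = I - B (B^T B)^{-1} B^T.

    Multiplying an embedding by P removes its component along the
    bias direction(s), leaving the orthogonal complement untouched.
    """
    B = np.atleast_2d(bias_dirs).T  # shape (d, k): k bias directions in d dims
    P = np.eye(B.shape[0]) - B @ np.linalg.inv(B.T @ B) @ B.T
    return P

# Hypothetical usage, assuming `gender_direction` was estimated from
# paired biased prompts (e.g. "a photo of a man" vs "a photo of a woman"):
#   debiased = text_embeddings @ debias_projection(gender_direction).T
```

Because P is idempotent (P @ P = P), applying it once fully removes the bias component, and no image embeddings need to be modified.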
arXiv Detail & Related papers (2023-01-31T20:09:33Z)
- A Robust Bias Mitigation Procedure Based on the Stereotype Content Model [0.0]
We adapt existing work to demonstrate that the Stereotype Content model holds for contextualised word embeddings.
We find the SCM terms are better able to capture bias than demographic terms related to pleasantness.
We present this work as a prototype of a debiasing procedure that aims to remove the need for a priori knowledge of the specifics of bias in the model.
arXiv Detail & Related papers (2022-10-26T08:13:58Z)
- Balancing Biases and Preserving Privacy on Balanced Faces in the Wild [50.915684171879036]
There are demographic biases present in current facial recognition (FR) models.
We introduce our Balanced Faces in the Wild dataset to measure these biases across different ethnic and gender subgroups.
We find that relying on a single score threshold to differentiate between genuine and impostor sample pairs leads to suboptimal results.
We propose a novel domain adaptation learning scheme that uses facial features extracted from state-of-the-art neural networks.
arXiv Detail & Related papers (2021-03-16T15:05:49Z)
- Deep Learning feature selection to unhide demographic recommender systems factors [63.732639864601914]
The matrix factorization model generates factors which do not incorporate semantic knowledge.
DeepUnHide is able to extract demographic information from the users and items factors in collaborative filtering recommender systems.
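The claim that demographic information is recoverable from latent factors can be checked with a simple probe: if even a basic classifier predicts a demographic label from users' factor vectors, those factors encode the attribute. This is a simpler stand-in for DeepUnHide (which uses deep-learning feature selection), with synthetic data and hypothetical names.

```python
import numpy as np

def centroid_probe_accuracy(factors, labels):
    """Nearest-centroid probe: how well do latent factors separate groups?

    factors: (n_users, d) matrix of learned latent factors.
    labels:  (n_users,) demographic label per user.
    """
    factors = np.asarray(factors, dtype=float)
    labels = np.asarray(labels)
    classes = np.unique(labels)
    # One centroid per demographic group.
    centroids = np.stack([factors[labels == c].mean(axis=0) for c in classes])
    # Assign each user to the nearest centroid.
    dists = np.linalg.norm(factors[:, None, :] - centroids[None], axis=-1)
    preds = classes[dists.argmin(axis=1)]
    return float((preds == labels).mean())
```

Accuracy well above the majority-class baseline indicates the factorization has absorbed the demographic attribute, even though it was never an explicit input.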
arXiv Detail & Related papers (2020-06-17T17:36:48Z)
- Enhancing Facial Data Diversity with Style-based Face Aging [59.984134070735934]
Face datasets are typically biased in terms of attributes such as gender, age, and race.
We propose a novel, generative style-based architecture for data augmentation that captures fine-grained aging patterns.
We show that the proposed method outperforms state-of-the-art algorithms for age transfer.
arXiv Detail & Related papers (2020-06-06T21:53:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences arising from its use.